| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
24,129,233 | https://en.wikipedia.org/wiki/Unit%20doublet | In mathematics, the unit doublet is the derivative of the Dirac delta function. It can be used to differentiate signals in electrical engineering: If u1 is the unit doublet, then
x′(t) = x(t) ∗ u1(t),
where ∗ is the convolution operator.
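In discrete time this property can be imitated with ordinary arrays. The sketch below is an illustration added here, not part of the article: the finite-difference kernel [1, −1]/dt stands in for a sampled doublet, and the 5 Hz sine test signal is arbitrary.

```python
# Minimal sketch: differentiate a sampled signal by convolving it with a
# discrete stand-in for the unit doublet (the kernel [1, -1] / dt).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
x = np.sin(2 * np.pi * 5 * t)               # arbitrary test signal

doublet = np.array([1.0, -1.0]) / dt        # discrete doublet surrogate
dx = np.convolve(x, doublet, mode="valid")  # approximates x'(t)

expected = 2 * np.pi * 5 * np.cos(2 * np.pi * 5 * t[:-1])
print(np.max(np.abs(dx - expected)))        # O(dt) error, roughly 0.5 here
```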
The function is zero for all values except zero, where its behaviour is interesting. Its integral over any interval enclosing zero is zero. However, the integral of its absolute value over any region enclosing zero goes to infinity. The function can be thought of as the limiting case of two rectangles, one in the second quadrant and the other in the fourth. The length of each rectangle is k, whereas their breadth is 1/k², with k tending to zero.
References
Generalized functions | Unit doublet | Mathematics | 152 |
1,711,336 | https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20theorem | In mathematics, Apéry's theorem is a result in number theory that states that Apéry's constant ζ(3) is irrational. That is, the number
ζ(3) = 1 + 1/2³ + 1/3³ + 1/4³ + ⋯ = 1.2020569…
cannot be written as a fraction p/q where p and q are integers. The theorem is named after Roger Apéry.
The special values of the Riemann zeta function at the even integers (s = 2, 4, 6, …) can be shown in terms of Bernoulli numbers to be irrational, while it remains open whether the function's values at the odd integers (s = 3, 5, 7, …) are in general rational or not (though they are conjectured to be irrational).
History
Leonhard Euler proved that if n is a positive integer then
1/1^(2n) + 1/2^(2n) + 1/3^(2n) + 1/4^(2n) + ⋯ = q·π^(2n)
for some rational number q. Specifically, writing the infinite series on the left as ζ(2n), he showed
ζ(2n) = (−1)^(n+1) B_{2n} (2π)^(2n) / (2·(2n)!)
where the B_{2n} are the rational Bernoulli numbers. Once it was proved that π^n is always irrational, this showed that ζ(2n) is irrational for all positive integers n.
No such representation in terms of π is known for the so-called zeta constants for odd arguments, the values ζ(2n+1) for positive integers n. It has been conjectured that the ratios of these quantities
ζ(2n+1) / π^(2n+1)
are transcendental for every integer n ≥ 1.
Because of this, no proof could be found to show that the zeta constants with odd arguments were irrational, even though they were (and still are) all believed to be transcendental. However, in June 1978, Roger Apéry gave a talk titled "Sur l'irrationalité de ζ(3)." During the course of the talk he outlined proofs that ζ(3) and ζ(2) were irrational, the latter using methods simplified from those used to tackle the former rather than relying on the expression ζ(2) = π²/6. Due to the wholly unexpected nature of the proof and Apéry's blasé and very sketchy approach to the subject, many of the mathematicians in the audience dismissed the proof as flawed. However, Henri Cohen, Hendrik Lenstra, and Alfred van der Poorten suspected Apéry was on to something and set out to confirm his proof. Two months later they finished verification of Apéry's proof, and on August 18 Cohen delivered a lecture giving full details of the proof. After the lecture Apéry himself took to the podium to explain the source of some of his ideas.
Apéry's proof
Apéry's original proof was based on the well-known irrationality criterion of Peter Gustav Lejeune Dirichlet, which states that a number ξ is irrational if there are infinitely many coprime integers p and q such that
|ξ − p/q| < c / q^(1+δ)
for some fixed c, δ > 0.
The starting point for Apéry was the series representation of ζ(3) as
ζ(3) = (5/2) Σ_{n≥1} (−1)^(n−1) / (n³ · C(2n, n)),
where C(2n, n) denotes the central binomial coefficient.
Roughly speaking, Apéry then defined a sequence c_{n,k} which converges to ζ(3) about as fast as the above series, specifically
c_{n,k} = Σ_{m=1..n} 1/m³ + Σ_{m=1..k} (−1)^(m−1) / (2m³ · C(n, m) · C(n+m, m)).
He then defined two more sequences a_n and b_n whose quotient a_n/b_n is, roughly, a weighted average of the c_{n,k}. These sequences were
a_n = Σ_{k=0..n} c_{n,k} · C(n, k)² · C(n+k, k)²
and
b_n = Σ_{k=0..n} C(n, k)² · C(n+k, k)².
The sequence a_n/b_n converges to ζ(3) fast enough to apply the criterion, but unfortunately a_n is not in general an integer. Nevertheless, Apéry showed that even after multiplying a_n and b_n by a suitable integer to cure this problem the convergence was still fast enough to guarantee irrationality.
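These definitions can be checked numerically. The following sketch is illustrative only (it uses the formulas as reconstructed above, with Python's exact rational arithmetic; it is not Apéry's own computation):

```python
# Compute Apéry's sequences from their defining sums and watch the
# quotient a_n / b_n converge rapidly to zeta(3) = 1.2020569...
from fractions import Fraction
from math import comb

def c(n: int, k: int) -> Fraction:
    """Apéry's auxiliary partial sum c_{n,k} (as reconstructed above)."""
    s = sum(Fraction(1, m ** 3) for m in range(1, n + 1))
    s += sum(Fraction((-1) ** (m - 1),
                      2 * m ** 3 * comb(n, m) * comb(n + m, m))
             for m in range(1, k + 1))
    return s

def apery_pair(n: int):
    a = sum(c(n, k) * comb(n, k) ** 2 * comb(n + k, k) ** 2
            for k in range(n + 1))      # a_n: not an integer in general
    b = sum(comb(n, k) ** 2 * comb(n + k, k) ** 2
            for k in range(n + 1))      # b_n: always an integer
    return a, b

ZETA3 = 1.2020569031595942854           # reference value of zeta(3)
for n in (1, 2, 5, 10):
    a, b = apery_pair(n)
    print(n, float(a) / b, abs(float(a) / b - ZETA3))
```

For n = 1 this yields a_1 = 6 and b_1 = 5 (quotient 1.2), and the error then shrinks geometrically, which is exactly the speed the irrationality criterion requires.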
Later proofs
Within a year of Apéry's result an alternative proof was found by Frits Beukers, who replaced Apéry's series with integrals involving the shifted Legendre polynomials P̃_n(x). Using a representation that would later be generalized to Hadjicostas's formula, Beukers showed that
∫₀¹ ∫₀¹ [ −ln(xy) / (1 − xy) ] · P̃_n(x) · P̃_n(y) dx dy = (A_n + B_n ζ(3)) / lcm(1, …, n)³
for some integers A_n and B_n. Using partial integration and the assumption that ζ(3) was rational and equal to a/b, Beukers eventually derived the inequality
0 < 1/b ≤ 4·(4/5)^n,
which is a contradiction since the right-most expression tends to zero as n → ∞, and so must eventually fall below 1/b.
A more recent proof by Wadim Zudilin is more reminiscent of Apéry's original proof, and also has similarities to a fourth proof by Yuri Nesterenko. These later proofs again derive a contradiction from the assumption that ζ(3) is rational by constructing sequences that tend to zero but are bounded below by some positive constant. They are somewhat less transparent than the earlier proofs, since they rely upon hypergeometric series.
Higher zeta constants
Apéry and Beukers could simplify their proofs to work on ζ(2) as well, thanks to the series representation
ζ(2) = 3 Σ_{n≥1} 1 / (n² · C(2n, n)).
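This representation is easy to sanity-check numerically (an illustrative snippet, not from the article); the central binomial coefficients grow like 4^n, so a few dozen terms already give full double precision:

```python
# Quick check: 3 * sum 1/(n^2 * C(2n, n)) reproduces zeta(2) = pi^2 / 6.
from math import comb, pi

s = 3 * sum(1 / (n ** 2 * comb(2 * n, n)) for n in range(1, 40))
print(s, pi ** 2 / 6)  # both print as 1.6449340668...
```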
Due to the success of Apéry's method a search was undertaken for a number ξ with the property that
ζ(5) = ξ Σ_{n≥1} (−1)^(n−1) / (n⁵ · C(2n, n)).
If such a ξ were found, then the methods used to prove Apéry's theorem would be expected to work on a proof that ζ(5) is irrational. Unfortunately, extensive computer searching has failed to find such a constant, and in fact it is now known that if ξ exists and is an algebraic number of degree at most 25, then the coefficients in its minimal polynomial must be enormous, at least 10^383, so extending Apéry's proof to work on the higher odd zeta constants does not seem likely to work.
Work by Wadim Zudilin and Tanguy Rivoal has shown that infinitely many of the numbers ζ(2n+1) must be irrational, and even that at least one of the numbers ζ(5), ζ(7), ζ(9), and ζ(11) must be irrational. Their work uses linear forms in values of the zeta function and estimates upon them to bound the dimension of a vector space spanned by values of the zeta function at odd integers. Hopes that Zudilin could cut his list further to just one number did not materialise, but work on this problem is still an active area of research. Higher zeta constants have application to physics: they describe correlation functions in quantum spin chains.
References
External links
Zeta and L-functions
Theorems in number theory | Apéry's theorem | Mathematics | 1,104 |
25,920,713 | https://en.wikipedia.org/wiki/Official%20community%20plan | In Canada, an official community plan is a comprehensive plan created by an incorporated municipality which dictates public policy in terms of transportation, utilities, land use, recreation, and housing. OCPs typically encompass large geographical areas, a broad range of topics, and cover a long-term time horizon. The process of creating an OCP is today often referred to as a Community Vision.
In the United States such a plan is known as a comprehensive plan.
In some large jurisdictions and metropolitan areas experiencing significant growth, regional transportation plans are made that work in conjunction with municipal OCPs.
Official community plan is the formal term for a document created by an incorporated municipality and filed with the provincial government, usually the Ministry of Municipal Affairs.
OCPs have to be periodically updated to remain relevant. For example, the City of North Vancouver created an Official Community Plan in 1980, 1992, and again in 2002. When objectives of the plan have been achieved new objectives are set. For example, the City of North Vancouver in Metro Vancouver states as its achievements the construction of 5,000 units of housing in the city center, commercial and institutional development, a balanced mix of transportation modes, modern telecommunications infrastructure, a high percentage of multifamily housing, an accessible waterfront, and a balance between jobs and labour force.
Community planning may also be done on a smaller scale. The resulting plan is not an official community plan but is known as a neighborhood plan. In Vancouver, such neighborhood plans are also known as community visions. The primary motive for these neighborhood plans in Vancouver was to find ways to accommodate more housing (or new housing choices) in existing neighborhoods in a way sensitive and responsive to the concerns of existing residents. The official statement is vaguer:
"THAT Council and Departments use the ... Community vision directions to
help guide policy decisions, corporate work, priorities, budgets and capital plans in
this community."
References
External links
List of Official Community Plans in Metro Vancouver:
City of Maple Ridge
City of North Vancouver
Urban planning | Official community plan | Engineering | 405 |
16,936,273 | https://en.wikipedia.org/wiki/Kappadione | Kappadione is a vitamin K derivative used for the treatment of side effects of vitamin K antagonists such as warfarin, prophylaxis and treatment of vitamin K deficiency bleeding, and hypoprothrombinemia due to various causes. It was manufactured by Eli Lilly and Company. Chemically, it is menadiol sodium diphosphate. It was approved by the US Food and Drug Administration prior to 1982 and marketed by Lilly. It has since been discontinued and is not available in North America.
References
Drugs developed by Eli Lilly and Company
Organophosphates
Naphthol esters
Organic sodium salts | Kappadione | Chemistry | 128 |
66,683,662 | https://en.wikipedia.org/wiki/ABL%20Space%20Systems | ABL Space Systems is an American aerospace and launch service provider, based in El Segundo, California, that manufactures deployable launch vehicles and infrastructure for missile defense; it formerly provided launches of commercial small satellites into orbit. The company manufactures its components in the United States.
ABL Space Systems manufactures the RS1, a two-stage expendable launch vehicle, and GS0, a deployable launch pad.
History
ABL Space Systems was founded in 2017 by Harry O'Hanley and Dan Piemont, former SpaceX and Morgan Stanley employees. Their RS1 rocket has two stages and offers a maximum capacity of 1,350 kg to low Earth orbit (LEO).
In 2018, ABL Space Systems signed a lease with Camden County, Georgia, for future operations in Spaceport Camden.
In 2019, the company signed with Spaceport America in New Mexico to locate some ABL testing operations and facilities there. As of October 2022, the company makes no mention of this location on their facility list.
In 2021 ABL leased facilities at the Port of Long Beach formerly occupied by Sea Launch.
In 2023, ABL was working on a larger rocket to compete for National Security Space Launch contracts.
By 2024, ABL had raised more than $500 million for the development and operation of its rocket, a sum made up of both venture funding and secured launch contracts with major clients.
In November 2024, after a string of failures, ABL announced that it was exiting the commercial orbital launch market and pivoting towards military applications, potentially leveraging its previous launch vehicles and engines for missile defense technologies. ABL also closed its El Segundo office and Mojave test site, relocating entirely to its Long Beach facility.
See also
References
External links
RS-1 rocket details
launch system details
Private spaceflight companies
Commercial launch service providers
Rocket engine manufacturers of the United States
Aerospace companies of the United States
Companies based in El Segundo, California
Aerospace technologies
Rocket engines | ABL Space Systems | Technology | 402 |
49,748,838 | https://en.wikipedia.org/wiki/DcuC%20family | The C4-dicarboxylate uptake C family or DcuC family (TC# 2.A.61) is a family of transmembrane ion transporters found in bacteria. A representative list of proteins belonging to the DcuC family can be found in the Transporter Classification Database.
An anaerobic C4-dicarboxylate transporter (DcuC) of E. coli (TC# 2.A.61.1.1) has 14 putative transmembrane regions, is induced only under anaerobic conditions, and is not repressed by glucose. DcuC may therefore function as a succinate efflux system during anaerobic glucose fermentation. However, when overexpressed, it can replace either DcuA or DcuB in catalysing fumarate-succinate exchange and fumarate uptake. DcuC shows the same transport modes as DcuA and DcuB (exchange, uptake, and presumably efflux of C4-dicarboxylates).
The reactions probably catalyzed by the E. coli DcuC protein are:
C4-dicarboxylate (out) + nH+ (out) → C4-dicarboxylate (in) + nH+ (in)
C4-dicarboxylate1 (out) + C4-dicarboxylate2 (in) ⇌ C4-dicarboxylate1 (in) + C4-dicarboxylate2 (out).
See also
Dicarboxylate
Dcu family
References
Protein families
Transmembrane transporters | DcuC family | Biology | 341 |
31,764,761 | https://en.wikipedia.org/wiki/Act%20as%201%20Campaign | Act as 1 was a domestic violence prevention campaign led by the Queensland Government in Australia. The campaign's core message was that domestic and family violence affects women, men, children, families, neighbours, workplaces and communities, and is estimated to cost the Queensland economy $2.7 to $3.2 billion annually.
Act as 1 gathered community support to bring family violence out from behind closed doors. The campaign encouraged neighbours, friends, family members, colleagues and community members to take a stand against family violence and support those affected.
The campaign pointed out that we may all know someone who is experiencing family violence, and suggested that anyone could be the "1" to spark a change and make a difference: the more who Act as 1, the more powerful the message.
Many community groups supported the government campaign including SunnyKids which developed a television campaign raising awareness of family violence and its impact on children in particular.
The campaign also identified five ways to "Act as 1":
Support someone
Follow the campaign on Facebook and Twitter
Attend or Hold Events
Talk about it
Educate Yourself
The Act as 1 campaign also raised awareness about elder abuse during the month of May. The Domestic and Family Violence Prevention Act in Queensland recognises elder abuse as a form of Domestic and Family Violence.
References
Abuse
Domestic violence-related organizations
Violence against women in Australia
Child abuse-related organizations
Organisations based in Queensland | Act as 1 Campaign | Biology | 278 |
56,102,852 | https://en.wikipedia.org/wiki/Vitasti | A vitasti is an ancient Indian unit of length, approximately 21 centimeters.
Etymology
The Sanskrit word vitasti, meaning "span", is an ancient Indo-Iranian term. It is derived from the Proto-Indo-Iranian term *witasti- and is related to Avestan vītasti, Kurdish bist and Persian bidast, all meaning "span".
Measurement
According to the Vāstuśāstra, a vitasti is equal to 12 aṅgulas. It is defined as the long span between the extended thumb and the little finger or as the distance between the wrist and the fingertips.
Equivalence to other units of length
8 Paramāṇu = 1 Rathadhūli (chariot-dust)
8 Rathadhūli = 1 Vālāgra (hair-end)
8 Vālāgra = 1 Likṣā (nit)
8 Likṣā = 1 Yūkā (louse)
8 Yūkā = 1 Yava (barley)
8 Yava = 1 Aṅgula (finger)
12 Aṅgula = 1 Vitasti (span)
2 Vitasti = 1 Kiṣku (cubit)
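For illustration, the chain can be encoded directly; the sketch below is a toy example (the diacritic-free spellings are this example's own choice):

```python
# Each entry maps a unit to (multiplier, next larger unit) per the chain above.
CHAIN = {
    "paramanu":   (8, "rathadhuli"),
    "rathadhuli": (8, "valagra"),
    "valagra":    (8, "liksha"),
    "liksha":     (8, "yuka"),
    "yuka":       (8, "yava"),
    "yava":       (8, "angula"),
    "angula":     (12, "vitasti"),
    "vitasti":    (2, "kishku"),
}

def paramanu_per(unit: str) -> int:
    """How many paramanu make up one of `unit`, walking up the chain."""
    n, u = 1, "paramanu"
    while u != unit:
        factor, u = CHAIN[u]
        n *= factor
    return n

print(paramanu_per("vitasti"))  # 12 * 8**6 = 3,145,728
print(paramanu_per("kishku"))   # twice that: 6,291,456
```

So one vitasti corresponds to 12 × 8⁶ = 3,145,728 paramāṇu.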
References
Units of measurement | Vitasti | Mathematics | 232 |
1,140,830 | https://en.wikipedia.org/wiki/Consistency%20%28database%20systems%29 | In database systems, consistency (or correctness) refers to the requirement that any given database transaction must change affected data only in allowed ways. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors cannot result in the violation of any defined database constraints.
In a distributed system, referencing the CAP theorem, consistency can also be understood as the guarantee that after a successful write, update, or delete of a record, any read request immediately receives the latest value of that record.
As an ACID guarantee
Consistency is one of the four guarantees that define ACID transactions; however, significant ambiguity exists about the nature of this guarantee. It is defined variously as:
The guarantee that database constraints are not violated, particularly once a transaction commits.
The guarantee that any transactions started in the future necessarily see the effects of other transactions committed in the past.
As these various definitions are not mutually exclusive, it is possible to design a system that guarantees "consistency" in every sense of the word, as most relational database management systems in common use today arguably do.
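As a concrete illustration of the first sense, the sketch below (using Python's standard sqlite3 module; the account schema and amounts are invented for the demo) shows a transaction being rolled back because committing it would violate a declared constraint:

```python
# A CHECK constraint stops a transaction from writing invalid data:
# either both updates take effect, or the database is left untouched.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
             "balance INTEGER NOT NULL CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # one transaction: commit on success, rollback on exception
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
except sqlite3.IntegrityError as e:
    print("rolled back:", e)  # CHECK failed: balance would go negative

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# -> [(1, 100), (2, 0)]  (the defined rule was never violated)
```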
As a CAP trade-off
The CAP theorem is based on three trade-offs, one of which is "atomic consistency" (shortened to "consistency" for the acronym), about which the authors note, "Discussing atomic consistency is somewhat different than talking about an ACID database, as database consistency refers to transactions, while atomic consistency refers only to a property of a single request/response operation sequence. And it has a different meaning than the Atomic in ACID, as it subsumes the database notions of both Atomic and Consistent." Under the CAP theorem, a system can guarantee only two of the three properties: consistency, availability, and partition tolerance. Consistency may therefore have to be traded off in some database systems.
See also
Consistency model
CAP theorem
Referential integrity
Eventual consistency
References
Data management
Transaction processing | Consistency (database systems) | Technology | 416 |
17,633,579 | https://en.wikipedia.org/wiki/Topological%20combinatorics | The mathematical discipline of topological combinatorics is the application of topological and algebro-topological methods to solving problems in combinatorics.
History
The discipline of combinatorial topology used combinatorial concepts in topology, and in the early 20th century this turned into the field of algebraic topology.
In 1978 the situation was reversed—methods from algebraic topology were used to solve a problem in combinatorics—when László Lovász proved the Kneser conjecture, thus beginning the new field of topological combinatorics. Lovász's proof used the Borsuk–Ulam theorem and this theorem retains a prominent role in this new field. This theorem has many equivalent versions and analogs and has been used in the study of fair division problems.
In another application of homological methods to graph theory, Lovász proved both the undirected and directed versions of a conjecture of András Frank: given a k-connected graph G, k vertices v₁, …, v_k of G, and k positive integers n₁, …, n_k summing to the number of vertices of G, there exists a partition of the vertex set into classes V₁, …, V_k such that v_i ∈ V_i, |V_i| = n_i, and each V_i spans a connected subgraph.
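For a toy illustration of what the statement asserts (a brute-force search on a small graph, nothing like Lovász's homological proof):

```python
# Check the partition property on the 4-cycle C4 (which is 2-connected),
# with anchors v1 = 1, v2 = 3 and sizes n1 = 1, n2 = 3 (summing to |V| = 4).
from itertools import combinations

V = {1, 2, 3, 4}
EDGES = {(1, 2), (2, 3), (3, 4), (4, 1)}

def connected(vs: set) -> bool:
    """True if `vs` induces a connected subgraph of (V, EDGES)."""
    if not vs:
        return False
    seen, stack = set(), [min(vs)]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        for a, b in EDGES:
            if a == v and b in vs:
                stack.append(b)
            elif b == v and a in vs:
                stack.append(a)
    return seen == vs

for part in combinations(sorted(V), 1):     # all candidate classes of size n1
    s1, s2 = set(part), V - set(part)
    if 1 in s1 and 3 in s2 and connected(s1) and connected(s2):
        print(s1, s2)  # {1} {2, 3, 4}: anchors placed, both classes connected
        break
```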
In 1987 the necklace splitting problem was solved by Noga Alon using the Borsuk–Ulam theorem. It has also been used to study complexity problems in linear decision tree algorithms and the Aanderaa–Karp–Rosenberg conjecture. Other areas include topology of partially ordered sets and Bruhat orders.
Additionally, methods from differential topology now have a combinatorial analog in discrete Morse theory.
See also
Sperner's lemma
Discrete exterior calculus
Topological graph theory
Combinatorial topology
Finite topological space
References
Further reading
Combinatorics
Topology
Algebraic topology | Topological combinatorics | Physics,Mathematics | 340 |
5,026,180 | https://en.wikipedia.org/wiki/Nu%20Canis%20Majoris | The Bayer designation Nu Canis Majoris (ν CMa / ν Canis Majoris) is shared by three star systems in the constellation Canis Major:
ν1 Canis Majoris
ν2 Canis Majoris
ν3 Canis Majoris
In Chinese astronomy the three systems are distinguished by their membership in different asterisms. ν1 Canis Majoris was not a member of any asterism. ν2 Canis Majoris stood alone as the asterism 野雞 (Yě Jī), the Wild Cockerel, while ν3 Canis Majoris was a member of the asterism 軍市 (Jūn Shì), the Market for Soldiers. Both asterisms lay within the Well mansion.
References
Canis Major
Canis Majoris, Nu | Nu Canis Majoris | Astronomy | 150 |
3,396,190 | https://en.wikipedia.org/wiki/Pinch%20%28unit%29 | A pinch is a small, indefinite amount of a substance, typically a powder like salt, sugar, spice, or snuff. It is the "amount that can be taken between the thumb and forefinger".
Some manufacturers of measuring spoons and some U.S. cookbooks give more precise equivalents, typically 1⁄16 US customary teaspoon, while some sources define it as 1⁄8 teaspoon or another small fraction. There is no generally accepted standard.
In the United Kingdom, a pinch is traditionally one UK salt spoon, the equivalent of 1⁄4 UK teaspoon. A UK salt spoon is an amount of space that can accommodate 15 British imperial minims (1⁄4 British imperial fluid drachm or 1⁄32 British imperial fluid ounce; about 14.41 US customary minims (0.24 US customary fluid dram or 0.03 US customary fluid ounce) or 0.89 millilitres) of liquid.
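For illustration, the stated equivalences can be checked with a little arithmetic (values as given above; the constants are standard conversion factors):

```python
# Express the UK pinch (one salt spoon = 15 imperial minims) in millilitres
# and in US customary teaspoons.
IMPERIAL_MINIM_ML = 0.0591939  # 1 British imperial minim in millilitres
US_TEASPOON_ML = 4.92892       # 1 US customary teaspoon in millilitres

pinch_ml = 15 * IMPERIAL_MINIM_ML
print(round(pinch_ml, 2))                   # 0.89 mL, as stated
print(round(pinch_ml / US_TEASPOON_ML, 3))  # about 0.18 US teaspoon
```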
References
Customary units of measurement in the United States
Cooking weights and measures
Units of volume
Imperial units | Pinch (unit) | Mathematics | 195 |
15,294,203 | https://en.wikipedia.org/wiki/Paragyromitra%20ambigua | Paragyromitra ambigua is an ascomycete species of fungus in the family Discinaceae, related to the false morel Gyromitra esculenta. The species is found in North America, where it produces fruit bodies (mushrooms) that grow on the ground. The edibility of the fruit bodies is not known with certainty, and it is not recommended for consumption.
See also
Gyromitrin, a toxic chemical found in Gyromitra fungi
Morchella, the true morels
References
Discinaceae
Fungi described in 1881
Fungi of North America
Fungus species | Paragyromitra ambigua | Biology | 122 |
58,832,892 | https://en.wikipedia.org/wiki/XZ%20Andromedae | XZ Andromedae (also known as XZ And) is a binary star in the constellation Andromeda. Its maximum apparent visual magnitude is 9.91, but drops down to 12.45 every 1.357 days. Its variability matches the behaviour of Algol variable stars.
System
The primary star of the system has a mass of 3.2 solar masses and a spectral type of A4IV-V, meaning that it has characteristics intermediate between a main-sequence star and a subgiant. The secondary is less massive (1.3 solar masses) but larger than the primary, so it is an evolved subgiant star, and its spectral type is G5IV. The secondary component will likely evolve into a white dwarf before the primary leaves the main sequence. Since 2019, it has been suspected that the eclipsing binary is orbited by an additional two similar stars in a 1:3 mean-motion resonance, with periods of 33.43 and 100.4 years.
Variability
The variability of XZ Andromedae was discovered by Henrietta Levitt by examining photographs taken from 1916 to 1919. Variability was confirmed by Arville D. Walker and Priscilla Fairfield. The discovery was announced by Harlow Shapley in 1923. The star, originally known as BD+41 376, received the variable star designation XZ Andromedae in 1924.
The photometric period of an Algol variable matches the orbital period of the system. However, slight period variations have been observed in XZ Andromedae that can be reproduced with three different cycles of 137.5, 36.8 and 11.2 years, respectively. Each of them could be the effect of another faint body orbiting the binary system, but one of the two shorter cycles could also be an effect of magnetic interaction between the stars (the Applegate mechanism).
Other research states that the long cycle is instead a long-term period increase caused by mass transfer from the secondary (that fills its Roche lobe) to the primary component.
References
Andromeda (constellation)
Andromedae, XZ
Durchmusterung objects
J01565151+4206021
Algol variables | XZ Andromedae | Astronomy | 436 |
49,214,865 | https://en.wikipedia.org/wiki/Pholiota%20marthae | Pholiota marthae is a species of agaric fungus in the family Strophariaceae. Found in Argentina, it was described as new to science by mycologist Rolf Singer in 1969.
See also
List of Pholiota species
References
External links
Fungi described in 1969
Fungi of Argentina
Strophariaceae
Taxa named by Rolf Singer
Fungus species | Pholiota marthae | Biology | 72 |
1,255,277 | https://en.wikipedia.org/wiki/Figure%E2%80%93ground%20%28perception%29 | Figure–ground organization is a type of perceptual grouping that is a vital necessity for recognizing objects through vision. In Gestalt psychology it is known as identifying a figure from the background. For example, black words on a printed paper are seen as the "figure", and the white sheet as the "background".
Gestalt psychology
The Gestalt theory was founded in the 20th century in Austria and Germany as a reaction against the associationist and structural schools' atomistic orientation. In 1912, the Gestalt school was formed by Max Wertheimer, Wolfgang Köhler, and Kurt Koffka. The word "gestalt" is a German word translated to English as "pattern" or "configuration." Gestalt concepts can also be referred to as "holism." Gestalt Psychologists were attempting to humanize what was considered a sterile approach. Gestalt psychology establishes that the whole of anything is greater than its parts. The concepts explored by Wertheimer, Köhler, and Koffka in the 20th century established the foundation for the modern study of perception.
"The Gestalt concept is that "not only movement, or process as such, but also the direction and distribution of process is determined dynamically by interaction." Sensory organization is not dependent upon isolated stimuli and local stimulation, but upon the relative properties of stimulation and the dynamical context."
Wertheimer described holism as the "fundamental formula" of Gestalt psychology: "There are wholes, the behavior of which is not determined by that of their individual elements, but where the part-processes are themselves determined by the intrinsic nature of the whole."
Examples
The Rubin vase faces–vase drawing that Danish psychologist Edgar Rubin described exemplifies one of the key aspects of figure–ground organization, edge-assignment and its effect on shape perception. In the faces–vase drawing, the perceived shape depends critically on the direction in which the border (edge) between the black and white regions is assigned. If the edges between the black and white regions are assigned inward, then the central white region is seen as a vase shape in front of a black background. No faces are perceived in this case. On the other hand, if the edges are assigned outward, then the two black profile faces are perceived on a white background, and no vase shape is perceived. The human visual system will settle on either of the interpretations of the Rubin vase and alternate between them, a phenomenon known as multistable perception. Functional brain imaging shows that, when people see the Rubin image as a face, there is activity in the temporal lobe, specifically in the face-selective region.
An additional example is the "My Wife and My Mother-in-Law" illusion drawing. The image is famous for being reversible. "The viewer may either observe a young girl with her head turned to the right or an old woman with a large nose and protruding chin, depending on one's perspective."
The Flag of Canada has also been cited as an example of figure–ground reversal, in which the background edges of the maple leaf can also be seen as two faces arguing.
Development
Figure–ground perception precedes all other visual perceptual skills and is one of the first to develop in a young baby. The development of perceptual organization begins as early as infancy in human beings. With regard to nature versus nurture, concepts such as "lightness" and "proximity" may develop as early as birth, but recognizing "form similarity" may not be functional until activated by particular experiences.
Three- to four-month-olds respond to differences in lightness rather than differences in form similarity. It is suggested that scaffolding (the development of new skills over time based on the building of other skills) is responsible for the development of perceptual organization. Environment plays a major role in the development of figure–ground perception.
The development of figure–ground perception begins the day the baby can focus on an object. The faces of caregivers, parents, and familiar objects are the first to be focused on and understood. As babies develop, they learn to distinguish the objects they desire from their surroundings. Sitting up, crawling, and walking present ample opportunities to practice the skill. Between the ages of 2 and 4, the skill can be further cultivated by teaching the child to group or sort items.
Perceptual process
The perceptual decision in which the brain decides which item is the figure and which are part of the ground in a visual scene can be based on many cues, all of which are of a probabilistic nature. For instance, size assists in distinguishing between the figure and the ground, as smaller regions are often (but not always) figures. Object shape can assist in distinguishing figure from ground because figures tend to be convex. Movement also helps; the figure may be moving against a static environment. Color is also a cue because the background tends to continue as one color behind potentially multiple foreground figures, whose colors may vary. Edge assignment also helps; if the edge belongs to the figure, it defines the shape while the background exists behind the shape. However, it is sometimes difficult to distinguish between the two because the edge that would separate figure from ground is part of neither, equally defining both the figure and the background.
The LOC (lateral occipital cortex) is highly important for figure–ground perception. This region of the visual cortex (located lateral to the fusiform gyrus and extending anteriorly and ventrally) has consistently shown stronger activation in response to objects versus non-objects.
Evidently, the process of distinguishing figure from ground (sometimes called figure–ground segmentation) is inherently probabilistic, and the best that the brain can do is to take all relevant cues into account to generate a probabilistic best-guess. In this light, Bayesian figure–ground segmentation models have been proposed to simulate the probabilistic inference by which the brain may distinguish figure from ground.
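A minimal sketch of this idea follows (the cue set and likelihood ratios are invented for illustration, not taken from any published model): independent cues multiply into posterior odds that a region is the figure.

```python
# Naive Bayesian cue combination for figure-ground assignment.
# Each ratio is P(cue observed | region is figure) / P(cue | region is ground).
CUE_LIKELIHOOD_RATIOS = {   # invented numbers, for illustration only
    "smaller_region": 2.0,
    "convex_shape": 3.0,
    "moving": 1.5,
}

def p_figure(prior: float, cues: list[str]) -> float:
    """Posterior probability that the region is the figure, given cues."""
    odds = prior / (1.0 - prior)
    for cue in cues:
        odds *= CUE_LIKELIHOOD_RATIOS[cue]
    return odds / (1.0 + odds)

print(p_figure(0.5, ["smaller_region", "convex_shape"]))  # 0.857...
```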
Subjective factors can also influence figure–ground perception. For instance, if a viewer has the intention to perceive one of the two regions as the figure, it will likely alter their ability to analyze the two regions objectively. In addition, if a viewer's gaze is fixated on a particular region, the viewer is more likely to view the fixated region as the figure. Although subjective factors can alter the probability of seeing the figure on one particular side of an edge, they tend not to overpower compositional cues.
Artistic applications
Figure–ground organization is used to help artists and designers in composition of a 2D piece. Figure–ground reversal may be used as an intentional visual design technique in which an existing image's foreground and background colors are purposely swapped to create new images.
Non-visual
Figure–ground perception can be expanded from visual perception to include non-visual concepts such as melody/harmony, subject/background and positive/negative space. The concept of figure and ground fully depends on the observer and not on the item itself.
In the typical sonic scenarios people encounter, auditory figure and ground signals often overlap in time as well as in frequency content. In these situations, auditory objects are established by integrating sound components both over time and frequency. A 2011 study suggests that the auditory system possesses mechanisms that are sensitive to such cross-frequency and cross-time correlations. Results of this study demonstrated significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure–ground decomposition.
In crowded rooms or parties, a person is able to zero in on the conversation they are having with one person (figure) while drowning out the background noise (ground). This can also be referred to as the "cocktail party effect."
Figure–ground segregation in hearing is not automatic; rather, it requires attention and draws on resources that are shared across vision and audition.
Types of figure–ground problems
There are three types of figure–ground problems:
The figure and the ground compete.
The figure should be the ground and the ground should be the figure.
The figure and ground create an optical illusion.
See also
Composition (visual arts)
Ma (negative space)
Negative space
White space (visual arts)
References
External links
Figure Ground, a puzzle game that plays on the figure–ground illusion.
Design
Optical illusions
Dichotomies
Perception
Cognitive psychology | Figure–ground (perception) | Physics,Engineering,Biology | 1,725 |
49,817,168 | https://en.wikipedia.org/wiki/Germanium-vacancy%20center%20in%20diamond | The germanium-vacancy center (Ge-V) is an optically active defect in diamond, which can be created by doping germanium into diamond during its growth or by implanting germanium ions into diamond after its growth. Its properties are similar to those of the silicon-vacancy center in diamond (SiV). Ge-V can behave as a single-photon source and shows potential for quantum and nanoscience applications due to its narrow zero-phonon line (ZPL) and minimal phononic-sideband (compared to that of the nitrogen-vacancy center (NV)).
Properties
Ge-V is predicted to consist of one germanium atom situated between two adjacent lattice vacancies, with the same D3d point-group symmetry as SiV. It has a single ZPL at 602 nm (2.059 eV) at room temperature, which splits into two components separated by 0.67 meV at low temperature (10 K). The Ge-V has an excited-state lifetime of 1.4–5.5 ns.
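The quoted wavelength and photon energy are mutually consistent, since E = hc/λ; a quick check (illustrative only):

```python
# Photon energy of the 602 nm zero-phonon line: E = h*c / lambda.
H_EV_S = 4.135667696e-15    # Planck constant in eV*s
C_NM_PER_S = 2.99792458e17  # speed of light in nm/s

energy_ev = H_EV_S * C_NM_PER_S / 602.0
print(f"{energy_ev:.4f} eV")  # 2.0595 eV, matching the 2.059 eV above
```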
Formation
Ge-V can be created during the diamond growth, or by ion implantation and subsequent annealing at 800 °C. The former way results in lower lattice strain, as revealed by the spread in the position and width of the Ge-V ZPL.
References
Diamond
Spintronics
Spectroscopy
Crystallographic defects
Quantum information science | Germanium-vacancy center in diamond | Physics,Chemistry,Materials_science,Engineering | 287 |
76,556,199 | https://en.wikipedia.org/wiki/Urinary%20anti-infective%20agent | Urinary anti-infective agent, also known as urinary antiseptic, is medication that can eliminate microorganisms causing urinary tract infection (UTI). UTI can be categorized into two primary types: cystitis, which refers to lower urinary tract or bladder infection, and pyelonephritis, which indicates upper urinary tract or kidney infection.
Escherichia coli (E. coli) is the predominant microbial trigger of UTIs, accounting for 75% to 95% of reported cases. Other pathogens such as Proteus mirabilis, Klebsiella pneumoniae, and Staphylococcus saprophyticus can also cause UTIs.
The use of antimicrobial therapy to treat UTIs started in the 20th century. Nitrofurantoin, trimethoprim-sulfamethoxazole (TMP/SMX), fosfomycin, and pivmecillinam are currently the first-line agents for empiric therapy of simple cystitis. On the other hand, the choice of empiric antimicrobial therapy for pyelonephritis depends on the severity of illness, specific host factors, and the presence of resistant bacteria. Ceftriaxone is often considered for parenteral treatment, while oral or parenteral fluoroquinolones, such as levofloxacin and ciprofloxacin, are suitable alternatives for treating pyelonephritis.
Antimicrobial therapy should be tailored to the individual, considering factors like the severity of illness, specific host factors, and pathogen resistance in the local community.
Types of urinary anti-infective agent
Urinary antiseptics are medications that target bacteria in the urinary tract. They can be divided into two groups: bactericidal agents and bacteriostatic agents. By acting on the causative microorganisms, these antiseptics eliminate UTI symptoms and help prevent further infection.
Urinary bactericidal agents
Nitrofurantoin
Nitrofurantoin is regarded as the first-line agent for simple cystitis, with an efficacy rate ranging from 88% to 92%. It can also be a prophylactic agent to prevent long-term UTIs. This antibacterial medication is effective against both gram-positive and gram-negative bacteria. Nitrofurantoin exhibits its bactericidal activity through various mechanisms, including inhibiting ribosomal translation, causing bacterial DNA damage and interfering with the citric acid cycle. However, the specific role of each mechanism remains to be further explored.
When nitrofurantoin is metabolized, it converts into a reactive intermediate that attacks bacterial ribosomes, inhibiting bacterial protein synthesis. This medication is typically taken orally and has minimal systemic absorption, reducing potential side effects. Common adverse reactions associated with nitrofurantoin include brown urine discoloration, nausea, vomiting, loss of appetite, rash, and peripheral neuropathy.
Fosfomycin
Fosfomycin is a phosphonic acid bactericidal agent. It is commonly used as the first-line treatment for acute simple cystitis, demonstrating a 91% cure rate. It is administered orally as a single dose; in more complicated UTIs, the dose is repeated every three days to achieve successful eradication.
The bactericidal effect of fosfomycin is attributed to its capability to inhibit bacterial wall synthesis by inactivating an enzyme called pyruvyl transferase, which is responsible for microbial cell wall synthesis. Fosfomycin acts against gram-positive and gram-negative bacteria. Administration of fosfomycin may lead to side effects such as headache, dizziness, nausea, vomiting, and abdominal cramps.
Beta-lactam antibiotics
Beta-lactam antibiotics are often considered as a second-line option for treating UTIs due to their lower effectiveness compared to other antibiotics and their potential adverse effects. Commonly used beta-lactam antibiotics for UTIs include cephalosporins and penicillin. By binding to penicillin-binding proteins through their beta-lactam rings, beta-lactam antibiotics disrupt the normal function of these proteins, inhibiting bacterial cell wall synthesis, ultimately resulting in cell death.
Cephalosporins are a subclass of beta-lactam family with broad-spectrum activity against gram-positive and gram-negative bacteria. They are categorized into five generations. First and third-generation cephalosporins, like cefalexin and ceftriaxone, are more commonly used in clinical practice. Common adverse effects associated with cephalosporins include hypersensitivity, rash, anaphylaxis, and seizures.
Penicillin is another widely used subclass that effectively targets various bacteria. However, it is not regarded as the first-line treatment for uncomplicated cystitis because of the high prevalence of penicillin-resistant E. coli strains. Within the penicillin class, pivmecillinam is considered the first-line empiric treatment for acute cystitis due to its wide spectrum of activity against gram-negative bacteria and its specific efficacy in the urinary tract. It has consistently demonstrated a high cure rate of over 85% for UTIs and a low resistance rate among E. coli strains. Amoxicillin-clavulanate combination, which enhances the effectiveness of amoxicillin, is often used as an alternative for cystitis treatment when other options cannot be used.
Fluoroquinolones
Fluoroquinolones are a class of antimicrobial agents known for their high efficacy and broad spectrum activity against aerobic gram-positive and gram-negative bacteria. These potent antibiotics exert their bactericidal effects by selectively inhibiting the activity of type II DNA topoisomerases, which effectively halt the replication of bacterial DNA, leading to bacterial death.
Among the fluoroquinolones, ciprofloxacin and levofloxacin are used more frequently for the treatment of UTIs. These agents are well absorbed orally and achieve significant concentrations in urine and various tissues. However, fluoroquinolone administration carries a risk of GI symptoms, confusion, hypersensitivity, tendinopathy, and neuropathy. Additionally, the extensive use of fluoroquinolones has contributed to the prevalence of antimicrobial resistance in some areas. As a result, fluoroquinolones are generally reserved for more serious UTIs or for cases where better anti-infective options are unavailable.
Bacteriostatic agents
Sulfonamide
Sulfonamide is a bacteriostatic agent that competitively inhibits the bacterial enzyme dihydropteroate synthase. By acting as a substrate analog of para-aminobenzoic acid, sulfonamide inhibits folic acid production. TMP/SMX is a combination of two antibacterial agents that work synergistically to combat a wide range of urinary tract pathogens. TMP/SMX is commonly used due to its ability to achieve high concentrations in urinary tract tissues and urine. This antibiotic combination demonstrates notable efficacy in both the treatment and prophylaxis of recurrent urinary tract infections. Common adverse effects include nausea, vomiting, rash, pruritus, and photosensitivity.
Renal dysfunction
Kidney disease can affect drug elimination, absorption, and distribution in the body, leading to altered serum drug concentrations. This can increase the risk of drug toxicity or suboptimal therapeutic effects. As a result, dosage adjustments are necessary for patients who fail to achieve the desired therapeutic serum drug levels.
Management
The choice of urinary anti-infective agents for patients with renal dysfunction is generally similar to that for individuals with normal kidney function. However, when a patient's glomerular filtration rate (GFR) falls below 20 mL/min, dosage adjustment is necessary because achieving the desired therapeutic serum drug levels becomes challenging in such patients.
Medication safety
Some drugs need to be used with caution in patients with renal dysfunction. The use of nitrofurantoin is contraindicated in patients with an estimated GFR of less than 30 mL/min/1.73m2 as drug accumulation can lead to increased side effects and impaired recovery of the urinary tract, increasing the risk of treatment failure. The use of TMP/SMX also raises concerns in patients with kidney disease. In patients with creatinine clearance less than 50 mL/min, the urine concentrations of SMX may decrease to subtherapeutic levels. Therefore, in patients with low creatinine clearance, it is recommended to prescribe a reduced dosage of TMP alone.
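The two cautions above can be encoded as a small rule check; the sketch below is a simplified illustration of the stated thresholds, not clinical advice:

```python
# Renal-function cautions from the text: nitrofurantoin below eGFR 30,
# subtherapeutic SMX below creatinine clearance 50 (thresholds as stated above).
def renal_cautions(egfr: float, crcl: float) -> list[str]:
    notes = []
    if egfr < 30:
        notes.append("nitrofurantoin contraindicated (eGFR < 30 mL/min/1.73 m2)")
    if crcl < 50:
        notes.append("SMX may be subtherapeutic; consider reduced-dose TMP alone")
    return notes

print(renal_cautions(egfr=25, crcl=40))
```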
Pregnancy
Pregnant women with UTIs are at a higher risk of experiencing recurrent bacteriuria and developing pyelonephritis compared to non-pregnant individuals. Untreated UTIs during pregnancy can lead to adverse outcomes, including preterm birth and low birth weight infants.
Management
Antimicrobial treatment should be adjusted for UTIs in pregnant women to avoid potential harm to the fetus. For acute cystitis and pyelonephritis in pregnant women, empiric antibiotic treatment is often initiated. Commonly used antibiotics for uncomplicated cystitis include amoxicillin-clavulanate and fosfomycin, while parenteral beta-lactams are preferred for acute pyelonephritis. These options are chosen because they are considered safer in pregnancy and have a relatively broad spectrum of activity. Typically, an antimicrobial course of five to seven days is given; this duration is chosen to minimize fetal exposure to antimicrobials while ensuring optimal treatment outcomes.
Medication safety
The type of urinary anti-infective agent should be carefully chosen for pregnant women with UTIs because of the potential impact on fetal development. Penicillins, cephalosporins, and fosfomycin are safe options during pregnancy. Nitrofurantoin is typically avoided during the first trimester due to uncertain associations with congenital anomalies. TMP/SMX should also be avoided, as it may impair folate metabolism and thereby increase the risk of neural tube defects. However, when all alternative antibiotics are contraindicated, nitrofurantoin and TMP/SMX may be used as a last resort despite the fetal risk. Fluoroquinolones should be avoided during pregnancy as they are associated with bone and cartilage toxicity in developing fetuses.
Pediatrics
Urinary tract infection in pediatric patients is a significant clinical issue, affecting approximately 7% of febrile infants and children. If left untreated, the infection can ascend from the bladder to the kidneys, resulting in acute pyelonephritis, which can lead to hypertension, kidney scarring, and end-stage kidney disease.
Management
The choice of urinary anti-infective agents used in pediatric patients and the duration of therapy depend on the types of UTIs they are suffering from. It is important to note that the dosage of antibiotics used in children is typically weight-dependent. Generally, oral or parenteral cephalosporins are recommended as the first-line agent for children older than two months. Second-line therapy should be considered for patients who have poor response to first-line treatment. Alternative choices include amoxicillin-clavulanate, nitrofurantoin, TMP/SMX, and ciprofloxacin.
For the treatment of simple cystitis in children, a five-day oral course of cephalexin is the preferred choice. As for children with suspected pyelonephritis, a ten-day treatment regimen is recommended. In such cases, a third-generation cephalosporin, such as cefdinir, is suggested as an appropriate option. If second-line therapy is initiated in pediatric patients with suspected pyelonephritis, ciprofloxacin should be the preferred option among the four alternatives. Nitrofurantoin may not be adequate in treating upper urinary tract infections, while TMP/SMX and amoxicillin-clavulanate should be used with caution due to the risk of kidney scarring in these patients.
Medication safety
The choice of urinary anti-infective agents in pediatric patients may differ from that in adults due to the potential harm they can cause to children. For example, the systemic use of fluoroquinolones is not appropriate in pediatric patients due to the potential risk of musculoskeletal toxicity.
History
The discovery of antimicrobial agents contributed significantly to UTI management during the 20th century. Nitrofurantoin emerged as the first practical and safe urinary antimicrobial agent, but it had a limited spectrum of activity. Subsequently, in the 1970s, beta-lactam antibiotics and TMP/SMX became available for UTI therapy. Antimicrobial resistance to these agents developed because of their widespread and extensive usage, which restricted their clinical efficacy in UTI management. Fluoroquinolones emerged during the 1980s and were recommended as an alternative when resistance to TMP/SMX reaches 10% or higher. The evolving landscape of drug resistance will continue to influence the development and application of antimicrobial agents in UTI therapy.
See also
UTI vaccine
Uromune
Methenamine
LACTIN-V
TOL-463
References
External links
Medicine
Infection-control measures
Urologic procedures
Urology
Medical treatments
Bactericides
Antiseptics | Urinary anti-infective agent | Biology | 2,863 |
379,234 | https://en.wikipedia.org/wiki/Sensory%20nervous%20system | The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons (including the sensory receptor cells), neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them.
The receptive field is the area of the body or environment to which a receptor organ and receptor cells respond. For instance, the part of the world an eye can see, is its receptive field; the light that each rod or cone can see, is its receptive field. Receptive fields have been identified for the visual system, auditory system and somatosensory system.
Senses and receptors
While debate exists among neurologists as to the specific number of senses due to differing definitions of what constitutes a sense, Gautama Buddha and Aristotle classified five 'traditional' human senses which have become universally accepted: touch, taste, smell, vision, and hearing. Other senses that have been well-accepted in most mammals, including humans, include pain, balance, kinaesthesia, and temperature. Furthermore, some nonhuman animals have been shown to possess alternate senses, including magnetoreception and electroreception.
Receptors
The initialization of sensation stems from the response of a specific receptor to a physical stimulus. The receptors which react to the stimulus and initiate the process of sensation are commonly characterized in four distinct categories: chemoreceptors, photoreceptors, mechanoreceptors, and thermoreceptors. All receptors receive distinct physical stimuli and transduce the signal into an electrical action potential. This action potential then travels along afferent neurons to specific brain regions where it is processed and interpreted.
Chemoreceptors
Chemoreceptors, or chemosensors, detect certain chemical stimuli and transduce that signal into an electrical action potential. The two primary types of chemoreceptors are:
Distance chemoreceptors are integral to receiving stimuli in gases in the olfactory system through both olfactory receptor neurons and neurons in the vomeronasal organ.
Direct chemoreceptors that detect stimuli in liquids include the taste buds in the gustatory system as well as receptors in the aortic bodies which detect changes in oxygen concentration.
Photoreceptors
Photoreceptors are specialized neurons that play the main role in initiating vision. Photoreceptors are light-sensitive cells that capture different wavelengths of light. Different types of photoreceptors respond to varying light wavelengths in relation to color and transduce them into electrical signals. Photoreceptors are capable of phototransduction, a process which converts light (electromagnetic radiation) into, among other types of energy, a membrane potential. Five compartments are present in these cells, each corresponding to differences in function and structure. The first compartment is the outer segment (OS), which is responsible for capturing and transducing light. The second compartment is the inner segment (IS), which includes the organelles that function in cellular metabolism and biosynthesis, mainly the mitochondria, Golgi apparatus, and endoplasmic reticulum, among others. The third compartment is the connecting cilium (CC). As its name suggests, the CC connects the OS and IS regions, enabling essential protein trafficking. The fourth compartment contains the nucleus and is a continuation of the IS region, known as the nuclear region. Finally, the fifth compartment is the synaptic region, which contains synaptic vesicles and acts as the final terminal for the signal. In this region, the neurotransmitter glutamate is transmitted from the cell to second-order neurons. The three primary types of photoreceptors are:
Cones are photoreceptors which respond significantly to color. In humans, the three different types of cones correspond with a primary response to short wavelengths (blue), medium wavelengths (green), and long wavelengths (yellow/red).
Rods are photoreceptors which are very sensitive to the intensity of light, allowing for vision in dim lighting. The concentrations and ratio of rods to cones is strongly correlated with whether an animal is diurnal or nocturnal. In humans, rods outnumber cones by approximately 20:1, while in nocturnal animals, such as the tawny owl, the ratio is closer to 1000:1.
Ganglion cells reside in the adrenal medulla and retina where they are involved in the sympathetic response. Of the ~1.3 million ganglion cells present in the retina, 1-2% are believed to be photosensitive ganglia. These photosensitive ganglia play a role in conscious vision for some animals, and are believed to do the same in humans.
Mechanoreceptors
Mechanoreceptors are sensory receptors which respond to mechanical forces, such as pressure or distortion. While mechanoreceptors are present in hair cells and play an integral role in the vestibular and auditory systems, the majority of mechanoreceptors are cutaneous and are grouped into four categories:
Slowly adapting type 1 receptors have small receptive fields and respond to static stimulation. These receptors are primarily used in the sensations of form and roughness.
Slowly adapting type 2 receptors have large receptive fields and respond to stretch. Similarly to type 1, they produce sustained responses to a continued stimuli.
Rapidly adapting receptors have small receptive fields and underlie the perception of slip.
Pacinian receptors have large receptive fields and are the predominant receptors for high-frequency vibration.
Thermoreceptors
Thermoreceptors are sensory receptors which respond to varying temperatures. While the mechanisms through which these receptors operate are unclear, recent discoveries have shown that mammals have at least two distinct types of thermoreceptors:
The end-bulb of Krause or bulboid corpuscle detects temperatures above body temperature.
Ruffini's end organ detects temperatures below body temperature.
TRPV1 is a heat-activated channel that acts as a small heat detecting thermometer in the membrane which begins the polarization of the neural fiber when exposed to changes in temperature. Ultimately, this allows us to detect ambient temperature in the warm/hot range. Similarly, the molecular cousin to TRPV1, TRPM8, is a cold-activated ion channel that responds to cold. Both cold and hot receptors are segregated by distinct subpopulations of sensory nerve fibers, which shows us that the information coming into the spinal cord is originally separate. Each sensory receptor has its own "labeled line" to convey a simple sensation experienced by the recipient. Ultimately, TRP channels act as thermosensors, channels that help us to detect changes in ambient temperatures.
Nociceptors
Nociceptors respond to potentially damaging stimuli by sending signals to the spinal cord and brain. This process, called nociception, usually causes the perception of pain. They are found in internal organs, as well as on the surface of the body. Nociceptors detect different kinds of damaging stimuli or actual damage. Those that only respond when tissues are damaged are known as "sleeping" or "silent" nociceptors.
Thermal nociceptors are activated by noxious heat or cold at various temperatures.
Mechanical nociceptors respond to excess pressure or mechanical deformation.
Chemical nociceptors respond to a wide variety of chemicals, some of which are signs of tissue damage. They are involved in the detection of some spices in food.
Sensory cortex
All stimuli received by the receptors listed above are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area of the brain. While the term sensory cortex is often used informally to refer to the somatosensory cortex, the term more accurately refers to the multiple areas of the brain at which senses are received to be processed. For the five traditional senses in humans, this includes the primary and secondary cortices of the different senses: the somatosensory cortex, the visual cortex, the auditory cortex, the primary olfactory cortex, and the gustatory cortex. Other modalities have corresponding sensory cortex areas as well, including the vestibular cortex for the sense of balance.
The human sensory system consists of the following subsystems:
Visual system (Vision)
Auditory system (Hearing)
Somatosensory system (Touch/Temperature/Kinesthesia/Pain)
Gustatory system (Taste)
Olfactory system (Smell)
Vestibular system (Balance)
Somatosensory cortex
Located in the parietal lobe, the primary somatosensory cortex is the primary receptive area for the sense of touch and proprioception in the somatosensory system. This cortex is further divided into Brodmann areas 1, 2, and 3. Brodmann area 3 is considered the primary processing center of the somatosensory cortex as it receives significantly more input from the thalamus, has neurons highly responsive to somatosensory stimuli, and can evoke somatic sensations through electrical stimulation. Areas 1 and 2 receive most of their input from area 3. There are also pathways for proprioception (via the cerebellum), and motor control (via Brodmann area 4). See also: S2 Secondary somatosensory cortex.
Visual cortex
The visual cortex refers to the primary visual cortex, labeled V1 or Brodmann area 17, as well as the extrastriate visual cortical areas V2-V5. Located in the occipital lobe, V1 acts as the primary relay station for visual input, transmitting information to two primary pathways labeled the dorsal and ventral streams. The dorsal stream includes areas V2 and V5, and is used in interpreting visual 'where' and 'how.' The ventral stream includes areas V2 and V4, and is used in interpreting 'what.' Increases in task-negative activity are observed in the ventral attention network, after abrupt changes in sensory stimuli, at the onset and offset of task blocks, and at the end of a completed trial.
Auditory cortex
Located in the temporal lobe, the auditory cortex is the primary receptive area for sound information. The auditory cortex is composed of Brodmann areas 41 and 42, also known as the anterior transverse temporal area 41 and the posterior transverse temporal area 42, respectively. Both areas act similarly and are integral in receiving and processing the signals transmitted from auditory receptors.
Primary olfactory cortex
Located in the temporal lobe, the primary olfactory cortex is the primary receptive area for olfaction, or smell. Unique to the olfactory and gustatory systems, at least in mammals, is the implementation of both peripheral and central mechanisms of action. The peripheral mechanisms involve olfactory receptor neurons which transduce a chemical signal along the olfactory nerve, which terminates in the olfactory bulb. The chemoreceptors in the receptor neurons that start the signal cascade are G protein-coupled receptors. The central mechanisms include the convergence of olfactory nerve axons into glomeruli in the olfactory bulb, where the signal is then transmitted to the anterior olfactory nucleus, the piriform cortex, the medial amygdala, and the entorhinal cortex, all of which make up the primary olfactory cortex.
In contrast to vision and hearing, the olfactory bulbs are not cross-hemispheric; the right bulb connects to the right hemisphere and the left bulb connects to the left hemisphere.
Gustatory cortex
The gustatory cortex is the primary receptive area for taste. The word taste is used in a technical sense to refer specifically to sensations coming from taste buds on the tongue. The five qualities of taste detected by the tongue include sourness, bitterness, sweetness, saltiness, and the protein taste quality, called umami. In contrast, the term flavor refers to the experience generated through integration of taste with smell and tactile information. The gustatory cortex consists of two primary structures: the anterior insula, located on the insular lobe, and the frontal operculum, located on the frontal lobe. Similarly to the olfactory cortex, the gustatory pathway operates through both peripheral and central mechanisms. Peripheral taste receptors, located on the tongue, soft palate, pharynx, and esophagus, transmit the received signal to primary sensory axons, where the signal is projected to the nucleus of the solitary tract in the medulla, or the gustatory nucleus of the solitary tract complex. The signal is then transmitted to the thalamus, which in turn projects the signal to several regions of the neocortex, including the gustatory cortex.
The neural processing of taste is affected at nearly every stage of processing by concurrent somatosensory information from the tongue, that is, mouthfeel. Scent, in contrast, is not combined with taste to create flavor until higher cortical processing regions, such as the insula and orbitofrontal cortex.
Quiescent state
Most sensory systems have a quiescent state, that is, the state that a sensory system converges to when there is no input.
This is well defined for a linear time-invariant system, whose input space is a vector space and thus by definition has a zero point. It is also well defined for any passive sensory system, that is, a system that operates without needing input power; the quiescent state is then the state the system converges to when there is no input power.
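As a minimal formal sketch (the notation here is supplied for illustration, not taken from the source): a linear time-invariant system with state $x$, input $u$ and dynamics

$$\dot{x} = Ax + Bu, \qquad u \equiv 0 \;\Longrightarrow\; x(t) \to x^{*} = 0,$$

converges to the origin whenever every eigenvalue of $A$ has negative real part. The zero input is well defined precisely because the input space is a vector space, and the quiescent state is the corresponding fixed point.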
It is not always well defined for nonlinear, non-passive sensory organs, since they cannot function without input energy. For example, the cochlea is not a passive organ: it actively vibrates its own sensory hairs to improve its sensitivity. This manifests as otoacoustic emissions in healthy ears, and as tinnitus in pathological ears. There is still a quiescent state for the cochlea, since there is a well-defined mode of power input that it receives (vibratory energy on the eardrum), which provides an unambiguous definition of "zero input power".
Some sensory systems can have multiple quiescent states depending on their history, like flip-flops and magnetic materials with hysteresis. A sensory system can also adapt to different quiescent states. In complete darkness, retinal cells become extremely sensitive, and there is noticeable "visual snow" caused by the retinal cells firing randomly without any light input. In brighter light, the retinal cells become much less sensitive, consequently decreasing visual noise.
The quiescent state is less well defined when the sensory organ can be controlled by other systems, like a dog's ears, which turn towards the front or the sides as the brain commands. Some spiders use their webs as a large touch organ, akin to weaving a skin for themselves. Even in the absence of anything falling on the web, hungry spiders may increase web thread tension so as to respond promptly even to usually less noticeable, and less profitable, prey such as small fruit flies, creating two different "quiescent states" for the web.
Things become completely ill-defined for a system which connects its output to its own input, thus ever-moving without any external input. The prime example is the brain, with its default mode network.
Diseases
Amblyopia
Anacusis
Color blindness
Deafness
See also
Multisensory integration
Neural adaptation
Neural coding
Sensor
Sensory augmentation
Sensory neuroscience
Sensory systems in fish
References
External links
Nervous system
Sensory systems
Sensory organs | Sensory nervous system | Biology | 3,291 |
77,207,305 | https://en.wikipedia.org/wiki/AF-353 | AF-353 is a drug which acts as an antagonist of the P2X3 homotrimeric and P2X2/3 heterotrimeric forms of the P2X purinoreceptors. It has been found to block taste responses and has been proposed as an agent for masking the bitter taste of medications.
See also
PPADS
References
Pyrimidines
Hydroquinones
Iodoarenes
Isopropyl compounds
Methoxy compounds
Diamines | AF-353 | Chemistry | 100 |
4,061,767 | https://en.wikipedia.org/wiki/Heaviside%E2%80%93Lorentz%20units | Heaviside–Lorentz units (or Lorentz–Heaviside units) constitute a system of units and quantities that extends the CGS with a particular set of equations defining the electromagnetic quantities, named for Oliver Heaviside and Hendrik Antoon Lorentz. They share with the CGS-Gaussian system the property that the electric constant $\epsilon_0$ and the magnetic constant $\mu_0$ do not appear in the defining equations for electromagnetism, having been incorporated implicitly into the electromagnetic quantities. Heaviside–Lorentz units may be thought of as normalizing $\epsilon_0 = 1$ and $\mu_0 = 1$, while at the same time revising Maxwell's equations to use the speed of light $c$ instead.
The Heaviside–Lorentz unit system, like the International System of Quantities upon which the SI system is based, but unlike the CGS-Gaussian system, is rationalized, with the result that there are no factors of $4\pi$ appearing explicitly in Maxwell's equations. That this system is rationalized partly explains its appeal in quantum field theory: the Lagrangian underlying the theory does not have any factors of $4\pi$ when this system is used. Consequently, electromagnetic quantities in the Heaviside–Lorentz system differ from their Gaussian counterparts by factors of $\sqrt{4\pi}$ in the definitions of the electric and magnetic fields and of electric charge. The system is often used in relativistic calculations and in particle physics, and it is particularly convenient when performing calculations in spatial dimensions greater than three, such as in string theory.
Motivation
In the mid-to-late 19th century, electromagnetic measurements were frequently made in either the so-called electrostatic (ESU) or electromagnetic (EMU) systems of units. These were based respectively on Coulomb's law and Ampère's law. Use of these systems, as with the subsequently developed Gaussian CGS units, resulted in many factors of $4\pi$ appearing in formulas for electromagnetic results, including those without any circular or spherical symmetry.
For example, in the CGS-Gaussian system, the capacitance of a sphere of radius $r$ is $r$, while that of a parallel-plate capacitor is $A/(4\pi d)$, where $A$ is the area of the smaller plate and $d$ is their separation.
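For comparison, the same two capacitances take the following standard forms in the Gaussian and Heaviside–Lorentz systems:

$$C^{\text{G}}_{\text{sphere}} = r, \qquad C^{\text{G}}_{\text{plate}} = \frac{A}{4\pi d}; \qquad\qquad C^{\text{HL}}_{\text{sphere}} = 4\pi r, \qquad C^{\text{HL}}_{\text{plate}} = \frac{A}{d}.$$

In the rationalized system the factor of $4\pi$ survives only in the spherically symmetric case, where it genuinely reflects the geometry.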
Heaviside, who was an important, though somewhat isolated, early theorist of electromagnetism, suggested in 1882 that the irrational appearance of $4\pi$ in these sorts of relations could be removed by redefining the units for charges and fields.
Heaviside pressed the same point in the introduction to his 1893 book Electromagnetic Theory.
Length–mass–time framework
As in the Gaussian system (G), the Heaviside–Lorentz system (HL) uses the length–mass–time dimensions. This means that all of the units of electric and magnetic quantities are expressible in terms of the units of the base quantities length, time and mass.
Coulomb's equation, used to define charge in these systems, is $F = q^{\text{G}}_1 q^{\text{G}}_2 / r^2$ in the Gaussian system and $F = q^{\text{HL}}_1 q^{\text{HL}}_2 / (4\pi r^2)$ in the HL system. The unit of charge then connects to $1\,\text{dyn}\,\text{cm}^2 = 1\,\text{esu}^2 = 4\pi\,\text{HLC}^2$, where 'HLC' is the HL unit of charge. The HL quantity $q^{\text{HL}}$ describing a charge is then $\sqrt{4\pi}$ times larger than the corresponding Gaussian quantity. There are comparable relationships for the other electromagnetic quantities (see below).
The commonly used set of units is called the SI, which defines two constants, the vacuum permittivity ($\epsilon_0$) and the vacuum permeability ($\mu_0$). These can be used to convert SI units to their corresponding Heaviside–Lorentz values, as detailed below. For example, SI charge is $\sqrt{\epsilon_0}\,\mathrm{L}^{3/2}\mathrm{M}^{1/2}\mathrm{T}^{-1}$. When one puts $\epsilon_0 = 8.854\,\mathrm{pF/m}$, $\mathrm{L} = 1\,\mathrm{cm}$, $\mathrm{M} = 1\,\mathrm{g}$, and $\mathrm{T} = 1\,\mathrm{s}$, this evaluates to $9.40967\times10^{-11}\,\mathrm{C}$, the SI equivalent of the Heaviside–Lorentz unit of charge.
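This figure is easy to check numerically. The short Python sketch below uses only the CODATA value of the vacuum permittivity and the CGS base quantities just quoted; everything else follows from the stated formula:

```python
import math

# SI equivalent of the Heaviside-Lorentz unit of charge:
# sqrt(eps0) * L^(3/2) * M^(1/2) / T, with CGS base quantities.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

L = 1e-2  # 1 cm in metres
M = 1e-3  # 1 g in kilograms
T = 1.0   # 1 s

q_hl = math.sqrt(EPS0) * L**1.5 * M**0.5 / T
print(f"{q_hl:.6e} C")  # ≈ 9.4097e-11 C
```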
Comparison of Heaviside–Lorentz with other systems of units
This section has a list of the basic formulas of electromagnetism, given in the SI, Heaviside–Lorentz, and Gaussian systems.
Here $\mathbf{E}$ and $\mathbf{D}$ are the electric field and displacement field, respectively,
$\mathbf{B}$ and $\mathbf{H}$ are the magnetic fields,
$\mathbf{P}$ is the polarization density,
$\mathbf{M}$ is the magnetization,
$\rho$ is charge density,
$\mathbf{J}$ is current density,
$c$ is the speed of light in vacuum,
$\phi$ is the electric potential,
$\mathbf{A}$ is the magnetic vector potential,
$\mathbf{F}$ is the Lorentz force acting on a body of charge $q$ and velocity $\mathbf{v}$,
$\epsilon$ is the permittivity,
$\chi_e$ is the electric susceptibility,
$\mu$ is the magnetic permeability, and
$\chi_m$ is the magnetic susceptibility.
Maxwell's equations
The electric and magnetic fields can be written in terms of the potentials $\phi$ and $\mathbf{A}$.
The definition of the magnetic field in terms of the vector potential, $\mathbf{B} = \nabla\times\mathbf{A}$, is the same in all systems of units, but the electric field is $\mathbf{E} = -\nabla\phi - \partial\mathbf{A}/\partial t$ in the SI system, and $\mathbf{E} = -\nabla\phi - \tfrac{1}{c}\,\partial\mathbf{A}/\partial t$ in the HL or Gaussian systems.
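For reference, in Heaviside–Lorentz units the microscopic (vacuum) Maxwell equations take the following standard form:

$$\nabla\cdot\mathbf{E} = \rho, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \frac{1}{c}\left(\mathbf{J} + \frac{\partial\mathbf{E}}{\partial t}\right).$$

No factors of $4\pi$ or of $\epsilon_0$, $\mu_0$ appear; only the speed of light $c$ enters.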
Other basic laws
Dielectric and magnetic materials
Below are the expressions for the macroscopic fields , , and in a material medium. It is assumed here for simplicity that the medium is homogeneous, linear, isotropic, and nondispersive, so that the susceptibilities are constants.
Note that the quantities $\epsilon^{\text{SI}}/\epsilon_0$, $\epsilon^{\text{HL}}$ and $\epsilon^{\text{G}}$ are dimensionless, and they have the same numeric value. By contrast, the electric susceptibility $\chi_e$ is dimensionless in all the systems, but has different numeric values for the same material:
$$4\pi\,\chi_e^{\text{G}} = \chi_e^{\text{HL}} = \chi_e^{\text{SI}}.$$
The same statements apply for the corresponding magnetic quantities.
Advantages and disadvantages of Heaviside–Lorentz units
Advantages
The formulas above are clearly simpler in HL units compared to either SI or Gaussian units. As Heaviside proposed, removing the $4\pi$ from Gauss's law and putting it in the force law considerably reduces the number of places the $4\pi$ appears compared to Gaussian CGS units.
Removing the explicit $4\pi$ from Gauss's law makes it clear that the inverse-square force law arises from the field spreading out over the surface of a sphere. This allows a straightforward extension to other dimensions. For example, the case of long, parallel wires extending straight in the $z$ direction can be considered a two-dimensional system. Another example is in string theory, where more than three spatial dimensions often need to be considered.
The equations are free of the constants $\epsilon_0$ and $\mu_0$ that are present in the SI system. (In addition, $\epsilon_0$ and $\mu_0$ are overdetermined, because $\epsilon_0 \mu_0 = 1/c^2$.)
The points below are true in both the Heaviside–Lorentz and Gaussian systems, but not in SI.
The electric and magnetic fields $\mathbf{E}$ and $\mathbf{B}$ have the same dimensions in the Heaviside–Lorentz system, meaning it is easy to recall where factors of $c$ go in the Maxwell equations. Every time derivative comes with a $1/c$, which makes it dimensionally the same as a space derivative. In contrast, in SI units the ratio $E/B$ has the dimension of a velocity.
Giving the $\mathbf{E}$ and $\mathbf{B}$ fields the same dimension makes the assembly into the electromagnetic tensor more transparent. There are no factors of $c$ that need to be inserted when assembling the tensor out of the three-dimensional fields. Similarly, $\phi$ and $\mathbf{A}$ have the same dimensions and are the four components of the 4-potential.
The fields $\mathbf{D}$, $\mathbf{H}$, $\mathbf{P}$, and $\mathbf{M}$ also have the same dimensions as $\mathbf{E}$ and $\mathbf{B}$. For vacuum, any expression involving $\mathbf{D}$ can simply be recast as the same expression with $\mathbf{E}$, and likewise $\mathbf{H}$ with $\mathbf{B}$. In SI units, $\mathbf{D}$ and $\mathbf{P}$ have the same units, as do $\mathbf{H}$ and $\mathbf{M}$, but they have different units from each other and from $\mathbf{E}$ and $\mathbf{B}$.
Disadvantages
Despite Heaviside's urgings, it proved difficult to persuade people to switch from the established units. He believed that if the units were changed, "[o]ld style instruments would very soon be in a minority, and then disappear ...". Persuading people to switch was already difficult in 1893, and in the meanwhile there have been more than a century's worth of additional textbooks printed and voltmeters built.
Heaviside–Lorentz units, like the Gaussian CGS units from which they generally differ by a factor of $\sqrt{4\pi}\approx 3.5$, are frequently of rather inconvenient sizes. The ampere (coulomb per second) is a reasonable unit for measuring currents commonly encountered, but the ESU per second, as demonstrated above, is far too small. The Gaussian CGS unit of electric potential is named the statvolt. It is about $300\,\mathrm{V}$, a value which is larger than most commonly encountered potentials. The henry, the SI unit for inductance, is already on the large side compared to most inductors; the Gaussian unit is 12 orders of magnitude larger.
A few of the Gaussian CGS units have names; none of the Heaviside–Lorentz units do.
Textbooks in theoretical physics use Heaviside–Lorentz units nearly exclusively, frequently in their natural form (see below), since the system's conceptual simplicity and compactness significantly clarify the discussions, and it is possible if necessary to convert the resulting answers to appropriate units after the fact by inserting appropriate factors of $c$ and $\epsilon_0$. Some textbooks on classical electricity and magnetism have been written using Gaussian CGS units, but recently some of them have been rewritten to use SI units. Outside of these contexts, including for example magazine articles on electric circuits, Heaviside–Lorentz and Gaussian CGS units are rarely encountered.
Translating formulas between systems
To convert any formula between the SI, Heaviside–Lorentz and Gaussian systems, the corresponding expressions shown in the table below can be equated and hence substituted for each other; for instance, $c$ can be replaced by $1/\sqrt{\epsilon_0\mu_0}$, or vice versa. This will reproduce any of the specific formulas given in the list above.
As an example, starting with the equation
$$\nabla\cdot\mathbf{E}^{\text{SI}} = \rho^{\text{SI}}/\epsilon_0$$
and the equations from the table
$$\mathbf{E}^{\text{SI}} = \mathbf{E}^{\text{HL}}/\sqrt{\epsilon_0}, \qquad \rho^{\text{SI}} = \sqrt{\epsilon_0}\,\rho^{\text{HL}},$$
moving the factor $\sqrt{\epsilon_0}$ across in the latter identities and substituting, the result is
$$\nabla\cdot\bigl(\mathbf{E}^{\text{HL}}/\sqrt{\epsilon_0}\bigr) = \sqrt{\epsilon_0}\,\rho^{\text{HL}}/\epsilon_0,$$
which then simplifies to
$$\nabla\cdot\mathbf{E}^{\text{HL}} = \rho^{\text{HL}}.$$
Notes
References
Special relativity
Electromagnetism
Hendrik Lorentz | Heaviside–Lorentz units | Physics | 1,939 |
1,008,247 | https://en.wikipedia.org/wiki/Klenow%20fragment | The Klenow fragment is a large protein fragment produced when DNA polymerase I from E. coli is enzymatically cleaved by the protease subtilisin. First reported in 1970, it retains the 5' → 3' polymerase activity and the 3' → 5' exonuclease activity for removal of precoding nucleotides and proofreading, but loses its 5' → 3' exonuclease activity.
The other smaller fragment formed when DNA polymerase I from E. coli is cleaved by subtilisin retains the 5' → 3' exonuclease activity but does not have the other two activities exhibited by the Klenow fragment (i.e. 5' → 3' polymerase activity, and 3' → 5' exonuclease activity).
Research
Because the 5' → 3' exonuclease activity of DNA polymerase I from E. coli makes it unsuitable for many applications, the Klenow fragment, which lacks this activity, is very useful in research. It is well suited to tasks such as:
Synthesis of double-stranded DNA from single-stranded templates
Filling in recessed 3' ends of DNA fragments to make 5' overhangs blunt
Digesting away protruding 3' overhangs
Preparation of radioactive DNA probes
The Klenow fragment was also the original enzyme used for greatly amplifying segments of DNA in the polymerase chain reaction (PCR) process, before being replaced by thermostable DNA polymerases such as Taq polymerase.
The exo-Klenow fragment
Just as the 5' → 3' exonuclease activity of DNA polymerase I from E.coli can be undesirable, the 3' → 5' exonuclease activity of Klenow fragment can also be undesirable for certain applications. This problem can be overcome by introducing mutations in the gene that encodes Klenow. This results in forms of the enzyme being expressed that retain 5' → 3' polymerase activity, but lack any exonuclease activity (5' → 3' or 3' → 5'). This form of the enzyme is called the exo-Klenow fragment.
The exo-Klenow fragment is used in some fluorescent labeling reactions for microarrays, and also in dA and dT tailing, an important step in the process of ligating DNA adapters to DNA fragments, frequently used in preparing DNA libraries for next-generation sequencing.
References
External links
Diagram at vivo.colostate.edu
DNA replication | Klenow fragment | Biology | 556 |
18,937,540 | https://en.wikipedia.org/wiki/Technical%20Service%20Council | The Technical Service Council was set up to combat the "brain drain" of Canadian engineers to the United States, when over 20% of the graduating classes were emigrating. Ireland, India, New Zealand and even Switzerland have had similar problems.
In 1927, Canadian industry financed the council, whose directors concluded that a non-profit employment service that was free to graduates might minimize emigration. The service survived the Depression, played a part in recruiting scientists and engineers for war work, pioneered outplacement and expanded to include other professional occupations. It financed major studies of the supply of and demand for engineers and offered free-job-hunting courses to professionals.
Although started in Toronto, the Council eventually had offices in Montreal, Winnipeg, Calgary, Edmonton and Vancouver before becoming bankrupt in 1994. It may have reduced the brain drain during its first 20 or 25 years, but it is not possible to judge its later record.
History
Beginning
In the 1920s over 20% of Canadian engineering graduates emigrated to the United States. At that time, jobs in the U.S. were both much more numerous and more varied than in Canada. Meanwhile, the number of graduates soared, Canadian employers were unconvinced of the value of engineering degrees and new graduates complained of the lack of jobs.
Robert A. Bryce, president of Macassa Mines Ltd. and Prof. H.E.T. Haultain of the University of Toronto resolved to act. In April 1927 they and Rev. Canon H.J. Cody, chairman of the board of governors of the University of Toronto, invited the chief executives of major firms to a dinner at the National Club in Toronto. After hearing how the loss of talent could hamper industry, each of the 12 executives promised $1,000 to fund a non-profit organization to combat the "brain drain". The brain drain, the selling of science to employers and Canadian nationalism were tightly intertwined ideas. The firm was called the Technical Service Council.
Rolsa Eric Smythe was hired to run the council. Appropriately, he was a Canadian engineer who had been working in Detroit. (3)
After a study of placement operations in other countries and consultation with employers, the directors decided that engineers would not respond to urges to stay in Canada. Instead the Technical Service Council would find jobs for them by operating a free (to graduates) placement service. Employers would be invited to donate to the service, although later some companies used the service without contributing.
Objectives
The objectives were: To retain for Canada young Canadians educated along technical and scientific lines; to bring graduates of universities and technical institutions into practical contact with Canadian industry; to submit to universities the recommendations of industry concerning scientific courses and to aid industry in technical and scientific employment problems.
Early operations
A small office was opened in Toronto in 1928 with $30,000 "seed money" from 30 firms to finance a three-year experiment. Between July and December, 159 job hunters registered, 185 jobs were listed by employers and 81 engineers were placed by the staff of two.
The Great Depression soon arrived, wiping out many jobs. Some graduates were placed in welcome, but undemanding jobs, like street car conductor. Raising money was difficult and the Council survived only because of grants from the government of Ontario in 1932-34 and sometimes, Smythe's forgoing his salary. It was decided to ask those who had found jobs through the TSC to make donations. This produced some money, but the organization had a hand-to-mouth existence until 1957, apart from the World War II years. (3)
By June 30, 1933, over 1,180 personnel had been placed, 110 of whom were repatriated Canadians. Expenses for the first five years of operations were $44,988. In 1933, 111 men and women were placed by the council's staff of two.
Even then it was clear that engineers needed business knowledge. The Council persuaded the University of Western Ontario to offer a diploma course in management for engineers. Then such a course was novel, if not unique. In 1951 numerous employers and graduates in ceramic engineering were surveyed on behalf of the University of Saskatchewan to estimate future demand. Some time later a similar survey was made for the University of Toronto. As a result of these studies, both universities discontinued their ceramic engineering programs.
By 1938, in response to employers' demand for "one-stop service", the Council expanded to include executives, accountants, marketing, production and personnel staff. A year later, the economy had improved, but the council's placements were mainly in Ontario and Quebec, where Canada's industry was concentrated.
Wartime years
Job vacancies soared with the start of World War II. Shipyards, steel mills, armaments and munitions factories, aircraft manufacturers and construction companies urgently needed engineers. Few engineers even considered emigrating to the United States because of patriotic reasons and the plethora of jobs.
The Technical Service Council was the only placement service allowed to operate during the war. Its bank of professionals was such an important national resource that 15 recruiters from Defence Industries Ltd., the major munitions manufacturer, were loaned to the council.
Post-war activities
After the war, veterans were entitled to free university tuition. Therefore, record numbers of engineers were graduated in 1949 and 1950. Graduates of Western and Maritime universities, both in areas with limited industry, greatly outnumbered local vacancies. Many engineers moved to Ontario, Quebec and the United States. About 2,500 professional men emigrated to the United States in 1950 alone. Nevertheless, one study showed that the exodus of technically trained graduates dropped from 27% of the graduating classes in 1927 to under 10% in 1951.
Pioneering work was done on group interviews and recruitment advertising in 1950–52. The latter study showed how employers could increase response to their ads. The Federal government engaged the council to write a handbook on the job market for immigrants while the Ontario Government asked the council to appraise opportunities for prospective immigrants from Great Britain.
Canadian industry contributed more than $300,000 to the Council between 1927 and 1953. During the same period, employers listed 16,533 job vacancies. 6,817 men with special training were placed in key positions in business and industry. The Council registered and interviewed 24,607 men with higher education. The qualifications of each were carefully cross-indexed and maintained for employers. An additional 100,000 individuals were interviewed to assess qualifications and give free vocational advice. The average cost per placement rose from $50 to $100 between 1948 and 1954.
Between 1951 and 1956, 3,072 engineers, equivalent to 31% of graduating classes in engineering, emigrated to the United States. They could have staffed the largest missile centre in the Western World. In 1951 the equivalent of 11% of the graduating classes in engineering left for the United States. In 1956, as immigrants were less likely to be drafted, the percentage had soared to 46%.
In 1957 the Council almost collapsed, but it was revived by new management who increased placement fees.
Shortages of engineers and scientists in Canada often coincided with equally acute shortages in the United States. American companies then recruited actively in Canada, as they did following the 1959 cancellation of Canada's much-vaunted Avro Arrow jet fighter. In addition, Canadians completing post-graduate training in the U.S. often found getting a job locally easier than searching for one in distant Canada.
In 1962 a branch in Montreal called Technical Service Council/Le Conseil de Placement Professionnel was opened. It was followed by others in Winnipeg, Calgary, Edmonton and Vancouver.
Outplacement and other activities
The council was one of the pioneers of outplacement (then called relocation counselling) in Canada. Its first contract in 1970 eventually developed into a significant activity. In addition to individual counselling, free office services and other benefits, clients were given How to Job Hunt Effectively, a substantial combined handbook and workbook. The book was also available to the public, and over 5,000 copies were sold.
From 1967 regular one-day employment interviewing courses for line managers were run in major cities. Students received written critiques of their interviews with actors.
By 1971 out-of-work university graduates were so numerous that free "How to Job Hunt" courses were held in several cities. As another public service, over $200,000 was spent researching and publishing ten-year forecasts of the supply of and demand for engineering graduates, in 1975 and again in 1988. Both studies were intended to improve understanding of the job market and candidate mobility, and to help minimize "mismatch". They were provided free to Canadian universities and sold at below cost to employers.
In the same year, an executive search division, Bryce, Haultain & Associates, was opened and named after two of the council's co-founders.
Later placement activities
By 1976 the council had placed over 16,000 men and women. An equal number were estimated to have rejected job offers from the council's client companies. Studies showed that 25% of job listings were never filled from any source. Employers' reasons included budget cuts, inability to find someone who filled the job specifications, candidates' high asking salaries, reorganizations and a belated realization that existing staff could do the job.
In 1976, 573 firms were members. Annual membership fees were mainly $100 to $500, depending upon company size and usage. Placement fees were kept low in order to attract job listings: the greater the choice of vacancies, the more likely candidates were to stay in Canada. Placement fees for member companies were 4% to 5% of the placement's annual income; commercial employment agencies charged 20% to 30%.
Over 17,000 engineers and scientists emigrated from Canada to the United States between 1960 and 1979. The number of engineers emigrating declined from 1,209 in 1967 to only 289 in 1977, and the number of chemists emigrating dropped from 156 to 58 during the same period. However, engineers and scientists emigrating increased from 727 in 1982 to 1,433 in 1985.
Active job listings reached 4,328 in June 1981, an astonishing figure for such a clearing house. Orders plummeted when Prime Minister Pierre Trudeau's highly controversial National Energy Program took effect. About half the council's staff was laid off.
Nevertheless, between 1928 and 1988, over 46,000 men and women had received job offers from about 1,700 of the council's employer clients.
Frequent dramatic swings in the job market caused the council to build a financial reserve equal to twice its annual operating expenses. The reserve was over three times expenses in December 1991, but the council was declared bankrupt in September 1994.
Evaluation of results
From 1928 to 1939, job vacancies were mainly advertised locally so job hunters had difficulty learning of distant jobs. The Maritimes and West had so little industry that their substantial engineering graduating classes had to seek positions elsewhere. Employers seldom sought professionals through the Federal employment service while universities had tiny or non-existent placement services. This lack of job information made the council's numerous industrial contacts especially important to job hunters. The exodus of technically trained Canadians is said to have dropped from 27% of graduating classes in 1927 to under 10% in 1951 and 5% in 1967.
Any evaluation of the later years is difficult. The number of job vacancies and job hunters both increased. But often supply and demand were out of sync, encouraging emigration. Universities devoted more resources to placing their graduates, but often gave little attention to experienced graduates. Commercial employment agencies expanded, but few lasted five years because of the erratic job market. Eventually, one and then another national newspaper spread news of distant vacancies. Although graduates had better information than ever, the council was still busy. Despite this, 2,500 professional men moved to the United States in 1950 alone. It is impossible to judge the council's impact on the "brain drain" since then.
The Ministry of State, Science and Technology asked the council to study the feasibility of a National Register of Canadians in research-oriented occupations who are working or studying out of the country. The study found that 65% of employers contacted had a strong or moderate interest in a register. It was estimated that only one or two per cent of candidates would find jobs through the register. Neither the register nor a free handbook on job hunting in Canada would get at the reasons why many Canadians do not return: a perceived lack of opportunities in their specialty and lack of research support in Canada.
The study noted that efforts by the Association of Medical Colleges of Canada and the Association of Universities and Colleges of Canada to recruit Canadians in the U.S. had failed. In 1986 twenty British firms advertised for British-trained engineers in North America. The ads produced 6,500 replies and about 1,800 job offers. Only 89 offers were accepted at what was considered an uneconomical cost.
References
16. Cuddihy, Basil Robert. "How to Give Phased-out Managers a New Chance", Harvard Business Review, Vol. 54, No. 4, Jul.–Aug. 1974. Neither the Technical Service Council nor the other consultant used is named.
22. Technical Service Council advertisement in Ontario Technologist and about six other Canadian technical publications. Summer, 1981.
Organizations disestablished in 1994
Human migration
Non-profit organizations based in Toronto
Brain drain
Engineering organizations
Organizations established in 1927
Emigration policy | Technical Service Council | Engineering | 2,725 |
31,837,770 | https://en.wikipedia.org/wiki/Urban%20resilience | Urban resilience is defined as the "measurable ability of any urban system, with its inhabitants, to maintain continuity through all shocks and stresses, while positively adapting and transforming towards sustainability". It has conventionally been used to describe the change in structure and function of urban areas. More broadly, urban resilience is the ability of a city to survive, adapt, and grow in the face of chronic stresses and acute shocks: the capacity of a city's systems, institutions, businesses, communities, and individuals to withstand and respond to them.
A resilient city is one that assesses, plans and acts to prepare for and respond to hazards, regardless of whether they are natural or human-made, sudden or slow-onset, expected or unexpected. Resilient cities are better positioned to protect and enhance people's lives, secure development gains, and drive positive change.
History
According to urban historian Roger W. Lotchin, World War II had a profound environmental impact on urban areas in the USA. By 1945, Pittsburgh and other industrial cities experienced levels of air pollution comparable to the Dust Bowl. The environmental impact of World War II turned urban areas around the world into shock cities. Examples of impacted cities include Hiroshima, Chongqing, Stalingrad, and Dresden. Environmental history first emerged as an academic research topic in the 1970s, focusing initially on rural areas. Pioneers of urban environmental history include Martin Melosi, Christine Rosen, Joel A. Tarr, Peter Brimblecombe, Bill Luckin, and Christopher Hamlin.
In recent years, urban resilience concerns in the urban planning of cities have become more visible. Social scientists have taken an increased interest in ecological resilience, because the links between social-ecological systems are being examined. Urban resilience is no longer the preserve of academics, and urban policy groups around the globe are putting forward proposals to enhance the urban resilience of cities. The definition of urban resilience varies, but is no longer limited to the speed at which an urban system recovers after a shock.
Academic research focus
Academic discussion of urban resilience has focused primarily on three threats: climate change, natural disasters, and terrorism. Accordingly, resilience strategies have tended to be conceived of in terms of counter-terrorism, other disasters (earthquakes, wildfires, tsunamis, coastal flooding, solar flares, etc.), and infrastructure adoption of sustainable energy.
More recently, there has been increased attention to the evolution of urban resilience and the capability of urban systems to adapt to changing conditions. This branch of resilience theory builds on the notion of cities as highly complex adaptive systems. As a result, academic discussions of urban planning include plans informed by network science, involving less interference in the functioning of cities. Network science provides a way of linking city size to the forms of networks that are likely to enable cities to function. These perspectives can provide further insights into the potential effectiveness of various urban policies. This requires a better understanding of the types of practices and tools that contribute to building urban resilience. Genealogical approaches explore the evolution of these practices over time, including the values and power relations underpinning them.
Investment decisions
Building resilience in cities relies on making investment decisions that prioritize spending on activities that offer alternatives that can perform well in different scenarios. Such decisions need to take into account future risks and uncertainties as risk can never be fully eliminated; emergency and disaster planning is crucial. Improvements in disaster risk management, for example, offer practical opportunities for enhancing resilience.
Since 2007, more than half of the world's human population has lived in cities, and urbanization is projected to rise to 80% by 2050. The growing urbanization of the past century has been associated with a considerable increase in urban sprawl. Resilience efforts address not only how individuals, communities and businesses cope with multiple shocks and stresses, but also how they exploit opportunities for transformational development.
One way that national and local governments address disaster risk in urban areas, often supported by international funding agencies, is to consider resettlement. This can be preventative, or occur after a disaster. While resettlement reduces people's exposure to hazards, it can lead to other problems, which can leave people more vulnerable or worse off than they were before. Resettlement needs to be understood as part of long-term sustainable development, not just as a means for disaster risk reduction.
Sustainable Development Goal 11
In September 2015, world leaders adopted the 17 Sustainable Development Goals (SDGs) as part of the 2030 Agenda for Sustainable Development. The goals, which build on and replace the Millennium Development Goals, officially came into force on 1 January 2016 and are expected to be achieved within the next 15 years. While the SDGs are not legally binding, governments are expected to take ownership and establish national frameworks for their achievement. Countries also have the primary responsibility for follow-up and review of progress based on accessible, timely, and high quality data. National reviews of regional progress will provide information on global progress of the initiative.
UN-Habitat's city resilience profiling tool
As the UN Agency for Human Settlements, UN-Habitat is working to support local governments and their stakeholders in building urban resilience through the City Resilience Profiling Tool (CRPT). When applied, UN-Habitat's holistic approach to increasing resiliency can improve local government's ability to ensure the well being of citizens, protect development gains, and maintain functionality in the face of hazards. UN-Habitat supports cities to maximize the impact of CRPT implementation. The CRPT follows various stages, including the following:
Getting started: Local governments and UN-Habitat connect to evaluate the needs, opportunities and context of the city and evaluate the possibility of implementing the tool in their city. They consider the stakeholders that need to be involved in implementation, including civil society organizations, national governments, and the private sector.
Engagement: By signing an agreement with a UN agency, the local government is better able to work with the necessary stakeholders to assess risk and build in resilience across the city.
Diagnosis: The CRPT provides a framework for cities to collect the right data about the city to better evaluate their resilience and identify potential vulnerabilities in their urban system. Diagnosis considers all elements of the urban system, including potential hazards and stakeholders. Effective action requires understanding of the entire urban system.
Resilience Actions: The main output of the CRPT is a unique Resilience Action Plan (RAP) for each engaged city. The RAP sets out short-, medium- and long-term strategies based on the diagnosis. Actions are prioritized, assigned inter-departmentally, and integrated into existing government policies and plans. The process is iterative; after resilience actions have been implemented, local governments use the tool to monitor impact and identify any necessary next steps.
Taking it further: Resilience actions require the buy-in of all stakeholders and, in many cases, additional funding. However, with the detailed diagnosis resulting from the tool, local governments can leverage the support of national governments, donors, and other international organizations to work towards sustainable urban development.
To date, this approach has been adopted in Barcelona (Spain), Asuncion (Paraguay), Maputo (Mozambique), Port Vila (Vanuatu), Bristol (United Kingdom), Lisbon (Portugal), Yakutsk (Russia), and Dakar (Senegal). The biennial publication, Trends in Urban Resilience, is tracking the most recent efforts to build urban resilience as well as the actors behind these actions and a number of case studies.
Medellin Collaboration for Urban Resilience
The Medellin Collaboration for Urban Resilience (MCUR) was launched in 2014 at the 7th session of the World Urban Forum in Medellín, Colombia. As a pioneering partnership platform, the MCUR gathers the most prominent actors committed to building resilience globally, including the United Nations Office for Disaster Risk Reduction (UNDRR), The World Bank Group, Global Facility for Disaster Reduction and Recovery, Inter-American Development Bank, Rockefeller Foundation, 100 Resilient Cities, C40, ICLEI and Cities Alliance, and it is chaired by UN-Habitat.
MCUR aims to jointly collaborate on strengthening the resilience of all cities and human settlements around the world by supporting local, regional and national governments through provision of knowledge and research, facilitating access to local-level finance, and raising global awareness on urban resilience through policy advocacy and adaptation diplomacy efforts. Its work is devoted to achieving the main international development agendas set out in the Sustainable Development Goals, the New Urban Agenda, the Paris Agreement on Climate Change and the Sendai Framework for Disaster Risk Reduction.
The MCUR helps local governments and municipal professionals understand the primary utility of the vast array of tools and diagnostics designed to assess, measure, monitor and improve city-level resilience. For example, some tools are intended as rapid assessments to establish a general understanding and baseline of a city's resilience and can be self-deployed, while others are intended as a means to identify and prioritise areas for investment. The Collaboration has produced a guidebook to illustrate how cities are responding to current and future challenges by thinking strategically about design, planning, and management for building resilience. Currently, it is working in a collaborative model in six pilot cities: Accra, Bogotá, Jakarta, Maputo, Mexico City and New York City.
100 Resilient cities and the City Resilience Index (CRI)
The Rockefeller Foundation rates 100 cities for resilience. The Foundation states that: "Urban resilience is the capacity of individuals, communities, institutions, businesses, and systems within a city to survive, adapt, and grow no matter what kinds of chronic stresses and acute shocks they experience."
A central program contributing to the achievement of SDG 11 is the Rockefeller Foundation's 100 Resilient Cities. In December 2013, The Rockefeller Foundation launched the 100 Resilient Cities initiative, which is dedicated to promoting urban resilience, defined as "the capacity of individuals, communities, institutions, businesses, and systems within a city to survive, adapt, and grow no matter what kinds of chronic stresses and acute shocks they experience".
The professional services firm Arup has helped the Rockefeller Foundation develop the City Resilience Index (CRI) based on extensive stakeholder consultation across a range of cities globally. The CRI is intended as a planning and decision-making tool to help guide urban investments toward results that facilitate sustainable urban growth and the well-being of citizens. The hope is that city officials will utilize the tool to identify areas of improvement, systemic weaknesses and opportunities for mitigating risk. Its generalizable format also allows cities to learn from each other.
The CRI is a holistic articulation of urban resilience premised on the finding that there are 12 universal factors or drivers that contribute to city resilience. The factors vary in importance and are organized into four core dimensions of urban resilience: health and well-being; economy and society; infrastructure and environment; and leadership and strategy.
A total of 100 cities across six continents have signed up for the Rockefeller Center's urban resilience challenge. All 100 cities have developed individual City Resilience Strategies with technical support from a Chief Resilience Officer (CRO). The CRO ideally reports directly to the city's chief executive and helps coordinate all the resilience efforts in a single city.
Medellin in Colombia qualified for the urban resilience challenge in 2013. In 2016, it won the Lee Kuan Yew World City Prize.
Urban governance
A core factor enabling progress on all dimensions of urban resilience is urban governance. Sustainable, resilient and inclusive cities are often the product of good governance, particularly including effective leadership, inclusive citizen participation, and efficient financing. Public officials also require access to robust data, enabling evidence-based decision making. Open data improves the ability of local governments to share information with citizens, deliver services, and monitor performance. Increased public access to information facilitates more direct citizen involvement in decision-making.
Digital technologies
As part of their resilience strategies, city governments are increasingly relying on digital technology as part of a city's infrastructure and service delivery systems. On the one hand, reliance on digital technologies and electronic service delivery has made cities more vulnerable to phone hacking and cyber-attacks. On the other hand, information technologies have often had a positive impact by supporting innovation and promoting efficiencies in urban infrastructure, thus leading to lower-cost city services. The deployment of new technologies in the initial construction of infrastructure have in some cases even allowed urban economies to leapfrog stages of development. An unintended outcome of the growing digitization of cities is the emergence of a digital divide, which can exacerbate inequality between well-connected affluent neighborhoods and business districts and under-serviced and under-connected low-income neighborhoods. In response, a number of cities have introduced digital inclusion programs to ensure that all citizens have the necessary tools to thrive in an increasingly digitized world.
Climate change
The urban impacts of climate change vary widely geographically and among levels of development. A recent study of 616 cities (home to 1.7 billion people, with a combined GDP of US$35 trillion, half of the world's total economic output), found that floods endanger more city residents than any other natural peril, followed by earthquakes and storms. Below is an attempt to define and discuss the challenges of heat waves, droughts and flooding. Resilience-boosting strategies will be introduced and outlined.
Heat waves and droughts
Heat waves are becoming increasingly prevalent as the global climate changes. The 1980 United States heat wave and drought killed 10,000 people. In 1988 a similar heat wave and drought killed 17,000 American citizens. In August 2003 Europe saw record-breaking summer heat, with average temperatures persistently rising above 32°C. In the UK, nearly 3,000 deaths were attributed to the heat wave during this period, with an increase of 42% in London alone, and the heat wave claimed more than 40,000 lives across Europe. Research indicates that by 2040 over 50% of summers will be warmer than that of 2003, and by 2100 those same summer temperatures will be considered cool. The 2010 northern-hemisphere summer heat wave was also disastrous, with nearly 5,000 deaths occurring in Moscow. In addition to deaths, these heat waves cause other significant problems: extended periods of heat and drought lead to widespread crop losses, spikes in electricity demand, forest fires, air pollution and reduced biodiversity in vital land and marine ecosystems. Agricultural losses from heat and drought might not occur directly within the urban area, but they certainly affect the lives of urban dwellers: crop shortages can lead to spikes in food prices, food scarcity, civic unrest and even starvation in extreme cases. Direct fatalities from heat waves and droughts tend to be concentrated in urban areas, not just because of increased population density, but because of social factors and the urban heat island effect.
Urban heat islands
Urban heat island (UHI) refers to the presence of an inner-city micro-climate in which temperatures are higher than those in surrounding rural areas. Recent studies have shown that summer daytime temperatures can be up to 10°C hotter in a city center than in rural areas, and 5–6°C warmer at night. The causes of UHI are no mystery, and are mostly due to simple energy balances and geometry. The building materials commonly present in urban areas (concrete and asphalt) absorb and store heat much more effectively than the surrounding natural environment. The black color of asphalt surfaces (roads, parking lots and highways) absorbs significantly more electromagnetic radiation, further encouraging the rapid and effective capture and storage of heat throughout the day. Geometry comes into play as well, as tall buildings provide large surfaces that both absorb and reflect sunlight and its heat energy onto other absorbent surfaces. These tall buildings also block the wind, which limits convective cooling. The large size of the buildings also blocks surface heat from naturally radiating back into the cool sky at night. These factors, combined with the heat generated by vehicles, air conditioners and industry, ensure that cities create, absorb and hold heat very effectively.
Social factors for heat vulnerability
The physical causes of heat waves and droughts and the exacerbation of the UHI effect are only part of the equation in terms of fatalities; social factors play a role as well. Statistically, senior citizens represent the majority of heat (and cold) related deaths within urban areas and this is often due to social isolation. In rural areas, seniors are more likely to live with family or in care homes, whereas in cities they are often concentrated in subsidized apartment buildings and in many cases have little to no contact with the outside world. Like other urban dwellers with little or no income, most urban seniors are unlikely to own an air conditioner. This combination of factors leads to thousands of tragic deaths every season, and the incidence is increasing each year.
Adapting for heat and drought resilience
Greening, reflecting and whitening urban spaces
Greening urban spaces is among the most frequently mentioned strategies to address heat effects. The idea is to increase the amount of natural cover within the city. This cover can be made up of grasses, bushes, trees, vines, water, rock gardens; any natural material. Covering as much surface as possible with greenery will both reduce the total quantity of thermally absorbent artificial material, and by creating shade, will reduce the amount of light and heat that reaches the concrete and asphalt that cannot be replaced by greenery.
Trees are among the most effective greening tool within urban environments because of their coverage/footprint ratio. Trees require a very small physical area for planting, but when mature, they provide a much larger coverage area. Trees absorb solar energy for photosynthesis (improving air quality and mitigating global warming), reducing the amount of energy being trapped and held within artificial surfaces, and also cast much-needed shade on the city and its inhabitants. Shade itself does not lower the ambient air temperature, but it greatly reduces the perceived temperature and comfort of those seeking its refuge.
An increasingly popular method of mitigating the urban heat island effect is to increase the albedo (light reflectiveness) of urban surfaces. This can be done by using reflective materials or white and light-colored paints where appropriate. Glazing can also be added to windows to reduce the amount of heat that buildings or roofs generate and store.
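The effect of raising albedo can be estimated with a one-line energy balance, $Q_{\text{abs}} = (1-\alpha)I$. The albedo values and the insolation figure in the Python sketch below are typical textbook numbers assumed here for illustration, not figures from the source:

```python
# Absorbed shortwave flux for surfaces of different albedo (illustrative values).
INSOLATION = 800.0  # W/m^2, an assumed clear-sky midday value

surfaces = [("fresh asphalt", 0.05), ("concrete", 0.30), ("white coating", 0.80)]
for name, albedo in surfaces:
    q_abs = (1.0 - albedo) * INSOLATION  # Q_abs = (1 - albedo) * insolation
    print(f"{name}: {q_abs:.0f} W/m^2 absorbed")
```

Whitening a dark surface can thus cut absorbed solar heating by a factor of several.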
Green roofs also help reduce the urban heat island effect and improve the resilience to urban flooding. Restoring ponds and lakes and other types of urban open water can also help as shown by Beijing, China's "Dragon-shaped Lake". Depaving urban footpaths and roads has also been found to be effective in urban flood control, and may be a more cost-efficient approach.
Social strategies
There are various strategies to increase the resilience of those most vulnerable to urban heat waves, primarily socially isolated seniors, but also young children (especially those facing abject poverty or living in informal housing), people with underlying health problems, the infirm or disabled, and the homeless. Accurate and early prediction of heat waves is of fundamental importance, as it gives time for the government to issue extreme heat alerts. Urban areas must prepare and be ready to implement heat-wave emergency response initiatives. Seasonal campaigns aimed to educate the public on the risks associated with heat waves will help prepare the broad community, but in response to impending heat events more direct action is required.
Local government must quickly communicate with the groups and institutions that work with heat-vulnerable populations. Cooling centers should be opened in libraries, community centers and government buildings. These centers ensure free access to air conditioning and water. In partnership with government and non-government social services, paramedics, police, firefighters, nurses and volunteers; the above-mentioned groups working with vulnerable populations should carry out regular door-to-door visits during these extreme heat scenarios. These visits should provide risk assessment, advice, bottled water (for areas without potable tap water) and the offer of free transportation to local cooling centers.
Food and water supplies
Heat waves and droughts can cause massive damage to agricultural areas vital to providing food staples to urban populations. Reservoirs and aquifers quickly dry up due to increased demand on water for drinking, industrial and agricultural purposes. The result can be food shortages and price spikes, and increasingly, shortages of drinking water, as observed with increasing severity seasonally in China and throughout most of the developing world. From an agricultural standpoint, farmers can be encouraged to plant more heat- and drought-resistant crops. Agricultural practices can also be modified for higher levels of hydrological efficiency. Reservoirs should be expanded, and new reservoirs and water towers should be constructed in areas facing critical shortages. Grander schemes of damming and redirecting rivers should also be considered where possible. For saltwater coastal cities, desalination plants provide a possible solution to water shortages. Infrastructure improvements may also enhance resilience, as in many areas aging pipelines result in leakage and possible contamination of drinking water. In Kenya's major cities, Nairobi and Mombasa, between 40 and 50% of drinking water is lost through leakage. In such cases, replacements and repairs are clearly needed.
Flooding
Flooding, either from weather events, rising sea levels or infrastructure failures are a major cause of death, disease and economic losses throughout the world. Climate change and rapidly expanding urban settlements are two factors that increase occurrence and severity of urban flooding, especially in the developing world. Storm surges can affect coastal cities and are caused by low pressure weather systems, like cyclones and hurricanes. Flash floods and river floods can affect any city within a floodplain or with inadequate drainage infrastructure. These can be caused by large quantities of rain or heavy rapid snow melt. With all forms of flooding, cities are more vulnerable because of the large quantity of paved and concrete surfaces; these impermeable surfaces cause massive amounts of runoff that can quickly overwhelm the limited infrastructure of storm drains, flood canals and intentional floodplains. Many cities in the developing world simply have no infrastructure whatsoever to redirect floodwaters. Around the world, floods kill thousands of people every year and are responsible for billions of dollars in damages and economic losses. In cities with poor or absent drainage infrastructure, flooding can also lead to the contamination of drinking water sources (aquifers, wells, inland waterways) with salt water, chemical pollution, and most frequently, viral and bacterial contaminants. Flooding, much like heat waves and droughts, can also wreak havoc on agricultural areas, quickly destroying large amounts of crops.
Flood flow in urban environment
The flood flow in urbanized areas constitutes a hazard to population and infrastructure. Recent catastrophes include the inundations of Vaison-la-Romaine (France) in 1992, Nîmes (France) in 1988, New Orleans (USA) in 2005, and the flooding in Rockhampton, Bundaberg, and Brisbane in Queensland (Australia) during the summer of 2010–2011. Flood flows in urban environments have been studied only relatively recently, despite many centuries of flood events. Several studies have looked into the flow patterns and redistribution in streets during storm events and the implications for flood modelling.
Some research has considered criteria for the safe evacuation of individuals in flooded areas. However, field measurements during the 2010–2011 Queensland floods showed that any criterion based solely on flow velocity, water depth or specific momentum cannot account for all flood hazards, since such criteria ignore the risks associated with large debris entrained by the flow.
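For reference, such single-variable safety criteria are commonly expressed as a threshold on the depth–velocity product (a standard formulation in the flood-hazard literature, not spelled out in this article; the threshold range below is illustrative):

\[ h \, v \le C \]

where h is the local water depth in metres, v the flow velocity in m/s, and C a constant often quoted in the range of roughly 0.4–0.6 m²/s for adult pedestrians. The Queensland measurements cited above suggest that even flows satisfying such a criterion can remain dangerous when large debris is entrained.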
Adapting for flood resilience
Urban greening
Replacing as much non-porous surface as possible with greenery will allow the ground and plants to help absorb excess water. Green roofs are gaining popularity; they vary from very thin layers of soil or rockwool supporting a variety of low or no-maintenance mosses or sedum species to large, deep, intensive roof gardens capable of supporting large plants and trees but requiring regular maintenance and significant structural support. The deeper the soil, the more rainwater it can absorb and therefore the more potential floodwater it can prevent from reaching the ground.
One of the best strategies, if possible, is to simply create enough space for the excess water by expanding areas of parkland in or adjacent to the zone where flooding is most likely to occur. Excess water is diverted into these areas when necessary, as in Cardiff, Wales, around the new Millennium Stadium, and at the main Olympic site in Beijing, China.
Floodplain clearance is another greening strategy that involves removing structures and pavement built on floodplains and returning the area to its natural habitat which is capable of absorbing massive quantities of water that otherwise would have flooded the built-up urban area.
Flood-water control
Levees and other flood barriers are indispensable for cities on floodplains or along rivers and coasts. In areas with lower financial and engineering capital, there are cheaper and simpler options for flood barriers. UK engineers are currently conducting field tests of a new technology called SELOC (Self-Erecting Low-Cost Barrier). The barrier itself lies flat on the ground, and as the water rises, the SELOC floats up, its top edge rising with the water level; a restraint holds the barrier in the vertical position. This simple, inexpensive flood barrier has great potential for increasing urban resilience to flood events, and its low cost and simple, fool-proof design show significant promise for developing nations. The creation or expansion of flood canals and drainage basins can help direct excess water away from critical areas, and the use of innovative porous paving materials on city streets and car parks allows excess water to be absorbed and filtered.
During the January 2011 flood of the Brisbane River (Australia), some unique field measurements about the peak of the flood showed very substantial sediment fluxes in the Brisbane River flood plain, consistent with the murky appearance of floodwaters.
Structural resilience
In most developed nations, all new developments are assessed for flood risks. The aim is to ensure flood risk is taken into account at all stages of the planning process to avoid inappropriate development in areas of high risk. When development is required in areas of high risk, structures should be built to flood-resistant standards, and living or working areas should be raised well above worst-case-scenario flood levels. For existing structures in high-risk areas, funding should be allocated to remediation, for example raising electrical wiring and sockets so that any water entering the home cannot reach them. Other solutions are to raise structures to appropriate heights or to make them floating; as a last resort, consideration should be given to relocating or rebuilding structures on higher ground. A house in Mexico Beach, Florida, which survived Hurricane Michael is an example of a house built to withstand tidal surge.
The pre-Incan Uru people of Lake Titicaca in Peru have lived on floating islands made of reeds for hundreds of years. The practice began as an innovative form of protection from competition for land by various groups, and it continues to support the Uru homeland. The manual technique is used to build homes resting on hand-made islands all from simple reeds from the totora plant. Similarly, in the southern wetlands of Iraq, the Marsh Arabs (Arab al-Ahwār) have lived for centuries on floating islands and in arched buildings all constructed exclusively from local qasab reeds. Without any nails, wood, or glass, buildings are assembled by hand as quickly as within a day; such homes can also be disassembled in a day, transported, and reassembled.
Emergency response
As with all disasters, flooding requires a specific set of disaster response plans. Various levels of contingency planning should be established, from basic medical and selective evacuation provisions involving local emergency responders all the way up to full military disaster relief plans involving air-based evacuations, search and rescue teams, and provisions for relocation of entire urban populations. Clear lines of responsibility and chains of command must be laid out, and tiered priority response levels should be established to address the immediate needs of the most vulnerable citizens first. Sufficient emergency funding should be set aside for post-flooding repair and reconstruction.
World education and research relating to urban resilience
The United States
Urban resilience as an educational topic in the USA has experienced an unprecedented level of growth, due in large part to a series of natural disasters including the 2004 Indian Ocean earthquake and tsunami, Hurricane Katrina in 2005, the 2011 Tohoku earthquake and tsunami, and Hurricane Sandy in 2012. Two of the more well-recognized programs are Harvard Graduate School of Design's Master's program in Risk and Resilience and Tulane University's Disaster Resilience Leadership Academy. Several related workshops are also offered through the U.S. Federal Emergency Management Agency and the Department of Homeland Security.
China
China's resilient cities research started relatively late, involving theories, scholars, and disciplines mostly from the United States. However, with the establishment of China's Ministry of Emergency Management and the country's deepening awareness of and emphasis on earthquake prevention and mitigation, related research and institutions have developed rapidly. A number of universities, including Zhejiang University's Ren Center for Resilience, have made significant contributions to the promotion and application of resilient cities concepts in China.
Challenges with further mainstreaming of urban resilience approaches
There are at least three key challenges to further mainstreaming innovative approaches to urban resilience. First, urban development systems have tended to see urban resilience schemes as public projects entailing a significant burden on the state to finance, plan and manage them. This is a classic problem of externalities, in which private developers are too often not required to bear the costs of remediating the consequences of their activities. Second, urban planning regulations typically do not require urban resilience measures in the same way they require fire detection and suppression or road access. Third, too many professionals in urban design, engineering and the environmental sciences lack awareness of innovative approaches to resilience and so cannot practice them.
See also
Co-benefits of climate change mitigation
Energy security
New Urbanism
Sustainable urbanism
Urban vitality
References
Urban planning
Disaster management | Urban resilience | Engineering | 6,196 |
33,584,199 | https://en.wikipedia.org/wiki/Comet%20Swift%E2%80%93Tuttle | Comet Swift–Tuttle (formally designated 109P/Swift–Tuttle) is a large periodic comet with a 1995 (osculating) orbital period of 133 years that is in a 1:11 orbital resonance with Jupiter. It fits the classical definition of a Halley-type comet, which has an orbital period between 20 and 200 years. The comet was independently discovered by Lewis Swift on 16 July 1862 and by Horace Parnell Tuttle on 19 July 1862.
Its nucleus is in diameter. Swift–Tuttle is the parent body of the Perseid meteor shower, perhaps the best known shower and among the most reliable in performance.
The comet made a return appearance in 1992, when it was rediscovered by Japanese astronomer Tsuruhiko Kiuchi and became visible with binoculars. It was last observed in April 1995 when it was from the Sun. In 2126, it will be a bright naked-eye comet reaching an apparent magnitude of about 0.7.
Historical observations
Chinese records indicate that, in 188, the comet reached apparent magnitude 0.1. Observations were also recorded in 69 BCE, and it was probably visible to the naked eye in 322 BCE.
In the discovery year of 1862, the comet was as bright as Polaris.
After the 1862 observations, it was incorrectly theorized that the comet would return between 1979 and 1983. However, it had been suggested in 1902 that this was the same comet as that observed by Ignatius Kegler on 3 July 1737 and on this basis Brian Marsden calculated correctly that it would return in 1992.
Orbit
The comet's perihelion is just under that of Earth, while its aphelion is just over that of Pluto. An unusual aspect of its orbit is that it was recently captured into a 1:11 orbital resonance with Jupiter; it completes one orbit for every 11 of Jupiter. It was the first comet in a retrograde orbit to be found in a resonance. In principle this would mean that its proper long-term average period would be 130.48 years, as it librates about the resonance. Over the short term, between epochs 1737 and 2126 the orbital period varies between 128 and 136 years. However, it only entered this resonance about 1000 years ago, and will probably exit the resonance in several thousand years.
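A quick consistency check of the quoted mean period (Jupiter's sidereal orbital period of about 11.862 years is a standard value, not stated in this article): completing one orbit for every 11 Jovian orbits implies

\[ P \approx 11 \times 11.862\ \text{yr} \approx 130.5\ \text{yr}, \]

in agreement with the 130.48-year proper period given above.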
Threat to Earth
The comet is on an orbit that makes repeated close approaches to the Earth–Moon system, and has an Earth MOID (minimum orbit intersection distance) of .
Upon its September 1992 rediscovery, the comet's date of perihelion passage was off from the 1973 prediction by 17 days. It was then noticed that if its next perihelion passage (July 2126) was also off by another 15 days (July 26), the comet could impact the Earth or the Moon on 14 August 2126 (IAUC 5636: 1992t).
Given the size of the nucleus of Swift–Tuttle, this was of some concern. This prompted amateur astronomer and writer Gary W. Kronk to search for previous apparitions of this comet. He found the comet was most likely observed by the Chinese at least twice, first in 69 BCE and later in 188 CE; these two sightings were quickly confirmed by Brian Marsden and added to the list of perihelion passages at the Minor Planet Center. Around 25 July 188 CE the comet passed about from Earth.
This information and subsequent observations have led to recalculation of its orbit, which indicates the comet's orbit is sufficiently stable that there is absolutely no threat over the next two thousand years. It is now known that the comet will pass from Earth on August 5, 2126, and within from Earth on August 24, 2261.
A close encounter with Earth is predicted for the comet's return to the inner Solar System in the year 3044, with the closest approach estimated to be . Another close encounter is predicted for the year 4479, around September 15; the close approach is estimated to be less than 0.05 AU, with a probability of impact of 1 in a million. Subsequent to 4479, the orbital evolution of the comet is more difficult to predict; the probability of Earth impact per orbit is estimated as 2 × 10⁻⁸ (0.000002%).
Comet Swift–Tuttle is by far the largest near-Earth object (Apollo or Aten asteroid or short-period comet) to cross Earth's orbit and make repeated close approaches to Earth. With a relative velocity of 60 km/s, an Earth impact would have an estimated energy of ~27 times that of the Cretaceous–Paleogene impactor. The comet has been described as "the single most dangerous object known to humanity". In 1996, the long-term possibility of Comet Swift–Tuttle impacting Earth was compared to 433 Eros and about 3000 other kilometer-sized objects of concern.
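A rough order-of-magnitude sketch of the quoted energy comparison (the nucleus mass and the Cretaceous–Paleogene energy used here are illustrative assumptions, not values from this article): taking a hypothetical nucleus mass of m ≈ 5 × 10¹⁵ kg and the 60 km/s encounter velocity,

\[ E = \tfrac{1}{2} m v^{2} \approx \tfrac{1}{2}\,(5 \times 10^{15}\ \text{kg})\,(6 \times 10^{4}\ \text{m/s})^{2} \approx 9 \times 10^{24}\ \text{J}, \]

which is a few tens of times a commonly cited estimate of roughly 4 × 10²³ J for the Cretaceous–Paleogene impact, consistent in order of magnitude with the ~27× figure above.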
See also
Lists of comets
List of interstellar comets
List of comets by type
List of non-periodic comets
List of periodic comets
Notes
References
Bibliography
External links
109P/Swift-Tuttle at the Minor Planet Center's Database
Periodic comets
109P
109P
0109
109P
1860s in science
18620716
Recovered astronomical objects
109P | Comet Swift–Tuttle | Astronomy | 1,060 |
391,830 | https://en.wikipedia.org/wiki/Groklaw | Groklaw was a website that covered legal news of interest to the free and open source software community. Started as a law blog on May 16, 2003, by paralegal Pamela Jones ("PJ"), it covered issues such as the SCO-Linux lawsuits, the EU antitrust case against Microsoft, and the standardization of Office Open XML.
Jones described Groklaw as "a place where lawyers and geeks could explain things to each other and work together, so they'd understand each other's work better".
Its name derives from "grok", roughly meaning "to understand completely", which had previously entered geek slang.
Other topics covered included software patents, DMCA, the actions of the RIAA against alleged illegal file sharers, and actions against free and open software such as Android and Linux.
Origins
According to a 2003 interview with Jones, the blog was started to cover legal news and to explain it to the tech community.
The first article was entitled "The Grokster Decision – Ode To Thomas Jefferson". It was about the effect of P2P on the music industry, and the then-recent court decision in MGM Studios, Inc. v. Grokster, Ltd., by Judge Stephen Wilson in favor of the defendants. It also covered the earlier Napster decision, which had caused Napster to be shut down, and why the Grokster ruling was different. The article included a quote from Thomas Jefferson and references to David Boies, who was Napster's attorney.
The second post, on May 17, 2003, also covered legal issues – the SCO v. IBM lawsuit – entitled "SCO Falls Downstairs, Hitting its Head on Every Step". It criticized Caldera Systems for the way they were handling the suit outside of court, and included quotes from Bruce Perens, Richard Stallman, Steve Ballmer, and Linus Torvalds. It ended:
David Boies has agreed to represent SCO. I am trying to remind myself that our legal system is predicated on lawyers sometimes representing people they don't personally admire, and the system really does depend on someone being willing to take on unpopular clients. I know Boies doesn't use email, or at least he didn't the last time I checked. So maybe he doesn't quite get the tech ... ah, hang it all, there's no way around it: I feel bad he's chosen to represent them, especially after I posted an Ode singing his praises, and I hope he loses.
The blog soon became popular with the free software and open source communities and others, and attracted a community of volunteers and commenters. Its popularity caused it to outgrow Radio Userland, and on November 22, 2003, the standalone Groklaw website, hosted by ibiblio and running Geeklog software, was up and running.
Main focus
The main focus of Jones's writing became the Caldera Systems v. IBM litigation (Caldera Systems changed its name to The SCO Group during this time). Other issues were explored, including intellectual property and patent issues (for example, Microsoft IP claims against Linux, and the drafting of the GPL version 3). Groklaw was known for its contributors' ability to explain complex legal issues in simple terms and the research used in putting together articles. Members of the Groklaw community attended court hearings and interviewed movers and shakers in the software/IP world.
The site became a community effort. While Jones understood law, she was not a programmer. Many readers were techies, however, and when technical issues arose they provided relevant comments. This enabled Groklaw to solicit guest commentary on issues such as:
Linux Kernel coding practices
C Language programming
Operating systems programming
Operating systems history
Standards Organizations
Each of these issues appeared to have some application to the SCO v. IBM case, and most were revisited many times. Additional topics included later lawsuits by The SCO Group against Daimler Chrysler, Autozone, and Novell, the countersuit by Red Hat, and their implications and Microsoft's attempt to fast track OOXML as an International Organization for Standardization (ISO) standard.
Awards
Groklaw was cited by the attorneys for several firms in law journal articles. It also won awards:
2012 – ABA Journal Blawg 100
2010 – The Electronic Frontier Foundation (EFF) Pioneer Award
2009 – Top 200 Tech Blogs: The Datamation 2009 List "The famed Groklaw is still going strong, far past the SCO case that first brought the blog to prominence."
2008 – The Award for Projects of Social Benefit – The Free Software Foundation (FSF)
2007 – Knowledge Masters Award for Innovation – Knowledge Trust and the Louis Round Wilson Academy
2007 – Best FUD Fighter – Google-O'Reilly Open Source Awards
2005 – Best News Site – ConsortiumInfo.org – Pamela Jones/Groklaw: Best Community Site or Blog (Non-Profit)
2005 – Best Blogger of the Year – Dana Blankenhorn, Corante
2004 – Best Website of 2004 – The Inquirer
2004 – Best Independent Tech Blog – TechWeb Network: Readers Choice Award
2004 – Best Nontechnical or Community Website – Linux Journal: Editors' Choice Award
2003 – Best News Site – OSDir.com: Editor's Choice Winner
Editorial stance
Groklaw was the personal creation of Jones, and it published articles (both news and opinion) from a self-described pro-FOSS, anti-FUD perspective.
While articles meticulously followed SCO's litigation activities, they were accompanied by reader-submitted comments that were "overwhelmingly pro-Linux and anti-SCO."
Media controversy
Jones was widely respected by journalists and people inside the Linux community. Steven J. Vaughan-Nichols wrote, "Jones has made her reputation as a top legal IT reporter from her work detailing the defects with SCO's case against IBM and Linux. Indeed, it is no exaggeration to say that her work has contributed enormously to everyone's coverage of SCO's cases."
Despite the high regard of Jones' peer journalists and the Linux community (or possibly in part because of it), a number of prominent attacks against Groklaw and Jones occurred. These attacks were documented and addressed in detail on Groklaw and other websites, and also in court as part of the SCO litigation.
During the first week of May 2005, Maureen O'Gara, writing in Linux World, wrote an exposé claiming to unmask Jones. Two weeks before O'Gara's publication, McBride said that SCO was investigating Jones' identity. The article included alleged, but unverified, personal information about Jones, including a photo of Jones' supposed house and purported addresses and telephone numbers for Jones and her mother. After a flood of complaints to the publisher, lobbying of the site's advertisers, and claims of a denial-of-service attack launched against the Sys-Con domain, Linux Business News' publisher Sys-Con issued a public apology, and said they dropped O'Gara and her LinuxGram column. Despite this assertion, O'Gara remained with Sys-Con; as of 2009, she is the Virtualization News Desk editor at Sys-Con Media, who describe her as "[o]ne of the most respected technology reporters in the business" and has her work published in multiple magazines owned by Sys-Con Media.
SCO executives Darl McBride and Blake Stowell also denigrated Jones, and claimed that she worked for IBM. Jones denied this allegation, as did IBM in a court filing. During an SCO conference call on April 13, 2005, McBride said, "The reality is the web site is full of misinformation, including the people who are actually running it" when talking about Groklaw, adding also "What I would say is that it is not what it is purported to be". Later developments in the court cases showed that McBride's statements to the press regarding the SCO litigation had limited credibility; very few such statements were ever substantiated and most were shown to be false. For example, McBride claimed that SCO owned the copyrights to UNIX, and SCO filed suit to try to enforce these claims. The outcome went against McBride's claims. The jury found that SCO had not purchased these copyrights.
SCO appealed this ruling and lost. McBride also claimed to the press that there was a "mountain of code" misappropriated to create Linux. When SCO finally presented its evidence of infringement, which centered on nine lines of error-name and error-number similarities in the file errno.h, Judge Wells famously asked, "Is this all you've got?" Professor Randall Davis of MIT later demonstrated convincingly that no copyright-protectable elements of UNIX were present in the Linux source code.
Additional projects
Anticipating further legal threats against GNU, Linux, and the free software community, Jones launched Grokline, a Unix ownership timeline project, in May 2004. One notable result of the Groklaw/Grokline effort was obtaining and publishing the 1994 settlement in USL v. BSDi, which for over a decade had been sealed by the parties. The document was obtained through a California freedom of information statute (the University of California, being a publicly funded institution, is required by law to make almost all of its documents public), and the release of the settlement answered many questions as to the ownership of the Unix intellectual property.
The Linux documentation project Grokdoc wiki was started in 2004, with the stated goal "to create a useful manual on basic tasks that new users will find simple and clear and easy to follow."
Groklaw extensively covered patent problems with software and hardware, use of the DMCA against free software ideals, Open standards, digital rights management, GPLv3, and published The Daemon, the GNU & the Penguin, a series of articles by Peter Salus covering the history of Unix, Linux and the GNU project.
It covered Oracle v. Google, in which Oracle alleged that Google's Android platform infringed copyrights and patents related to Java.
Later history
In January 2009, Groklaw entered a second phase, focusing on consolidation and cleanup of the legal history collected on the site.
In April 2010, Groklaw was selected by the Library of Congress for its web archival project, in the category of Legal Blogs.
On April 9, 2011, Jones announced that Groklaw would stop publishing new articles on May 16, 2011, its 8th anniversary, as it had accomplished its original mission of revealing the truth behind the SCO lawsuits.
On May 16, 2011, Jones reaffirmed her desire to step down from writing daily articles and announced that the new editor would be Mark Webbink.
Subsequent to this decision, new patent and copyright based attacks on the Android operating system led to Jones resuming an editorial role, and along with Mark Webbink she moderated and edited the site.
On August 20, 2013, a final article appeared on Groklaw, explaining that due to pervasive government monitoring of the Internet, there could no longer be an expectation of the sort of privacy online that was necessary to collaborate on sensitive topics. Citing the closure of Lavabit earlier that month, Jones wrote "I can't do Groklaw without your input.... and there is now no private way, evidently, to collaborate." and "What I do know is it's not possible to be fully human if you are being surveilled 24/7... I hope that makes it clear why I can't continue. There is now no shield from forced exposure."
During 2020, the site was intermittently unavailable, though the home page and parts of the content remained accessible. As of October 2024, although the domain name is still registered, the landing page displays a GoDaddy domain-parking page.
See also
SCO-Linux controversies
Weblog
Darl McBride
Ralph Yarro III
Canopy Group
Software patents and free software
References
External links
Groklaw's defunct Radio UserLand page
Grokline
Grokdoc
Michael J. Jordan (July 31, 2003). Interview with Pamela Jones Linux Online.
Richard Hillesley (November 26, 2007). Q&A: Pamela Jones IT Pro.
Brenda Sandburg (September 9, 2005). Lawyers Flock to Mystery Web Site's Coverage of SCO v. IBM Suit Law.Com
Groklaw (2003) Open letter to SCO from Members of The Open Source/Free Software Community at Groklaw
An accompanying research document for the Open Letter
Works about computer law
Creative Commons-licensed websites
Free software websites
American legal websites
SCO–Linux disputes
Works about intellectual property law
Law blogs
Internet properties established in 2003
Internet properties disestablished in 2013 | Groklaw | Technology | 2,639 |
637,198 | https://en.wikipedia.org/wiki/Great%20Smoky%20Mountains | The Great Smoky Mountains (Equa Dutsusdu Dodalv) are a mountain range rising along the Tennessee–North Carolina border in the southeastern United States. They are a subrange of the Appalachian Mountains and form part of the Blue Ridge Physiographic Province. The range is sometimes called the Smoky Mountains, and the name is commonly shortened to the Smokies. The Smokies are best known as the home of the Great Smoky Mountains National Park, which protects most of the range. The park was established in 1934 and, with over 11 million visits per year, is the most visited national park in the United States.
The Smokies are part of an International Biosphere Reserve. The range is home to an estimated of old-growth forest, constituting the largest such stand east of the Mississippi River. The cove hardwood forests in the range's lower elevations are among the most diverse ecosystems in North America, and the Southern Appalachian spruce–fir forest that covers the upper elevations is the largest of its kind. The Smokies are home to the densest black bear population in the Eastern United States and the most diverse salamander population outside of the tropics.
Along with the biosphere reserve, the Great Smoky Mountains have been designated a UNESCO World Heritage Site. The U.S. National Park Service preserves and maintains 78 structures within the national park that were once part of the numerous small Appalachian communities scattered throughout the range's river valleys and coves. The park contains five historic districts and nine individual listings on the National Register of Historic Places.
The name "Smoky" comes from the natural fog that often hangs over the range and presents as large smoke plumes from a distance. This fog is caused by the vegetation emitting volatile organic compounds, chemicals that have a high vapor pressure and easily form vapors at normal temperature and pressure.
Geography
The Great Smoky Mountains stretch from the Pigeon River in the northeast to the Little Tennessee River in the southwest. The northwestern half of the range gives way to a series of elongate ridges known as the "Foothills," the outermost of which include Chilhowee Mountain and English Mountain. The range is roughly bounded on the south by the Tuckasegee River and to the southeast by Soco Creek and Jonathan Creek. The Smokies comprise parts of Blount County, Sevier County, and Cocke County in Tennessee and Swain County and Haywood County in North Carolina.
The sources of several rivers are located in the Smokies, including the Little Pigeon River, the Oconaluftee River, and Little River. Streams in the Smokies are part of the Tennessee River watershed and are thus entirely west of the Eastern Continental Divide. The largest stream entirely within the park is Abrams Creek, which rises in Cades Cove and empties into the Chilhowee Lake impoundment of the Little Tennessee River near Chilhowee Dam.
Other major streams include Hazel Creek and Eagle Creek in the southwest, Raven Fork near Oconaluftee, Cosby Creek near Cosby, and Roaring Fork near Gatlinburg. The Little Tennessee River passes through five impoundments along the range's southwestern boundary, namely Tellico Lake, Chilhowee Lake, Calderwood Lake, Cheoah Lake, and Fontana Lake.
Notable peaks
The highest point in the Smokies is Kuwohi, the tallest mountain in Tennessee, which rises to an elevation of . It has the range's highest topographic prominence at . Mount Le Conte is the tallest (i.e., from immediate base to summit) mountain in the range, rising from its base in Gatlinburg to its summit.
Climate
The Smokies rise prominently above the surrounding low terrain. For example, Mount Le Conte rises more than a mile (1.6 km) above its base. Because of their prominence, the Smokies receive a heavy annual amount of precipitation. Annual precipitation amounts range from , and snowfall in the winter can be heavy, especially on the higher slopes. For comparison, the surrounding terrain has annual precipitation of around . Flash flooding often occurs after heavy rain.
In 2004, the remnants of Hurricane Frances caused major flooding, landslides, and high winds, which were soon followed by Hurricane Ivan, making the situation worse. Other post-hurricanes, including Hurricane Hugo in 1989, have caused similar damage in the Smokies.
As for temperatures, the average temperature difference between the mountains (Newfound Gap, at around MSL) and the valleys (Park Headquarters, at around MSL) in the Great Smoky Mountains National Park is between 10 and 13 °F (6 and 7 °C) for highs and between 3 and 6 °F (2 and 3 °C) for lows. The difference in high temperatures is similar to the moist adiabatic lapse rate (3.3 degrees Fahrenheit per 1,000 ft), while the smaller difference in low temperatures is the result of frequent temperature inversions developing in the morning, especially during autumn.
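As an illustrative application of the quoted lapse rate (the elevation difference used here is a hypothetical figure, since the station elevations are not given above): for a gap-to-headquarters elevation difference of roughly 3,600 ft,

\[ \Delta T \approx 3.3\ ^{\circ}\text{F} \times \frac{3600\ \text{ft}}{1000\ \text{ft}} \approx 12\ ^{\circ}\text{F}, \]

which falls within the 10–13 °F range quoted for daily highs.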
Strong damaging winds are observed a few times each year around the Smoky Mountains during the cool season (October through April), as a result of a phenomenon known as mountain waves. These mountain wave winds are strongest in a narrow area along the foothills and can create extensive areas of fallen trees and roof damage, especially around Cades Cove in the Great Smoky Mountains National Park and the Camp Creek community. Strong winds created by mountain waves were a contributing factor in the devastating Gatlinburg fire on November 28, 2016, during the 2016 Great Smoky Mountains wildfires. Damaging winds can also be generated by strong thunderstorms, with tornadoes and strong thunderstorm complexes (also known as mesoscale convective systems) occasionally affecting the Smoky Mountains.
The climate of Kuwohi is hemiboreal (Köppen Dfb, Trewartha Dcb). As with much of the southern Blue Ridge, the area qualifies as part of the Appalachian Rainforest. On the other hand, Gatlinburg's climate is four-season subtropical (Cfa) under Köppen (oceanic Do under Trewartha), typical of Tennessee.
Geology
Most of the rocks in the Great Smoky Mountains consist of Late Precambrian rocks that are part of a formation known as the Ocoee Supergroup. The Ocoee Supergroup consists primarily of slightly metamorphosed sandstones, phyllites, schists, and slate. Early Precambrian rocks, which include the oldest rocks in the Smokies, comprise the dominant rock type in the Raven Fork Valley (in the Oconaluftee valley) and lower Tuckasegee River between Cherokee and Bryson City. They consist primarily of metamorphic gneiss, granite, and schist. Cambrian sedimentary rocks are found among the outer reaches of the Foothills to the northwest and in limestone coves such as Cades Cove.
The Precambrian gneiss and schists—the oldest rocks in the Smokies—formed over a billion years ago from the accumulation of marine sediments and igneous rock in a primordial ocean. In the Late Precambrian period, this ocean expanded, and the more recent Ocoee Supergroup rocks formed from accumulations of the eroding land mass onto the ocean's continental shelf.
By the end of the Paleozoic era, the ancient ocean had deposited a thick layer of marine sediments, which left behind sedimentary rocks such as limestone. Toward the close of the Paleozoic, the North American and African plates collided, destroying the ancient ocean and initiating the Alleghenian orogeny—the mountain-building epoch that created the Appalachian range. The Mesozoic era saw the rapid erosion of the softer sedimentary rocks from the new mountains, re-exposing the older Ocoee Supergroup formations.
Around 20,000 years ago, subarctic glaciers advanced southward across North America, and although they never reached the Smokies, the advancing glaciers led to colder mean annual temperatures and an increase in precipitation throughout the range. Trees were unable to survive at the higher elevations and were replaced by tundra vegetation. Spruce-fir forests occupied the valleys and slopes below approximately . The persistent freezing and thawing during this period created the large blockfields that are often found at the base of large mountain slopes.
Between 16,500 and 12,500 years ago, the glaciers to the north retreated and mean annual temperatures rose. The tundra vegetation disappeared, and the spruce-fir forests retreated to the highest elevations. Hardwood trees moved into the region from the coastal plains, replacing the spruce-fir forests in the lower elevations. The temperatures continued warming until around 6,000 years ago, when they began to gradually grow cooler.
Flora
Heavy logging in the late 19th century and early 20th century devastated much of the forests of the Smokies, but the National Park Service estimates of old-growth forest remains, comprising the largest old-growth stand in the Eastern United States. Most of the forest is a mature second-growth hardwood forest. The range's 1,600 species of flowering plants include over 100 species of native trees and 100 species of native shrubs. The Smokies are also home to over 450 species of non-vascular plants and 2,000 species of fungi.
The forests of the Smokies are typically divided into three zones:
The cove hardwood forests in the stream valleys, coves, and lower mountain slopes
The northern hardwood forests on the higher mountain slopes
The spruce-fir or boreal forest at the very highest elevations
Appalachian balds—patches of land where trees are unexpectedly absent or sparse—are interspersed through the mid-to-upper elevations in the range. Balds include grassy balds, which are highland meadows covered primarily by thick grasses, and heath balds, which are dense thickets of rhododendron and mountain laurel typically occurring on narrow ridges. Mixed oak-pine forests are found on dry ridges, especially on the south-facing North Carolina side of the range. Stands dominated by the Eastern hemlock (Tsuga canadensis) are occasionally found along streams and broad slopes above .
Cove hardwood forest
Cove hardwood forests, which are native to southern Appalachia, are among the most diverse forest types in North America. The cove hardwood forests of the Smokies are mostly second-growth, although some are still old-growth. The Albright Grove along the Maddron Bald Trail (between Gatlinburg and Cosby) is an accessible old-growth forest with some of the oldest and tallest trees in the entire range.
Over 130 species of trees are found among the canopies of the cove hardwood forests in the Smokies. The dominant species include yellow birch (Betula alleghaniensis), basswood (Tilia americana), yellow buckeye (Aesculus flava), tulip tree (Liriodendron tulipifera; commonly called "tulip poplar"), silverbells (Halesia carolina), sugar maple (Acer saccharum), cucumber magnolia (Magnolia acuminata), shagbark hickory (Carya ovata), Carolina hemlock (Tsuga caroliniana) and eastern hemlock (Tsuga canadensis). The American chestnut (Castanea dentata), which was arguably the most beloved tree of the range's pre-park inhabitants, was killed off by the introduced Chestnut blight in the 1920s.
The understories of the cove hardwood forest contain dozens of species of shrubs and vines. Dominant species in the Smokies include the Eastern redbud (Cercis canadensis), flowering dogwood (Cornus florida), Catawba rhododendron (Rhododendron catawbiense), mountain laurel (Kalmia latifolia), and smooth hydrangea (Hydrangea arborescens).
Northern hardwood forest
The mean annual temperatures in the higher elevations in the Smokies are cool enough to support forest types more commonly found in the northern United States. The northern hardwood forests constitute the highest broad-leaved forest in the eastern United States. About are old-growth.
In the Smokies, the northern hardwood canopies are dominated by yellow birch (Betula alleghaniensis) and American beech (Fagus grandifolia). White basswood (Tilia heterophylla), mountain maple (Acer spicatum), striped maple (Acer pensylvanicum), and yellow buckeye (Aesculus flava) are also present. The understory is home to diverse species such as coneflower, skunk goldenrod, Rugel's ragwort, bloodroot, hydrangea, and several species of grasses and ferns.
A unique community is the beech gap, or beech orchard. Beech gaps consist of high mountain gaps that have been monopolized by beech trees. The beech trees are often twisted and contorted by the high winds that occur in these gaps. Why other tree types such as the red spruce fail to encroach into the beech gaps is unknown.
Spruce-fir forest
The Southern Appalachian spruce–fir forest—also called the "boreal" or "Canadian" forest—is a relict of the ice ages, when mean annual temperatures in the Smokies were too cold to support a hardwood forest. While the rise in temperatures between 12,500 and 6,000 years ago allowed the hardwoods to return, the spruce-fir forest has managed to survive on the harsh mountain tops, typically above . About of the spruce-fir forest are old-growth.
The spruce-fir forest consists primarily of two conifer species—red spruce (Picea rubens) and Fraser fir (Abies fraseri). The Fraser firs, which are native to southern Appalachia, once dominated elevations above in the Smokies. Most of these firs were killed, however, by an infestation of the balsam woolly adelgid, which arrived in the Smokies in the early 1960s. Thus, red spruce is now the dominant species in the range's spruce-fir forest. Large stands of dead Fraser firs remain atop Kuwohi and on the northwestern slopes of Old Black. While much of the red spruce stands were logged in the 1910s, the tree is still common throughout the range above . Some of the red spruces are believed to be 300 years old, and the tallest rise to over .
The main difference between the Southern Appalachian spruce–fir forest and the spruce-fir forests in northern latitudes is the dense broad-leaved understory of the former, which are home to catawba rhododendron, mountain ash, pin cherry, thornless blackberry, and hobblebush. The herbaceous and litter layers are poorly lit year-round and are thus dominated by shade-tolerant plants such as ferns, namely mountain wood fern and northern lady fern, and over 280 species of mosses.
Wildflowers
Many wildflowers grow in the mountains and valleys, including bee balm, Solomon's seal, Dutchman's breeches, various trilliums, and even hardy orchids. There are two native species of rhododendron in the area. The catawba rhododendron has purple flowers in May and June, while the rosebay rhododendron has longer leaves and white or light pink blooms in June and July.
The orange- to sometimes red-flowered and deciduous flame azalea closely follows along with the catawbas. The closely related mountain laurel blooms in between the two, and all of the blooms progress from lower to higher elevations. The reverse is true in autumn, when nearly bare mountaintops covered in rime ice (frozen fog) can be separated from green valleys by very bright and varied leaf colors. The rhododendrons are broadleafs, whose leaves droop in order to shed wet and heavy snows that come through the region during winter.
Fauna
The Great Smoky Mountains are home to 66 species of mammals, over 240 species of birds, 43 species of amphibians, 60 species of fish, and 40 species of reptiles. The range has the densest black bear population east of the Mississippi River. The black bear has come to symbolize wildlife in the Smokies, and the animal frequently appears on the covers of the Great Smoky Mountains National Park's literature. Most of the range's adult eastern black bears weigh between and , although some grow to more than .
Other mammals include the white-tailed deer, the population of which drastically expanded with the creation of the national park. The bobcat is the only remaining wild cat species, although sightings of cougars, which once thrived in the area, are still occasionally reported. The coyote is not believed to be native to the range but has moved into the area in recent years and is treated as a native species. Two species of fox (red fox and the gray fox) are found within the Smokies, with red foxes being documented at all elevations.
European boar, introduced as game animals in the early 20th century, thrive in southern Appalachia but are considered a nuisance because of their tendency to root up and destroy plants. The boars are seen as taking food resources away from bears as well, and the park service has sponsored a program that pays individuals to hunt and kill boars and leave their bodies in locations frequented by bears.
The Smokies are home to over two dozen species of rodents, including the endangered northern flying squirrel and 10 species of bats, including the endangered Indiana bat. The National Park Service has successfully reintroduced river otters and elk. An attempt to reintroduce the red wolf in the early 1990s ultimately failed. These wolves were removed from the park and relocated to the Alligator River National Wildlife Refuge in North Carolina.
The Smokies are home to a diverse bird population due to the presence of multiple forest types. Species that thrive in southern hardwood forests, such as the red-eyed vireo, wood thrush, wild turkey, northern parula, ruby-throated hummingbird, and tufted titmouse, are found throughout the lower elevations and cove hardwood forests. Species more typical of cooler climates, such as the raven, winter wren, black-capped chickadee, yellow-bellied sapsucker, dark-eyed junco, and Blackburnian, chestnut-sided, and Canada warblers, are found in the spruce-fir and northern hardwood zones.
Ovenbirds, whip-poor-wills, and downy woodpeckers live in the drier pine-oak forests and heath balds. Bald eagles and golden eagles have been spotted at all elevations in the park. Peregrine falcon sightings are also not uncommon, and a peregrine falcon eyrie is known to have existed near Alum Cave Bluffs throughout the 1930s. Red-tailed hawks, the most common hawk species, have been sighted at all elevations. Owl species include the barred owl, eastern screech owl, and northern saw-whet owl.
Timber rattlesnakes—one of two venomous snake species in the Smokies—are found at all elevations. The other venomous snake, the copperhead, is typically found at lower elevations. Other reptiles include the eastern box turtle, the eastern fence lizard, the black rat snake, and the northern water snake.
The Smokies are home to one of the world's most diverse salamander populations. Five of the world's nine families of salamanders are found in the range, consisting of up to 31 species. The red-cheeked salamander is found only in the Smokies. The imitator salamander is found only in the Smokies and the nearby Plott Balsams and Great Balsam Mountains. Two other species—the southern gray-cheeked salamander and the southern Appalachian salamander—occur only in the general region. Other species include the shovelnose salamander, blackbelly salamander, eastern red-spotted newt, and spotted dusky salamander. The hellbender inhabits swift streams. Other amphibians include the American toad and the American bullfrog, wood frog, upland chorus frog, northern green frog, and spring peeper.
Fish include trout, lamprey, darter, shiner, bass, and sucker. The brook trout is the only trout species native to the range, although northwestern rainbow trout and European brown trout were introduced in the first half of the 20th century. The larger rainbow and brown trout outcompete the native brook trout for food and habitat at lower elevations. As such, most of the brook trout found in the park today are in streams above 3,000 feet in elevation. Trout here generally run smaller than in other locales. Protected fish species include the smoky madtom and yellowfin madtom, the spotfin chub, and the duskytail darter.
The firefly Photinus carolinus, whose synchronized flashing light displays occur in mid-June, is native to the Smoky Mountains with a population epicenter near Elkmont, Tennessee.
Ecosystem threats
Air pollution is contributing to increased red spruce tree mortality at higher elevations and oak decline at lower elevations, while invasive hemlock woolly adelgids attack hemlocks, and balsam woolly adelgids attack Fraser firs. Pseudoscymnus tsugae, a type of beetle in the ladybug family, Coccinellidae, has been introduced in an attempt to control the pests.
Visibility is dramatically reduced by smog from both the Southeastern United States and the Midwest, and smog forecasts are prepared daily by the Environmental Protection Agency for both nearby Knoxville, Tennessee and Asheville, North Carolina.
Environmental threats are the concern of many non-profit environmental stewardship groups, especially The Friends of the Smokies. Formed in 1993, the group assists the National Park Service in its mission to preserve and protect the Great Smoky Mountains National Park by raising funds and public awareness, and providing volunteers for needed projects.
History
Prehistory
Native Americans have likely been hunting in the Great Smoky Mountains for 14,000 years. Numerous Archaic period (–1000 B.C.) artifacts have been found within the national park's boundaries, including projectile points uncovered along likely animal migration paths. Woodland period (–1000 A.D.) sites found within the park contained 2,000-year-old ceramics and evidence of primitive agriculture.
The increasing reliance upon agriculture during the Mississippian period (–1600 A.D.) lured Native Americans away from the game-rich forests of the Smokies and into the fertile river valleys on the outer fringe of the range. Substantial Mississippian-period villages were uncovered at Citico and Toqua (named after the Cherokee villages that later thrived at these sites) along the Little Tennessee River in the 1960s. Fortified Mississippian-period villages have been excavated at the McMahan Indian Mounds in Sevierville, as well as at mounds in Townsend.
Most of these villages were part of a minor chiefdom centered on a large village known as Chiaha, which was located on an island now submerged by Douglas Lake. The 1540 expedition of Hernando de Soto and the 1567 expedition of Juan Pardo passed through the French Broad River valley north of the Smokies, both spending a considerable amount of time at Chiaha. The Pardo expedition followed a trail across the flanks of Chilhowee Mountain to the Mississippian-period villages at Chilhowee and Citico (Pardo's notary called them by their Muskogean names, "Chalahume" and "Satapo").
Cherokee
By the time the first English explorers arrived in southern Appalachia in the late 17th century, the Cherokee controlled much of the region, and the Great Smoky Mountains lay at the center of their territory. One Cherokee legend tells of a magical lake hidden deep within the range but inaccessible to humans. Another tells of a captured Shawnee medicine man named Aganunitsi who, in exchange for his freedom, travels to the remote sections of the range in search of the Uktena. The Cherokee called Gregory Bald Tsitsuyi ᏥᏧᏱ, or "rabbit place," and believed the mountain to be the domain of the Great Rabbit. Other Cherokee place names in the Smokies included Duniskwalgunyi ᏚᏂᏍᏆᎫᏂ, or "forked antlers", which refers to the Chimney Tops, and kuwahi ᎫᏩᎯ, or "mulberry place".
Most Cherokee settlements were located in the river valleys on the outer fringe of the Smokies, which along with the Unicoi Mountains provided the main bulwark dividing the Overhill Cherokee villages in modern Tennessee from the Cherokee Middle towns in modern North Carolina. The Overhill town of Chilhowee was situated at the confluence of Abrams Creek and the Little Tennessee, and the Overhill town of Tallassee was located just a few miles upstream near modern Calderwood (both village sites are now under Chilhowee Lake). A string of Overhill villages, including Chota and Tanasi, dotted the Little Tennessee valley north of Chilhowee.
The Cherokee Middle towns included the village of Kittowa (which the Cherokee believed to be their oldest village) along the Tuckasegee River near Bryson City. The village of Oconaluftee, which was situated along the Oconaluftee River near the modern Oconaluftee Visitor Center, was the only known permanent Cherokee village located within the national park's boundaries. Sporadic or seasonal settlements were located in Cades Cove and the Hazel Creek valley.
European settlement
European explorers and settlers began arriving in Western North Carolina and East Tennessee in the mid-18th century. The influx of settlers at the end of the French and Indian War brought conflict with the Cherokee, who still held legal title to much of the land. When the Cherokee aligned themselves with the British at the outbreak of the American Revolution in 1776, American forces launched an invasion of Cherokee territory.
The Middle towns, including Kittuwa, were burned by General Griffith Rutherford, and several of the Overhill towns were burned by John Sevier. By 1805, the Cherokee had ceded control of the Smokies to the U.S. government. Although much of the tribe was forced west along the Trail of Tears in 1838, a few—largely through the efforts of William Holland Thomas—managed to retain their land on the Qualla Boundary and today comprise the Eastern Band of Cherokee Indians.
In the 1780s, several frontier outposts had been established along the outskirts of the Smokies, namely Whitson's Fort in what is now Cosby and Wear's Fort in what is now Pigeon Forge. Permanent settlers began arriving in these areas in the 1790s. In 1801, the Whaley brothers, William and John, moved from North Carolina to become the first settlers in what is now the Greenbrier section of the park.
In 1802, Edgefield, South Carolina resident William Ogle arrived in White Oak Flats where he cut and prepared logs for cabin construction. Although Ogle died shortly after returning to Edgefield, his wife, Martha Jane Huskey, eventually returned with her family and several other families to White Oak Flats, becoming the first permanent settlers in what would eventually become Gatlinburg. Their children and grandchildren spread out southward into the Sugarlands and Roaring Fork areas.
Cades Cove was settled largely by families who had purchased lots from land speculator William "Fighting Billy" Tipton. The first of these settlers, John and Lucretia Oliver, arrived in 1818. Two Cades Cove settlers, Moses and Patience Proctor, crossed over to the North Carolina side of the Smokies in 1836 to become the first Euro-American settlers in the Hazel Creek area. The Cataloochee area was first settled by the Caldwell family, who migrated to the valley in 1834.
As in most of southern Appalachia, the early 19th-century economy of the Smokies relied on subsistence agriculture. The average farm consisted of roughly , part of which was cultivated and part of which was woodland. Early settlers lived in x log cabins, although these were replaced by more elaborate log houses and eventually, as lumber became available, by modern frame houses. Most farms included at least one barn, a springhouse (used for refrigeration), a smokehouse (used for curing meat), a chicken coop (which protected chickens from predators), and a corn crib (which kept corn dry and protected it from rodents). Some of the more industrious farmers operated gristmills, general stores, and sorghum presses. Religion was a central theme in the lives of the early residents of the Smokies, and community life was typically centered on churches. Christian Protestantism—especially Primitive Baptists, Missionary Baptists, Methodists, and Presbyterians—dominated the religious culture of the region.
American Civil War
While both Tennessee and North Carolina joined the Confederacy at the outbreak of the American Civil War in 1861, Union sentiment in the Great Smoky Mountains was much stronger relative to other regions in these two states. Generally, the communities on the Tennessee side of the Smokies supported the Union, while communities on the North Carolina side supported the Confederates. On the Tennessee side, 74% of Cocke Countians, 80% of Blount Countians, and 96% of Sevier Countians voted against secession. In the North Carolina Smokies—Cherokee, Haywood, Jackson, and Macon counties—about 46% of the population favored secession.
While no major engagements took place in the Smokies, minor skirmishes were fairly common. Cherokee chief William Holland Thomas formed a Confederate legion made up mostly of Cherokee soldiers. Thomas' Legion crossed the Smokies in 1862 and occupied Gatlinburg for several months to protect saltpeter mines atop Mount Le Conte. Residents of predominantly Union Cades Cove and predominantly Confederate Hazel Creek routinely crossed the mountains to steal one another's livestock. Residents of Cosby and Cataloochee did likewise. One notable Civil War incident was the murder of long-time Cades Cove resident Russell Gregory (for whom Gregory Bald is named), which was carried out by bushwhackers in 1864 shortly after Gregory had led an ambush that routed a band of Confederates seeking to wreak havoc in the cove. Another incident was Union colonel George Kirk's raid on Cataloochee, in which Kirk killed or wounded 15 Confederate soldiers recovering at a makeshift hospital.
Logging
While selective logging occurred in the Great Smoky Mountains throughout the 19th century, the general inaccessibility of the forests prevented major logging operations, and lumber firms relied on the lowland forests in the northeastern United States and the Mississippi Delta in the southeast. As timber resources in these regions became exhausted, and as the demand for lumber skyrocketed after the Civil War, entrepreneurs began looking for ways to reach the virgin forests of southern Appalachia. The first logging operations in the Smokies, which began in the 1880s, used splash dams or booms to float logs down rivers to lumber mills in nearby cities. Notable splash dam and boom operations included the English Lumber Company on Little River, the Taylor and Crate operations along Hazel Creek, and the ambitious operations of Alexander Arthur on the Pigeon River. All three of these operations failed within their first few years, however, after their dams and boom systems were destroyed by floods.
Innovations in logging railroads and band saw technology in the late 19th century made large-scale logging possible in the mountainous areas of southern Appalachia. The largest logging operation in the Smokies was the Little River Lumber Company, which logged the Little River watershed between 1901 and 1939. The company also established company towns at Townsend (named for the company's chief owner and manager, Wilson B. Townsend), Elkmont, and Tremont.
The second-largest operation was the Ritter Lumber Company, which logged the Hazel Creek watershed between 1907 and 1928. Ruins of Ritter's lumbering operations are still visible along the Hazel Creek Trail. Other lumbering operations included Three M Lumber and Champion Fibre, both of which logged the Oconaluftee watershed. By the time all operations ceased in the 1930s, logging firms had removed two-thirds of the virgin forests from the Smokies. According to the National Park Service, 80% of the Smokies was clear cut in the early 20th century.
National park
Wilson B. Townsend, the head of Little River Lumber, began advertising Elkmont as a tourist destination in 1909. Within a few years, the Wonderland Hotel and the Appalachian Club had been established to cater to elite Knoxvillians seeking summer mountain getaways. In the early 1920s, several Appalachian Club members, among them Knoxville businessman Colonel David Chapman, began seriously considering a movement to establish a national park in the Smokies. As head of the Great Smoky Mountains Park Commission, Chapman was largely responsible for raising funds for land purchases and coordinating park efforts between local, state, and federal entities.
The creation of the Great Smoky Mountains National Park proved much more complex than the creation of its predecessors, such as Yellowstone and Yosemite, which were already federally owned. Along with convincing logging firms to sell lucrative lumber rights, the Park Commission had to negotiate the purchase of thousands of small farms and remove entire communities. The commission also had to deal with the Tennessee and North Carolina legislatures, which at times were opposed to spending taxpayer money on park efforts. In spite of these difficulties, the Park Commission had completed most major land purchases by 1932. The national park was officially established in 1934, and President Franklin D. Roosevelt presided over its dedication ceremony at Newfound Gap in 1940.
Culture and tourism
The culture of the area is that of southern Appalachia, and previously the Cherokee people. Tourism is key to the area's economy, particularly in cities like Pigeon Forge, Gatlinburg, and Cherokee. In 2006, the Great Smoky Mountains Heritage Center opened in Townsend with the mission of preserving various aspects of the region's culture. The Tennessee Smokies baseball team plays in Sevierville and is named for the mountain range.
The Great Smoky Mountains are home to The Gatlinburg SkyLift and SkyBridge.
The SkyBridge is the longest skybridge in the United States, at about 680 feet long.
See also
List of subranges of the Appalachian Mountains
Footnotes
References
External links
Species Mapper
Great Smoky Mountains Association—official nonprofit partner of the park, maps, guides, photos, and videos
National Weather Service Southern Appalachian Precipitation study (NWS Morristown, TN)
Great Smoky Mountains Institute at Tremont
Cornell University study on invasive balsam woolly adelgid control
Geologic Map of the Great Smoky Mountains National Park Region, Tennessee and North Carolina United States Geological Survey
History and maps
The Great Smoky Mountains Regional Project—a collection of documents and early photographs regarding the Great Smokies and surrounding communities
Southern Appalachian English: Transcripts—sound file samples of interviews of long-time residents of the Great Smokies conducted in 1939
Smokies Road Trip, circa 1938
Appalachian culture
East Tennessee
Geography of Appalachia
Landforms of Blount County, Tennessee
Landforms of Cocke County, Tennessee
Landforms of Haywood County, North Carolina
Landforms of Jackson County, North Carolina
Landforms of Sevier County, Tennessee
Landforms of Swain County, North Carolina
Mountain ranges of North Carolina
Mountain ranges of Tennessee
Old-growth forests
Southern Sixers
Subranges of the Appalachian Mountains
Western North Carolina | Great Smoky Mountains | Biology | 7,274 |
50,961,871 | https://en.wikipedia.org/wiki/Candida%20humilis | Kazachstania humilis (prev. Candida humilis) is a species of yeast in the genus Kazachstania. It commonly occurs in sourdough and kefir cultures, along with different species of lactic acid bacteria (e.g., Limosilactobacillus fermentum, Companilactobacillus paralimentarius, Lactiplantibacillus plantarum, and Fructilactobacillus sanfranciscensis). K. humilis is the most representative yeast species found in type I sourdough ecosystems. Electric field strength, pulse width, frequency, and pulse shape have a statistically significant, though not very pronounced, effect on the membranes of Candida humilis.
K. humilis was separated from C. milleri in The Yeasts (fifth edition) in September 2016, although this is not universally accepted and they are still considered synonymous.
References
Fungi described in 1968
Yeasts
humilis
Fungus species | Candida humilis | Biology | 209 |
52,505,750 | https://en.wikipedia.org/wiki/Negative%20testing | Negative testing is a method of testing an application or system to improve the likelihood that the application works as intended or specified and can handle unexpected input and user behavior. Invalid data is supplied as input and the output is compared against the expected error handling. Negative testing is also known as failure testing or error path testing. When performing negative testing, exceptions are expected; this shows that the application is able to handle improper user behavior. Users input values that do not work in the system to test its ability to handle incorrect values or system failure.
Purpose
The purpose of negative testing is to prevent the application from crashing and to improve its quality by detecting defects.
Negative testing improves the test coverage of the application.
Negative testing makes the application more stable and reliable.
Negative testing together with positive testing allows the application to be tested with any valid or invalid input data.
Benefits of negative testing
Negative testing is done to check that the product deals properly with circumstances for which it is not programmed. The fundamental aim of this testing is to check how bad data is handled by the system, and whether appropriate errors are shown to the client when bad data is entered. Both positive and negative testing play an important role. Positive testing ensures that the application does what it is meant for and performs each function as expected. Negative testing is the opposite of positive testing: it discovers diverse approaches to make the application fail and checks that failures are handled gracefully.
Example
If there is a text box that can only take numeric values but the user tries to type a letter, the correct behavior would be to display a message such as "(Incorrect data) Please enter a number".
If the user is required to fill in the name field, with the ground rules that the name is mandatory and that the name box must not contain values other than letters (no numeric values or special characters), then negative test cases could use a name containing numeric values or special characters. The correct behavior of the system would be to reject those invalid values, as in the sketch below.
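As a concrete illustration, here is a minimal sketch of such negative test cases in Python using pytest. The validate_name function and its rules are hypothetical stand-ins for the system under test, not part of any real framework:

    import pytest

    def validate_name(name: str) -> str:
        """Hypothetical validator: the name is mandatory and letters-only."""
        if not name:
            raise ValueError("Name is mandatory")
        if not all(ch.isalpha() or ch.isspace() for ch in name):
            raise ValueError("Name must contain letters only")
        return name

    # Positive test: valid input is accepted unchanged.
    def test_valid_name_accepted():
        assert validate_name("Ada Lovelace") == "Ada Lovelace"

    # Negative tests: invalid inputs must be rejected with a clear error,
    # not crash the application or pass through silently.
    @pytest.mark.parametrize("bad_input", ["", "R2D2", "Jane_Doe!", "42"])
    def test_invalid_name_rejected(bad_input):
        with pytest.raises(ValueError):
            validate_name(bad_input)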
Parameters for writing negative test cases
There are two basic techniques that help in writing sufficient test cases to cover most of the functionality of the system. Both techniques are used in positive testing as well. The two parameters are:
Boundary-value analysis
A boundary is a limit on a value. With this parameter, test scenarios are designed so that they cover the boundary values and validate how the application behaves at those values.
Example
Suppose an application accepts IDs ranging from 0 to 255. In this scenario, 0 and 255 form the boundary values. Values within the range 0–255 constitute positive testing, while any input below 0 or above 255 is considered invalid and constitutes negative testing.
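The scenario above maps directly onto executable checks. A minimal sketch in Python with pytest, where accept_id is a hypothetical stand-in for the system under test:

    import pytest

    def accept_id(value: int) -> int:
        """Hypothetical system under test: accepts IDs in the range 0-255."""
        if not 0 <= value <= 255:
            raise ValueError("ID out of range")
        return value

    # Positive boundary tests: the exact limits 0 and 255 must be accepted.
    @pytest.mark.parametrize("valid", [0, 1, 254, 255])
    def test_ids_on_and_inside_boundary(valid):
        assert accept_id(valid) == valid

    # Negative boundary tests: values just outside the limits must be rejected.
    @pytest.mark.parametrize("invalid", [-1, 256])
    def test_ids_just_outside_boundary(invalid):
        with pytest.raises(ValueError):
            accept_id(invalid)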
Equivalence Partitioning
The input data may be divided into many partitions, and values from each partition must be tested at least once. Partitions with valid values are used for positive testing, while partitions with invalid values are used for negative testing.
Example
Numeric values from minus ten to ten can be divided into two partitions: minus ten to zero and one to ten. If the system under test accepts only positive numeric values, the second partition (one to ten) is used for positive testing, while the first partition (minus ten to zero) is used for negative testing.
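A minimal sketch of partition-based tests under the same assumptions, with a hypothetical accept_positive function that only accepts values from one to ten:

    import pytest

    def accept_positive(value: int) -> int:
        """Hypothetical system under test: accepts positive numbers 1-10."""
        if not 1 <= value <= 10:
            raise ValueError("Value must be positive and at most 10")
        return value

    # One representative from the valid partition (1 to 10) suffices
    # for positive testing of the whole partition.
    def test_valid_partition_representative():
        assert accept_positive(5) == 5

    # One representative from the invalid partition (-10 to 0) suffices
    # for negative testing of the whole partition.
    def test_invalid_partition_representative():
        with pytest.raises(ValueError):
            accept_positive(-3)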
References
Data analysis
Software testing | Negative testing | Engineering | 666 |
55,273,827 | https://en.wikipedia.org/wiki/NGC%206753 | NGC 6753 is a massive unbarred spiral galaxy, seen almost exactly face-on, in the southern constellation of Pavo. It was discovered by the English astronomer John Herschel on July 5, 1836. The galaxy is located at a distance of 142 million light years from the Milky Way, and is receding with a heliocentric radial velocity of . It does not display any indications of a recent interaction with another galaxy or cluster.
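As a rough, illustrative consistency check (not a figure from this article), the quoted distance can be related to an expected recession velocity through Hubble's law, v ≈ H0·d. The Hubble constant used below is an assumed round value:

    # Hubble's-law estimate: v = H0 * d.
    # H0 = 70 km/s/Mpc is an assumed round figure, not a value cited here.
    H0 = 70.0                 # km/s per megaparsec
    LY_PER_MPC = 3.262e6      # light-years in one megaparsec

    distance_ly = 142e6       # distance to NGC 6753 in light-years
    distance_mpc = distance_ly / LY_PER_MPC   # about 43.5 Mpc

    velocity_km_s = H0 * distance_mpc
    print(f"Expected recession velocity: ~{velocity_km_s:.0f} km/s")  # roughly 3000 km/s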
The morphological class of NGC 6753 is (R)SA(r)b, indicating it is a spiral without an inner bar feature (SA), displaying outer (R) and inner (r) ring structures, and moderately wound spiral arms. It is being viewed nearly face-on, with its galactic plane inclined by 30° to the line of sight from the Earth. The galaxy is flocculent in appearance with a prominent central region. The virial mass of the galaxy is , while the stellar mass is . It has a star formation rate of ·yr−1, which is confined to a radius of around the core. The most active region of star formation is the inner ring. It has a hot, X-ray luminous corona that extends out to a radius of .
Supernovae
Three supernovae have been discovered in NGC 6753:
SN 2000cj was discovered by Robert Evans on May 14, 2000. It was positioned against a spiral arm at an offset east and south of the galaxy nucleus. The spectrum showed this to be a type Ia supernova.
On May 13, 2005, the type Ic supernova SN 2005cb was spotted by the Brazilian Supernovae Search team. It was offset west and north of the nucleus and reached a peak magnitude of 15.6.
The type II-P supernova SN 2019mhm was discovered by the BOSS team on August 2, 2019. This transient was spotted close to maximum with a magnitude of 16.6, but showed no radio emission.
See also
List of NGC objects (6001–7000)
References
External links
6753
Unbarred spiral galaxies
Pavo (constellation) | NGC 6753 | Astronomy | 428 |
1,265,819 | https://en.wikipedia.org/wiki/American%20Megatrends | American Megatrends International, LLC, doing business as AMI, is an international hardware and software company, specializing in PC hardware and firmware. The company was founded in 1985 by Pat Sarma and Subramonian Shankar. It is headquartered in Building 800 at 3095 Satellite Boulevard in unincorporated Gwinnett County, Georgia, United States, near the city of Duluth, and in the Atlanta metropolitan area.
The company started as a manufacturer of complete motherboards, positioning itself in the high-end segment. Its first customer was PCs Limited, later known as Dell Computer.
As hardware activity moved progressively to Taiwan-based original design manufacturers, AMI continued to develop BIOS firmware for major motherboard manufacturers. The company produced BIOS software for motherboards (1986), server motherboards (1992), storage controllers (1995) and remote-management cards (1998).
In 1993, AMI produced MegaRAID, a storage controller card. AMI sold its RAID assets to LSI Corporation in 2001, with only one employee from the RAID-division remaining with the AMI core team.
AMI continued to focus on OEM and ODM business and technology. Its product line includes or has previously included AMIBIOS (a BIOS), Aptio (a successor to AMIBIOS8 based on the UEFI standard), diagnostic software, AMI EC (embedded controller firmware), MG-Series SGPIO backplane controllers (for SATA, SAS and NVMe storage devices), driver/firmware development, and MegaRAC (BMC firmware).
Founding
American Megatrends Inc. (AMI) was founded in 1985 by Subramonian Shankar and Pat Sarma with funds from a previous consulting venture, Access Methods Inc. (also AMI). Access Methods was a company run by Pat Sarma and his partner. After Access Methods successfully launched the AMIBIOS, there were legal issues among the owners of the company, resulting in Sarma buying out his partners. Access Methods still owned the rights to the AMIBIOS. Sarma had already started a company called Quintessential Consultants Inc. (QCI), and later set up an equal partnership with Shankar.
By this time the AMIBIOS had become established and there was a need to keep the initials AMI. The partners renamed QCI as American Megatrends Inc., with the same initials as Access Methods Inc.; the renamed company then purchased AMIBIOS from Access Methods. Shankar became the president and Sarma the executive vice-president of this company. This partnership continued until 2001, when LSI Logic purchased the RAID Division of American Megatrends; American Megatrends then purchased all shares of the company owned by Sarma, making Shankar the majority owner.
Products
AMIDiag
AMIDiag is a family of PC diagnostic utilities sold to OEMs only. The AMIDiag Suite was introduced in 1991 and made available for MS-DOS, Microsoft Windows and Unified Extensible Firmware Interface (UEFI) platforms. It includes both the Windows and DOS PC diagnostics programs. Later versions of AMIDiag support UEFI, which allows diagnostics to be performed directly on the hardware components, without having to use operating system drivers or facilities.
Aptio
AMI's UEFI firmware solutions. Aptio V is AMI's current main UEFI firmware product. Aptio CommunityEdition is an open source UEFI firmware product. Aptio 4 is a now-discontinued previous version that has been succeeded by Aptio V.
MegaRAC
MegaRAC is a product line of baseboard management controller firmware packages, and formerly of service processors, providing out-of-band, or lights-out, remote management of computer systems. These baseboard management controllers running MegaRAC firmware packages, or service processors, operate independently of the operating system's status or location to manage and troubleshoot computers.
Former products
AMIBIOS
AMIBIOS (also written as AMI BIOS) is the IBM PC-compatible BIOS that was developed and sold by American Megatrends beginning in 1986. In 1994, the company claimed that 75% of PC clones used AMIBIOS. It is used on motherboards made by AMI and by other companies.
American Megatrends had a strict OEM business model for AMIBIOS: it sold source code to motherboard manufacturers or customized AMIBIOS for each OEM individually, whichever business model the customer required. AMI does not sell to end users, and itself produces no end-user documentation or technical support for its BIOS firmware, leaving that to licensees. However, the company published two books on its BIOS in 1993 and 1994, written by its engineers.
During powerup, the BIOS firmware displays an ID string in the lower-left-hand corner of the screen. This ID string comprises various pieces of information about the firmware, including when it was compiled, what configuration options were selected, the OEM license code, and the targeted chipset and motherboard. There are three ID string formats: the first for older AMIBIOS, and the second and third for the newer AMI Hi-Flex ("high flexibility") BIOS. The latter two are displayed when the Insert key is pressed during power-on self-test.
The original AMI BIOS did not encrypt the machine startup password, which it stored in non-volatile RAM. Therefore, any utility capable of reading a PC's NVRAM was able to read and to alter the password. The AMI WinBIOS encrypts the stored password, using a simple substitution cipher.
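For illustration only, a substitution cipher of the kind described maps each character of the password through a fixed lookup table; the Python sketch below uses an invented table and is not AMI's actual scheme. Because the mapping is fixed, anyone who recovers the table can reverse it, which is why such protection deters only casual inspection:

    # Illustrative substitution cipher. The table is invented for
    # demonstration and bears no relation to the real WinBIOS scheme.
    PLAIN  = "abcdefghijklmnopqrstuvwxyz"
    CIPHER = "qwertyuiopasdfghjklzxcvbnm"

    ENCODE = str.maketrans(PLAIN, CIPHER)
    DECODE = str.maketrans(CIPHER, PLAIN)

    password = "secret"
    stored = password.translate(ENCODE)   # what would be written to NVRAM
    print(stored)                         # -> "ltektz"
    print(stored.translate(DECODE))       # -> "secret"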
By pressing the Delete key during power-on self-test when a prompt is displayed, the BIOS setup utility program is invoked. Some earlier AMIBIOS versions also included a cut-down version of the AMIDIAG utility that AMI also sold separately, but most later AMI BIOSes do not include this program as the BIOS DMI already incorporates detailed diagnostics.
AMIBIOS was formerly sold through distributors, not directly available from the manufacturer or from eSupport.
AMI supplies both DOS and Windows firmware upgrade utilities for its own motherboards. eSupport only supplies a Windows upgrade utility.
StorTrends/ManageTrends
The StorTrends family of network-based backup and storage management software and hardware includes several NAS and iSCSI-based SAN servers with 4, 12, or 16 drive bays.
AMI couples off-the-shelf hardware with the StorTrends iTX storage management firmware platform. StorTrends offers synchronous, asynchronous and snap-assisted replication, thin provisioning, high-availability grouping and advanced caching.
Reliability and performance are key for any storage server. StorTrends iTX 2.8 is designed to support the Storage Bridge Bay specification, which provides auto-failover capability to ensure that any interruption is handled without affecting data. It supports high-availability clustering, redundancy, scalability, replication, disaster recovery and multiple-site backups.
DuOS-M
DuOS-M was commercial software developed by American Megatrends for Intel x86-based computers using the Microsoft Windows operating system to provide a "dual operating system" environment in which the user can simultaneously deploy the Android operating system in tandem with Microsoft Windows.
Because DuOS-M has the capability to run both Windows and Android simultaneously, the user can switch between the two operating systems without having to dual boot or suspend operation of one operating system in order to utilize the other.
DuOS-M supports key hardware peripherals in Windows including cameras, audio, microphone and sensors such as ambient light sensor, accelerometer, gyrometer, compass and orientation sensors. It also supports various screen sizes, resolutions, and screen orientation (portrait and landscape) along with 3D acceleration and HD video playback.
The first version of DuOS-M was released in June 2014. The software is available for download for a free 30-day trial, and is available for purchase for a complete license.
On March 7, 2018, American Megatrends officially announced that it had ceased development of DuOS-M. No further updates, including bug fixes and security patches, would be released.
Technical problems
On November 13, 1993, some PCs with AMIBIOS firmware began bootup playing the tune to "Happy Birthday". The PC would remain halted, and the song would continue playing until a key was pressed, after which bootup would resume. The problem was caused by a Trojan embedded in the firmware rather than a self-replicating virus, and was later resolved with firmware updates.
The AMI WinBIOS was a 1994 update to AMIBIOS, with a graphical user interface setup screen that mimicked the appearance of Windows 3.1 and supported mouse navigation, unusual at the time. WinBIOS was viewed favorably by Anand Lal Shimpi at AnandTech, but described by Thomas Pabst at Tom's Hardware as a "big disappointment", in part because of problems with distributing IRQ signals to every PCI and ISA expansion slot.
In July 2008 Linux developers discovered issues with ACPI tables on certain AMIBIOS BIOSes supplied by Foxconn, ASUS, and MSI. The problem was related to the ACPI _OSI method, which is used by ACPI to determine the OS version (in case an ACPI patch only applies to one specific OS). In some cases, the _OSI method caused problems on Linux systems, skipping code that was only executed on Windows systems. Foxconn and AMI worked together to develop a solution, which was included in later revisions of AMIBIOS. The issue affected motherboards with Intel Socket 775. Actual system behavior differed based on BIOS version, system hardware and Linux distribution.
In October 2021 an issue was discovered where some Baseboard Management Controllers were shipped with a license/royalty sticker that had the company name misspelled as "American Megatrands".
Worldwide offices
United States
Headquarters: Duluth, Georgia
Field offices: San Jose, California; Austin, Texas
Beijing, People's Republic of China
Kunshan, Jiangsu, People's Republic of China
Shenzhen, Guangdong, People's Republic of China
Taipei, Taiwan
Munich, Germany
Chennai, Tamil Nadu, India
Chiyoda, Tokyo, Japan
Seoul, South Korea
Formerly had an office in DuPont, Washington, United States
See also
BIOS features comparison
Insyde Software
Phoenix Technologies
Award Software, now part of Phoenix
List of companies of Taiwan
References
Further reading
External links
American Megatrends
Introduction to AMIBIOS8: Overview of key features in the latest AMIBIOS
Companies based in Gwinnett County, Georgia
Software companies based in Georgia (U.S. state)
BIOS
Software companies established in 1985
Software companies of the United States
1985 establishments in Georgia (U.S. state)
Computer hardware companies
Computer companies of the United States | American Megatrends | Technology | 2,217 |
13,568,942 | https://en.wikipedia.org/wiki/Prometon | Prometon is a herbicide used for the control of annual and perennial broadleaf weeds, brush, and grasses, mainly in non-crop situations.
References
Prometon Risk Assessments; Notice of Availability, and Risk Reduction Options. Federal Register, Vol. 72, No. 215 (November 7, 2007).
Scorecard chemical profile: Prometon, CAS No. 1610-18-0.
External links
Herbicides
Triazines
Isopropylamino compounds | Prometon | Biology | 96 |
48,489 | https://en.wikipedia.org/wiki/Magic%20%28supernatural%29 | Magic, sometimes spelled magick, is the application of beliefs, rituals or actions employed in the belief that they can manipulate natural or supernatural beings and forces. It is a category into which have been placed various beliefs and practices sometimes considered separate from both religion and science.
Connotations have varied from positive to negative at times throughout history. Within Western culture, magic has been linked to ideas of the Other, foreignness, and primitivism; indicating that it is "a powerful marker of cultural difference" and likewise, a non-modern phenomenon. During the late nineteenth and early twentieth centuries, Western intellectuals perceived the practice of magic to be a sign of a primitive mentality and also commonly attributed it to marginalised groups of people.
Aleister Crowley (1875–1947), a British occultist, defined "magick" as "the Science and Art of causing Change to occur in conformity with Will", adding a 'k' to distinguish ceremonial or ritual magic from stage magic. In modern occultism and neopagan religions, many self-described magicians and witches regularly practice ritual magic. This view has been incorporated into chaos magic and the new religious movements of Thelema and Wicca.
Etymology
The English words magic, mage and magician come from the Latin term magus, through the Greek μάγος, which is from the Old Persian maguš (𐎶𐎦𐎢𐏁, magician). The Old Persian magu- is derived from the Proto-Indo-European *magh- (to be able). The Persian term may have led to the Old Sinitic *Mγag (mage or shaman). The Old Persian form seems to have permeated ancient Semitic languages as the Talmudic Hebrew magosh, the Aramaic amgusha (magician), and the Chaldean maghdim (wisdom and philosophy); from the first century BCE onwards, Syrian magusai gained notoriety as magicians and soothsayers.
During the late-sixth and early-fifth centuries BCE, the term goetia found its way into ancient Greek, where it was used with negative connotations to apply to rites that were regarded as fraudulent, unconventional, and dangerous; in particular, rites dedicated to the evocation and invocation of daimons (lesser divinities or spirits) to control them and acquire powers. This concept remained pervasive throughout the Hellenistic period, when Hellenistic authors categorised a diverse range of practices—such as enchantment, witchcraft, incantations, divination, necromancy, and astrology—under the label "magic".
The Latin language adopted this meaning of the term in the first century BCE. Via Latin, the concept became incorporated into Christian theology during the first century CE. Early Christians associated magic with demons, and thus regarded it as against Christian religion. In early modern Europe, Protestants often claimed that Roman Catholicism was magic rather than religion, and as Christian Europeans began colonizing other parts of the world in the sixteenth century, they labelled the non-Christian beliefs they encountered as magical. In that same period, Italian humanists reinterpreted the term in a positive sense to express the idea of natural magic. Both negative and positive understandings of the term recurred in Western culture over the following centuries.
Since the nineteenth century, academics in various disciplines have employed the term magic but have defined it in different ways and used it in reference to different things. One approach, associated with the anthropologists Edward Tylor (1832–1917) and James G. Frazer (1854–1941), uses the term to describe beliefs in hidden sympathies between objects that allow one to influence the other. Defined in this way, magic is portrayed as the opposite to science. An alternative approach, associated with the sociologist Marcel Mauss (1872–1950) and his uncle Émile Durkheim (1858–1917), employs the term to describe private rites and ceremonies and contrasts it with religion, which it defines as a communal and organised activity. By the 1990s many scholars were rejecting the term's utility for scholarship. They argued that the label drew arbitrary lines between similar beliefs and practices that were alternatively considered religious, and that it was ethnocentric to apply the connotations of magic—rooted in Western and Christian history—to other cultures.
Branches or types
High and low
Historians and anthropologists have distinguished between practitioners who engage in high magic, and those who engage in low magic. High magic, also known as theurgy and ceremonial or ritual magic, is more complex, involving lengthy and detailed rituals as well as sophisticated, sometimes expensive, paraphernalia. Low magic and natural magic are associated with peasants and folklore, and with simpler rituals such as brief spoken spells. Low magic is also closely associated with sorcery and witchcraft. Anthropologist Susan Greenwood writes that "Since the Renaissance, high magic has been concerned with drawing down forces and energies from heaven" and achieving unity with divinity. High magic is usually performed indoors while witchcraft is often performed outdoors.
White, gray and black
Historian Owen Davies says the term "white witch" was rarely used before the 20th century. White magic is understood as the use of magic for selfless or helpful purposes, while black magic was used for selfish, harmful or evil purposes. Black magic is the malicious counterpart of the benevolent white magic. There is no consensus as to what constitutes white, gray or black magic, as Phil Hine says, "like many other aspects of occultism, what is termed to be 'black magic' depends very much on who is doing the defining." Gray magic, also called "neutral magic", is magic that is not performed for specifically benevolent reasons, but is also not focused towards completely hostile practices.
Witchcraft
The historian Ronald Hutton notes the presence of four distinct meanings of the term witchcraft in the English language. Historically, the term primarily referred to the practice of causing harm to others through supernatural or magical means. This remains, according to Hutton, "the most widespread and frequent" understanding of the term. Moreover, Hutton also notes three other definitions in current usage; to refer to anyone who conducts magical acts, for benevolent or malevolent intent; for practitioners of the modern Pagan religion of Wicca; or as a symbol of women resisting male authority and asserting an independent female authority. Belief in witchcraft is often present within societies and groups whose cultural framework includes a magical world view.
Those regarded as being magicians have often faced suspicion from other members of their society. This is particularly the case if these perceived magicians have been associated with social groups already considered morally suspect in a particular society, such as foreigners, women, or the lower classes. In contrast to these negative associations, many practitioners of activities that have been labelled magical have emphasised that their actions are benevolent and beneficial. This conflicted with the common Christian view that all activities categorised as being forms of magic were intrinsically bad regardless of the intent of the magician, because all magical actions relied on the aid of demons. There could be conflicting attitudes regarding the practices of a magician; in European history, authorities often believed that cunning folk and traditional healers were harmful because their practices were regarded as magical and thus stemming from contact with demons, whereas a local community might value and respect these individuals because their skills and services were deemed beneficial.
In Western societies, the practice of magic, especially when harmful, was usually associated with women. For instance, during the witch trials of the early modern period, around three quarters of those executed as witches were female, to only a quarter who were men. That women were more likely to be accused and convicted of witchcraft in this period might have been because their position was more legally vulnerable, with women having little or no legal standing that was independent of their male relatives. The conceptual link between women and magic in Western culture may be because many of the activities regarded as magical—from rites to encourage fertility to potions to induce abortions—were associated with the female sphere. It might also be connected to the fact that many cultures portrayed women as being inferior to men on an intellectual, moral, spiritual, and physical level.
History
Mesopotamia
Magic was invoked in many kinds of rituals and medical formulae, and to counteract evil omens. Defensive or legitimate magic in Mesopotamia (asiputu or masmassutu in the Akkadian language) consisted of incantations and ritual practices intended to alter specific realities. The ancient Mesopotamians believed that magic was the only viable defense against demons, ghosts, and evil sorcerers. To defend themselves against the spirits of those they had wronged, they would leave offerings known as kispu in the person's tomb in hope of appeasing them. If that failed, they also sometimes took a figurine of the deceased and buried it in the ground, demanding that the gods eradicate the spirit, or force it to leave the person alone.
The ancient Mesopotamians also used magic intending to protect themselves from evil sorcerers who might place curses on them. Black magic as a category did not exist in ancient Mesopotamia, and a person legitimately using magic to defend themselves against illegitimate magic would use exactly the same techniques. The only major difference was that curses were enacted in secret; whereas a defense against sorcery was conducted in the open, in front of an audience if possible. One ritual to punish a sorcerer was known as Maqlû, or "The Burning". The person viewed as being afflicted by witchcraft would create an effigy of the sorcerer and put it on trial at night. Then, once the nature of the sorcerer's crimes had been determined, the person would burn the effigy and thereby break the sorcerer's power over them.
The ancient Mesopotamians also performed magical rituals to purify themselves of sins committed unknowingly. One such ritual was known as the Šurpu, or "Burning", in which the caster of the spell would transfer the guilt for all their misdeeds onto various objects such as a strip of dates, an onion, and a tuft of wool. The person would then burn the objects and thereby purify themself of all sins that they might have unknowingly committed. A whole genre of love spells existed. Such spells were believed to cause a person to fall in love with another person, restore love which had faded, or cause a male sexual partner to be able to sustain an erection when he had previously been unable. Other spells were used to reconcile a man with his patron deity or to reconcile a wife with a husband who had been neglecting her.
The ancient Mesopotamians made no distinction between rational science and magic. When a person became ill, doctors would prescribe both magical formulas to be recited as well as medicinal treatments. Most magical rituals were intended to be performed by an āšipu, an expert in the magical arts. The profession was generally passed down from generation to generation and was held in extremely high regard; its practitioners often served as advisors to kings and great leaders. An āšipu probably served not only as a magician, but also as a physician, a priest, a scribe, and a scholar.
The Sumerian god Enki, who was later syncretized with the East Semitic god Ea, was closely associated with magic and incantations; he was the patron god of the bārȗ and the ašipū and was widely regarded as the ultimate source of all arcane knowledge. The ancient Mesopotamians also believed in omens, which could come when solicited or unsolicited. Regardless of how they came, omens were always taken with the utmost seriousness.
Incantation bowls
A common set of shared assumptions about the causes of evil and how to avert it are found in a form of early protective magic called incantation bowl or magic bowls. The bowls were produced in the Middle East, particularly in Upper Mesopotamia and Syria, what is now Iraq and Iran, and fairly popular during the sixth to eighth centuries. The bowls were buried face down and were meant to capture demons. They were commonly placed under the threshold, courtyards, in the corner of the homes of the recently deceased and in cemeteries. A subcategory of incantation bowls are those used in Jewish magical practice. Aramaic incantation bowls are an important source of knowledge about Jewish magical practices.
Egypt
In ancient Egypt (Kemet in the Egyptian language), Magic (personified as the god heka) was an integral part of religion and culture which is known to us through a substantial corpus of texts which are products of the Egyptian tradition.
While the category magic has been contentious for modern Egyptology, there is clear support for its applicability from ancient terminology. The Coptic term hik is the descendant of the pharaonic term heka, which, unlike its Coptic counterpart, had no connotation of impiety or illegality, and is attested from the Old Kingdom through to the Roman era. Heka was considered morally neutral and was applied to the practices and beliefs of both foreigners and Egyptians alike. The Instructions for Merikare informs us that heka was a beneficence gifted by the creator to humanity "in order to be weapons to ward off the blow of events".
Magic was practiced by both the literate priestly hierarchy and by illiterate farmers and herdsmen, and the principle of heka underlay all ritual activity, both in the temples and in private settings.
The main principle of heka is centered on the power of words to bring things into being. Karenga explains the pivotal power of words and their vital ontological role as the primary tool used by the creator to bring the manifest world into being. Because humans were understood to share a divine nature with the gods, as snnw ntr (images of the god), they were understood to share the same power to use words creatively that the gods have.
Book of the Dead
The interior walls of the pyramid of Unas, the final pharaoh of the Egyptian Fifth Dynasty, are covered in hundreds of magical spells and inscriptions, running from floor to ceiling in vertical columns. These inscriptions are known as the Pyramid Texts and they contain spells needed by the pharaoh in order to survive in the afterlife. The Pyramid Texts were strictly for royalty only; the spells were kept secret from commoners and were written only inside royal tombs. During the chaos and unrest of the First Intermediate Period, however, tomb robbers broke into the pyramids and saw the magical inscriptions. Commoners began learning the spells and, by the beginning of the Middle Kingdom, commoners began inscribing similar writings on the sides of their own coffins, hoping that doing so would ensure their own survival in the afterlife. These writings are known as the Coffin Texts.
After a person died, his or her corpse would be mummified and wrapped in linen bandages to ensure that the deceased's body would survive for as long as possible because the Egyptians believed that a person's soul could only survive in the afterlife for as long as his or her physical body survived here on earth. The last ceremony before a person's body was sealed away inside the tomb was known as the Opening of the Mouth. In this ritual, the priests would touch various magical instruments to various parts of the deceased's body, thereby giving the deceased the ability to see, hear, taste, and smell in the afterlife.
Amulets
The use of amulets (meket) was widespread among both living and dead ancient Egyptians. They were used for protection and as a means of "reaffirming the fundamental fairness of the universe". The oldest amulets found are from the predynastic Badarian Period, and they persisted through to Roman times.
Judea
In the Mosaic Law, practices such as witchcraft (), being a soothsayer () or a sorcerer () or one who conjures spells () or one who calls up the dead () are specifically forbidden as abominations to the Lord.
Halakha (Jewish religious law) forbids divination and other forms of soothsaying, and the Talmud lists many persistent yet condemned divining practices. Practical Kabbalah in historical Judaism is a branch of the Jewish mystical tradition that concerns the use of magic. It was considered permitted white magic by its practitioners, reserved for the elite, who could separate its spiritual source from qlippothic realms of evil if performed under circumstances that were holy (Q-D-Š) and pure (). The concern of overstepping Judaism's strong prohibitions of impure magic ensured it remained a minor tradition in Jewish history. Its teachings include the use of Divine and angelic names for amulets and incantations. These magical practices of Judaic folk religion which became part of practical Kabbalah date from Talmudic times. The Talmud mentions the use of charms for healing, and a wide range of magical cures were sanctioned by rabbis. It was ruled that any practice actually producing a cure was not to be regarded superstitiously, and medicinal amulets and folk remedies () were widely used in Jewish societies across time and geography.
Although magic was forbidden by Levitical law in the Hebrew Bible, it was widely practised in the late Second Temple period, and particularly well documented in the period following the destruction of the temple into the 3rd, 4th, and 5th centuries CE.
Greco-Roman world
During the late sixth and early fifth centuries BCE, the Persian maguš was Graecicized and introduced into the ancient Greek language as μάγος and μαγεία. In doing so it transformed meaning, gaining negative connotations, with the magos being regarded as a charlatan whose ritual practices were fraudulent, strange, unconventional, and dangerous. As noted by Davies, for the ancient Greeks—and subsequently for the ancient Romans—"magic was not distinct from religion but rather an unwelcome, improper expression of it—the religion of the other". The historian Richard Gordon suggested that for the ancient Greeks, being accused of practicing magic was "a form of insult".
This change in meaning was influenced by the military conflicts that the Greek city-states were then engaged in against the Persian Empire. In this context, the term makes appearances in such surviving text as Sophocles' Oedipus Rex, Hippocrates' De morbo sacro, and Gorgias' Encomium of Helen. In Sophocles' play, for example, the character Oedipus derogatorily refers to the seer Tiresius as a magos—in this context meaning something akin to quack or charlatan—reflecting how this epithet was no longer reserved only for Persians.
In the first century BCE, the Greek concept of the magos was adopted into Latin and used by a number of ancient Roman writers as magus and magia. The earliest known Latin use of the term was in Virgil's Eclogue, written around 40 BCE, which makes reference to magicis ... sacris (magic rites). The Romans already had other terms for the negative use of supernatural powers, such as veneficus and saga. The Roman use of the term was similar to that of the Greeks, but placed greater emphasis on the judicial application of it. Within the Roman Empire, laws would be introduced criminalising things regarded as magic.
In ancient Roman society, magic was associated with societies to the east of the empire; the first century CE writer Pliny the Elder for instance claimed that magic had been created by the Iranian philosopher Zoroaster, and that it had then been brought west into Greece by the magician Osthanes, who accompanied the military campaigns of the Persian King Xerxes.
Ancient Greek scholarship of the 20th century, almost certainly influenced by Christianising preconceptions of the meanings of magic and religion, and the wish to establish Greek culture as the foundation of Western rationality, developed a theory of ancient Greek magic as primitive and insignificant, and thereby essentially separate from Homeric, communal (polis) religion. Since the last decade of the century, however, recognising the ubiquity and respectability of acts such as katadesmoi (binding spells), described as magic by modern and ancient observers alike, scholars have been compelled to abandon this viewpoint. The Greek word mageuo (practice magic) itself derives from the word Magos, originally simply the Greek name for a Persian tribe known for practicing religion. Non-civic mystery cults have been similarly re-evaluated.
Katadesmoi (), curses inscribed on wax or lead tablets and buried underground, were frequently executed by all strata of Greek society, sometimes to protect the entire polis. Communal curses carried out in public declined after the Greek classical period, but private curses remained common throughout antiquity. They were distinguished as magical by their individualistic, instrumental and sinister qualities. These qualities, and their perceived deviation from inherently mutable cultural constructs of normality, most clearly delineate ancient magic from the religious rituals of which they form a part.
A large number of magical papyri, in Greek, Coptic, and Demotic, have been recovered and translated. They contain early instances of:
the use of magic words said to have the power to command spirits;
the use of mysterious symbols or sigils which are thought to be useful when invoking or evoking spirits.
The practice of magic was banned in the late Roman world, and was formally proscribed in the Codex Theodosianus (438 AD).
Middle Ages
Magic practices such as divination, interpretation of omens, sorcery, and use of charms had been specifically forbidden in Mosaic Law and condemned in Biblical histories of the kings. Many of these practices were spoken against in the New Testament as well.
Some commentators say that in the first century CE, early Christian authors absorbed the Greco-Roman concept of magic and incorporated it into their developing Christian theology, and that these Christians retained the already implied Greco-Roman negative stereotypes of the term and extended them by incorporating conceptual patterns borrowed from Jewish thought, in particular the opposition of magic and miracle. Some early Christian authors followed Greco-Roman thinking by ascribing the origin of magic to the human realm, mainly to Zoroaster and Osthanes. The Christian view was that magic was a product of the Babylonians, Persians, or Egyptians. The Christians shared with earlier classical culture the idea that magic was something distinct from proper religion, although they drew the distinction between the two in different ways.
For early Christian writers like Augustine of Hippo, magic did not merely constitute fraudulent and unsanctioned ritual practices, but was the very opposite of religion because it relied upon cooperation from demons, the henchmen of Satan. In this, Christian ideas of magic were closely linked to the Christian category of paganism, and both magic and paganism were regarded as belonging under the broader category of superstitio (superstition), another term borrowed from pre-Christian Roman culture. This Christian emphasis on the inherent immorality and wrongness of magic as something conflicting with good religion was far starker than the approach in the other large monotheistic religions of the period, Judaism and Islam. For instance, while Christians regarded demons as inherently evil, the jinn—comparable entities in Islamic mythology—were perceived as more ambivalent figures by Muslims.
The model of the magician in Christian thought was provided by Simon Magus, (Simon the Magician), a figure who opposed Saint Peter in both the Acts of the Apostles and the apocryphal yet influential Acts of Peter. The historian Michael D. Bailey stated that in medieval Europe, magic was a "relatively broad and encompassing category". Christian theologians believed that there were multiple different forms of magic, the majority of which were types of divination, for instance, Isidore of Seville produced a catalogue of things he regarded as magic in which he listed divination by the four elements i.e. geomancy, hydromancy, aeromancy, and pyromancy, as well as by observation of natural phenomena e.g. the flight of birds and astrology. He also mentioned enchantment and ligatures (the medical use of magical objects bound to the patient) as being magical. Medieval Europe also saw magic come to be associated with the Old Testament figure of Solomon; various grimoires, or books outlining magical practices, were written that claimed to have been written by Solomon, most notably the Key of Solomon.
In early medieval Europe, magia was a term of condemnation. In medieval Europe, Christians often suspected Muslims and Jews of engaging in magical practices; in certain cases, these perceived magical rites—including the alleged Jewish sacrifice of Christian children—resulted in Christians massacring these religious minorities. Christian groups often also accused other, rival Christian groups such as the Hussites—which they regarded as heretical—of engaging in magical activities. Medieval Europe also saw the term maleficium applied to forms of magic that were conducted with the intention of causing harm. The later Middle Ages saw words for these practitioners of harmful magical acts appear in various European languages: sorcière in French, Hexe in German, strega in Italian, and bruja in Spanish. The English term for malevolent practitioners of magic, witch, derived from the earlier Old English term wicce.
Ars magica, or magic, was a major component of and support to the belief in and practice of spiritual, and in many cases physical, healing throughout the Middle Ages. Many modern interpretations carry a trail of misconceptions about magic, one of the largest revolving around wickedness or the existence of nefarious beings who practice it. These misinterpretations stem from the numerous acts and rituals performed throughout antiquity which, owing to their exoticism from the commoner's perspective, invoked uneasiness and an even stronger sense of dismissal.
In the Medieval Jewish view, the separation of the mystical and magical elements of Kabbalah, dividing it into speculative theological Kabbalah (Kabbalah Iyyunit) with its meditative traditions, and theurgic practical Kabbalah (Kabbalah Ma'asit), had occurred by the beginning of the 14th century.
The Christian Church, a societal force in the Middle Ages more powerful than any single commoner, rejected magic as a whole because it was viewed as a means of tampering with the natural world in a supernatural manner associated with the biblical verses of Deuteronomy 18:9–12. Despite the many negative connotations which surround the term magic, many elements of it were seen in a divine or holy light.
The divine right of kings in England was thought to give them a "sacred magic" power to heal thousands of their subjects of sickness.
Diversified instruments or rituals used in medieval magic include, but are not limited to: various amulets, talismans, potions, as well as specific chants, dances, and prayers. Alongside these rituals ran notions of demonic participation influencing them. The idea that magic was devised, taught, and worked by demons would have seemed reasonable to anyone who read the Greek magical papyri or the Sefer-ha-Razim and found that healing magic appeared alongside rituals for killing people, gaining wealth, or personal advantage, and coercing women into sexual submission. Archaeology is contributing to a fuller understanding of ritual practices performed in the home, on the body and in monastic and church settings.
The Islamic reaction towards magic did not condemn magic in general and distinguished between magic which can heal sickness and possession, and sorcery. The former is therefore a special gift from God, while the latter is achieved through help of Jinn and devils. Ibn al-Nadim held that exorcists gain their power by their obedience to God, while sorcerers please the devils by acts of disobedience and sacrifices and they in return do him a favor. According to Ibn Arabi, Al-Ḥajjāj ibn Yusuf al-Shubarbuli was able to walk on water due to his piety. According to the Quran 2:102, magic was also taught to humans by devils and the angels Harut and Marut.
The influence of Arab Islamic magic in medieval and Renaissance Europe was very notable. Some magic books such as Picatrix and Al Kindi's De Radiis were the basis for much of medieval magic in Europe and for subsequent developments in the Renaissance. Another Arab Muslim author fundamental to the developments of medieval and Renaissance European magic was Ahmad al-Buni, with his books such as the Shams al-Ma'arif which deal above all with the evocation and invocation of spirits or jinn to control them, obtain powers and make wishes come true. These books are still important to the Islamic world specifically in Simiyya, a doctrine found commonly within Sufi-occult traditions.
During the early modern period, the concept of magic underwent a more positive reassessment through the development of the concept of magia naturalis (natural magic). This was a term introduced and developed by two Italian humanists, Marsilio Ficino and Giovanni Pico della Mirandola. For them, magia was viewed as an elemental force pervading many natural processes, and thus was fundamentally distinct from the mainstream Christian idea of demonic magic. Their ideas influenced an array of later philosophers and writers, among them Paracelsus, Giordano Bruno, Johannes Reuchlin, and Johannes Trithemius. According to the historian Richard Kieckhefer, the concept of magia naturalis took "firm hold in European culture" during the fourteenth and fifteenth centuries, attracting the interest of natural philosophers of various theoretical orientations, including Aristotelians, Neoplatonists, and Hermeticists.
Adherents of this position argued that magia could appear in both good and bad forms; in 1625, the French librarian Gabriel Naudé wrote his Apology for all the Wise Men Falsely Suspected of Magic, in which he distinguished "Mosoaicall Magick"—which he claimed came from God and included prophecies, miracles, and speaking in tongues—from "geotick" magic caused by demons. While the proponents of magia naturalis insisted that this did not rely on the actions of demons, critics disagreed, arguing that the demons had simply deceived these magicians. By the seventeenth century the concept of magia naturalis had moved in increasingly 'naturalistic' directions, with the distinctions between it and science becoming blurred. The validity of magia naturalis as a concept for understanding the universe then came under increasing criticism during the Age of Enlightenment in the eighteenth century.
Despite the attempt to reclaim the term magia for use in a positive sense, it did not supplant traditional attitudes toward magic in the West, which remained largely negative. At the same time as magia naturalis was attracting interest and was largely tolerated, Europe saw an active persecution of accused witches believed to be guilty of maleficia. Reflecting the term's continued negative associations, Protestants often sought to denigrate Roman Catholic sacramental and devotional practices as being magical rather than religious. Many Roman Catholics were concerned by this allegation and for several centuries various Roman Catholic writers devoted attention to arguing that their practices were religious rather than magical. At the same time, Protestants often used the accusation of magic against other Protestant groups which they were in contest with. In this way, the concept of magic was used to prescribe what was appropriate as religious belief and practice.
Similar claims were also being made in the Islamic world during this period. The Arabian cleric Muhammad ibn Abd al-Wahhab—founder of Wahhabism—for instance condemned a range of customs and practices such as divination and the veneration of spirits as sihr, which he in turn claimed was a form of shirk, the sin of idolatry.
The Renaissance
Renaissance humanism saw a resurgence in hermeticism and Neo-Platonic varieties of ceremonial magic. At the same time, the Renaissance saw the rise of science, in such forms as the dethronement of the Ptolemaic theory of the universe, the distinction of astronomy from astrology, and of chemistry from alchemy.
There was great uncertainty in distinguishing practices of superstition, occultism, and perfectly sound scholarly knowledge or pious ritual. The intellectual and spiritual tensions erupted in the Early Modern witch craze, further reinforced by the turmoil of the Protestant Reformation, especially in Germany, England, and Scotland.
In Hasidism, the displacement of practical Kabbalah using directly magical means, by conceptual and meditative trends gained much further emphasis, while simultaneously instituting meditative theurgy for material blessings at the heart of its social mysticism. Hasidism internalised Kabbalah through the psychology of deveikut (cleaving to God), and cleaving to the Tzadik (Hasidic Rebbe). In Hasidic doctrine, the tzaddik channels Divine spiritual and physical bounty to his followers by altering the Will of God (uncovering a deeper concealed Will) through his own deveikut and self-nullification. Dov Ber of Mezeritch is concerned to distinguish this theory of the Tzadik's will altering and deciding the Divine Will, from directly magical process.
In the sixteenth century, European societies began to conquer and colonise other continents around the world, and as they did so they applied European concepts of magic and witchcraft to practices found among the peoples whom they encountered. Usually, these European colonialists regarded the natives as primitives and savages whose belief systems were diabolical and needed to be eradicated and replaced by Christianity. Because Europeans typically viewed these non-European peoples as being morally and intellectually inferior to themselves, it was expected that such societies would be more prone to practicing magic. Women who practiced traditional rites were labelled as witches by the Europeans.
In various cases, these imported European concepts and terms underwent new transformations as they merged with indigenous concepts. In West Africa, for instance, Portuguese travellers introduced their term and concept of the feitiçaria (often translated as sorcery) and the feitiço (spell) to the native population, where it was transformed into the concept of the fetish. When later Europeans encountered these West African societies, they wrongly believed that the fetiche was an indigenous African term rather than the result of earlier inter-continental encounters. Sometimes, colonised populations themselves adopted these European concepts for their own purposes. In the early nineteenth century, the newly independent Haitian government of Jean-Jacques Dessalines began to suppress the practice of Vodou, and in 1835 Haitian law-codes categorised all Vodou practices as sortilège (sorcery/witchcraft), suggesting that it was all conducted with harmful intent, whereas among Vodou practitioners the performance of harmful rites was already given a separate and distinct category, known as maji.
Baroque period
During the Baroque era, several intriguing figures engaged with occult and magical themes that went beyond conventional thinking. Michael Sendivogius (1566–1636), a Polish alchemist, emphasized empirical experimentation in alchemy and made notable contributions to early chemistry. Tommaso Campanella (1568–1639), an Italian philosopher, blended Christianity with mysticism in works like The City of the Sun, envisioning an ideal society governed by divine principles. Jakob Böhme (1575–1624), a German mystic, explored the relationship between the divine and human experience, influencing later mystical movements.
Jan Baptist van Helmont, a Flemish chemist, coined the term "gas" and conducted experiments on plant growth, expanding the understanding of chemistry. Sir Kenelm Digby, known for his diverse interests, created the "Sympathetic Powder", believed to have mystical healing properties. Isaac Newton, famous for his scientific achievements, also delved into alchemy and collected esoteric manuscripts, revealing his fascination with hidden knowledge. These individuals collectively embody the curiosity and exploration characteristic of the Baroque period.
Modernity
By the nineteenth century, European intellectuals no longer saw the practice of magic through the framework of sin and instead regarded magical practices and beliefs as "an aberrational mode of thought antithetical to the dominant cultural logic—a sign of psychological impairment and marker of racial or cultural inferiority".
As educated elites in Western societies increasingly rejected the efficacy of magical practices, legal systems ceased to threaten practitioners of magical activities with punishment for the crimes of diabolism and witchcraft, and instead threatened them with the accusation that they were defrauding people through promising to provide things which they could not.
This spread of European colonial power across the world influenced how academics would come to frame the concept of magic. In the nineteenth century, several scholars adopted the traditional, negative concept of magic. That they chose to do so was not inevitable, for they could have followed the example adopted by prominent esotericists active at the time like Helena Blavatsky who had chosen to use the term and concept of magic in a positive sense.
Various writers also used the concept of magic to criticise religion by arguing that the latter still displayed many of the negative traits of the former. An example of this was the American journalist H. L. Mencken in his polemical 1930 work Treatise on the Gods; he sought to critique religion by comparing it to magic, arguing that the division between the two was misplaced. The concept of magic was also adopted by theorists in the new field of psychology, where it was often used synonymously with superstition, although the latter term proved more common in early psychological texts.
In the late nineteenth and twentieth centuries, folklorists examined rural communities across Europe in search of magical practices, which at the time they typically understood as survivals of ancient belief systems. It was only in the 1960s that anthropologists like Jeanne Favret-Saada also began looking in depth at magic in European contexts, having previously focused on examining magic in non-Western contexts. In the twentieth century, magic also proved a topic of interest to the Surrealists, an artistic movement based largely in Europe; the Surrealist André Breton for instance published L'Art magique in 1957, discussing what he regarded as the links between magic and art.
The scholarly application of magic as a sui generis category that can be applied to any socio-cultural context was linked with the promotion of modernity to both Western and non-Western audiences.
The term magic has become pervasive in the popular imagination and idiom.
In contemporary contexts, the word magic is sometimes used to "describe a type of excitement, of wonder, or sudden delight", and in such a context can be "a term of high praise". Despite its historical contrast against science, scientists have also adopted the term in application to various concepts, such as magic acid, magic bullets, and magic angles.
Modern Western magic has challenged widely-held preconceptions about contemporary religion and spirituality.
The polemical discourses about magic influenced the self-understanding of modern magicians, several of whom—such as Aleister Crowley—were well versed in academic literature on the subject.
According to scholar of religion Henrik Bogdan, "arguably the best known emic definition" of the term magic was provided by Crowley. Crowley—who favoured the spelling 'magick' over magic to distinguish it from stage illusionism—was of the view that "Magick is the Science and Art of causing Change to occur in conformity with Will". Crowley's definition influenced that of subsequent magicians. Dion Fortune, the founder of the Fraternity of the Inner Light, for instance stated that "Magic is the art of changing consciousness according to Will". Gerald Gardner, the founder of Gardnerian Wicca, stated that magic was "attempting to cause the physically unusual", while Anton LaVey, the founder of LaVeyan Satanism, described magic as "the change in situations or events in accordance with one's will, which would, using normally acceptable methods, be unchangeable".
The chaos magic movement emerged during the late 20th century, as an attempt to strip away the symbolic, ritualistic, theological or otherwise ornamental aspects of other occult traditions and distill magic down to a set of basic techniques.
These modern Western concepts of magic rely on a belief in correspondences connected to an unknown occult force that permeates the universe. As noted by Hanegraaff, this operated according to "a new meaning of magic, which could not possibly have existed in earlier periods, precisely because it is elaborated in reaction to the 'disenchantment of the world'".
For many, and perhaps most, modern Western magicians, the goal of magic is deemed to be personal spiritual development. The perception of magic as a form of self-development is central to the way that magical practices have been adopted into forms of modern Paganism and the New Age phenomenon. One significant development within modern Western magical practices has been sex magic. This was a practice promoted in the writings of Paschal Beverly Randolph and subsequently exerted a strong interest on occultist magicians like Crowley and Theodor Reuss.
The adoption of the term magic by modern occultists can in some instances be a deliberate attempt to champion those areas of Western society which have traditionally been marginalised as a means of subverting dominant systems of power. The influential American Wiccan and author Starhawk for instance stated that "Magic is another word that makes people uneasy, so I use it deliberately, because the words we are comfortable with, the words that sound acceptable, rational, scientific, and intellectually correct, are comfortable precisely because they are the language of estrangement." In the present day, "among some countercultural subgroups the label is considered 'cool'".
Conceptual development
According to anthropologist Edward Evan Evans-Pritchard, magic formed a rational framework of beliefs and knowledge in some cultures, like the Azande people of Africa. The historian Owen Davies stated that the word magic was "beyond simple definition", and had "a range of meanings". Similarly, the historian Michael D. Bailey characterised magic as "a deeply contested category and a very fraught label"; as a category, he noted, it was "profoundly unstable" given that definitions of the term have "varied dramatically across time and between cultures". Scholars have engaged in extensive debates as to how to define magic, with such debates resulting in intense dispute. Throughout such debates, the scholarly community has failed to agree on a definition of magic, in a similar manner to how they have failed to agree on a definition of religion. According to scholar of religion Michael Stausberg, the phenomenon of people applying the concept of magic to refer to themselves and their own practices and beliefs goes as far back as late antiquity. However, even among those throughout history who have described themselves as magicians, there has been no common ground of what magic is.
In Africa, the word magic may simply denote the management of forces, an activity that is not morally weighted: it is neutral at the outset of a magical practice, but through the will of the magician it is thought to acquire an outcome that represents either good or bad (evil). Ancient African cultures customarily distinguished magic from a group of related things that are not magic: medicine, divination, witchcraft and sorcery. Opinion differs on how religion and magic developed in relation to one another: some think they developed together from a shared origin, some think religion developed from magic, and some that magic developed from religion.
Anthropological and sociological theories of magic generally serve to sharply demarcate certain practices from other, otherwise similar practices in a given society. According to Bailey: "In many cultures and across various historical periods, categories of magic often define and maintain the limits of socially and culturally acceptable actions in respect to numinous or occult entities or forces. Even more, basically, they serve to delineate arenas of appropriate belief." In this, he noted that "drawing these distinctions is an exercise in power". This tendency has had repercussions for the study of magic, with academics self-censoring their research because of the effects on their careers.
Randall Styers noted that attempting to define magic represents "an act of demarcation" by which it is juxtaposed against "other social practices and modes of knowledge" such as religion and science. The historian Karen Louise Jolly described magic as "a category of exclusion, used to define an unacceptable way of thinking as either the opposite of religion or of science".
Modern scholarship has produced various definitions and theories of magic. According to Bailey, "these have typically framed magic in relation to, or more frequently in distinction from, religion and science." Since the emergence of the study of religion and the social sciences, magic has been a "central theme in the theoretical literature" produced by scholars operating in these academic disciplines. Magic is one of the most heavily theorized concepts in the study of religion, and also played a key role in early theorising within anthropology. Styers believed that it held such a strong appeal for social theorists because it provides "such a rich site for articulating and contesting the nature and boundaries of modernity". Scholars have commonly used it as a foil for the concept of religion, regarding magic as the "illegitimate (and effeminized) sibling" of religion. Alternately, others have used it as a middle-ground category located between religion and science.
The context in which scholars framed their discussions of magic was informed by the spread of European colonial power across the world in the modern period.
These repeated attempts to define magic resonated with broader social concerns, and the pliability of the concept has allowed it to be "readily adaptable as a polemical and ideological tool". The links that intellectuals made between magic and those they characterized as primitives helped to legitimise European and Euro-American imperialism and colonialism, as these Western colonialists expressed the view that those who believed in and practiced magic were unfit to govern themselves and should be governed by those who, rather than believing in magic, believed in science and/or (Christian) religion. In Bailey's words, "the association of certain peoples [whether non-Europeans or poor, rural Europeans] with magic served to distance and differentiate them from those who ruled over them, and in large part to justify that rule."
Many different definitions of magic have been offered by scholars, although—according to Hanegraaff—these can be understood as variations of a small number of heavily influential theories.
Intellectualist approach
The intellectualist approach to defining magic is associated with two British anthropologists, Edward Tylor and James G. Frazer. This approach viewed magic as the theoretical opposite of science, and came to preoccupy much anthropological thought on the subject. This approach was situated within the evolutionary models which underpinned thinking in the social sciences during the early 19th century. The first social scientist to present magic as something that predated religion in an evolutionary development was Herbert Spencer; in his A System of Synthetic Philosophy, he used the term magic in reference to sympathetic magic. Spencer regarded both magic and religion as being rooted in false speculation about the nature of objects and their relationship to other things.
Tylor's understanding of magic was linked to his concept of animism. In his 1871 book Primitive Culture, Tylor characterized magic as beliefs based on "the error of mistaking ideal analogy for real analogy". In Tylor's view, "primitive man, having come to associate in thought those things which he found by experience to be connected in fact, proceeded erroneously to invert this action, and to conclude that association in thought must involve similar connection in reality. He thus attempted to discover, to foretell, and to cause events by means of processes which we can now see to have only an ideal significance". Tylor was dismissive of magic, describing it as "one of the most pernicious delusions that ever vexed mankind". Tylor's views proved highly influential, and helped to establish magic as a major topic of anthropological research.
Tylor's ideas were adopted and simplified by James Frazer. He used the term magic to mean sympathetic magic, describing it as a practice relying on the magician's belief "that things act on each other at a distance through a secret sympathy", something which he described as "an invisible ether". He further divided this magic into two forms, the "homeopathic (imitative, mimetic)" and the "contagious". The former was the idea that "like produces like", or that the similarity between two objects could result in one influencing the other. The latter was based on the idea that contact between two objects allowed the two to continue to influence one another at a distance. Like Tylor, Frazer viewed magic negatively, describing it as "the bastard sister of science", arising from "one great disastrous fallacy".
Where Frazer differed from Tylor was in characterizing a belief in magic as a major stage in humanity's cultural development, describing it as part of a tripartite division in which magic came first, religion came second, and eventually science came third. For Frazer, all early societies started as believers in magic, with some of them moving away from this and into religion. He believed that both magic and religion involved a belief in spirits but that they differed in the way that they responded to these spirits. For Frazer, magic "constrains or coerces" these spirits while religion focuses on "conciliating or propitiating them". He acknowledged that their common ground resulted in a cross-over of magical and religious elements in various instances; for instance he claimed that the sacred marriage was a fertility ritual which combined elements from both world-views.
Some scholars retained the evolutionary framework used by Frazer but changed the order of its stages; the German ethnologist Wilhelm Schmidt argued that religion—by which he meant monotheism—was the first stage of human belief, which later degenerated into both magic and polytheism. Others rejected the evolutionary framework entirely. Frazer's notion that magic had given way to religion as part of an evolutionary framework was later deconstructed by the folklorist and anthropologist Andrew Lang in his essay "Magic and Religion"; Lang did so by highlighting how Frazer's framework relied upon misrepresenting ethnographic accounts of beliefs and practices among indigenous Australians to fit his concept of magic.
Functionalist approach
The functionalist approach to defining magic is associated with the French sociologists Marcel Mauss and Emile Durkheim.
In this approach, magic is understood as being the theoretical opposite of religion.
Mauss set forth his conception of magic in a 1902 essay, "A General Theory of Magic". Mauss used the term magic in reference to "any rite that is not part of an organized cult: a rite that is private, secret, mysterious, and ultimately tending towards one that is forbidden". Conversely, he associated religion with organised cult. By saying that magic was inherently non-social, Mauss had been influenced by the traditional Christian understandings of the concept. Mauss deliberately rejected the intellectualist approach promoted by Frazer, believing that it was inappropriate to restrict the term magic to sympathetic magic, as Frazer had done. He expressed the view that "there are not only magical rites which are not sympathetic, but neither is sympathy a prerogative of magic, since there are sympathetic practices in religion".
Mauss' ideas were adopted by Durkheim in his 1912 book The Elementary Forms of the Religious Life. Durkheim was of the view that both magic and religion pertained to "sacred things, that is to say, things set apart and forbidden". Where he saw them as being different was in their social organisation. Durkheim used the term magic to describe things that were inherently anti-social, existing in contrast to what he referred to as a Church, the religious beliefs shared by a social group; in his words, "There is no Church of magic." Durkheim expressed the view that "there is something inherently anti-religious about the maneuvers of the magician", and that a belief in magic "does not result in binding together those who adhere to it, nor in uniting them into a group leading a common life." Durkheim's definition encounters problems in situations—such as the rites performed by Wiccans—in which acts carried out communally have been regarded, either by practitioners or observers, as being magical.
Scholars have criticized the idea that magic and religion can be differentiated into two distinct, separate categories. The social anthropologist Alfred Radcliffe-Brown suggested that "a simple dichotomy between magic and religion" was unhelpful and thus both should be subsumed under the broader category of ritual. Many later anthropologists followed his example.
Nevertheless, this distinction is still often made by scholars discussing this topic.
Emotionalist approach
The emotionalist approach to magic is associated with the English anthropologist Robert Ranulph Marett, the Austrian Sigmund Freud, and the Polish anthropologist Bronisław Malinowski.
Marett viewed magic as a response to stress. In a 1904 article, he argued that magic was a cathartic or stimulating practice designed to relieve feelings of tension. As his thought developed, he increasingly rejected the idea of a division between magic and religion and began to use the term "magico-religious" to describe the early development of both. Malinowski understood magic similarly to Marett, tackling the issue in a 1925 article. He rejected Frazer's evolutionary hypothesis that magic was followed by religion and then science as a series of distinct stages in societal development, arguing that all three were present in each society. In his view, both magic and religion "arise and function in situations of emotional stress" although whereas religion is primarily expressive, magic is primarily practical. He therefore defined magic as "a practical art consisting of acts which are only means to a definite end expected to follow later on". For Malinowski, magical acts were to be carried out for a specific end, whereas religious ones were ends in themselves. He for instance believed that fertility rituals were magical because they were carried out with the intention of meeting a specific need. As part of his functionalist approach, Malinowski saw magic not as irrational but as something that served a useful function, being sensible within the given social and environmental context.
The term magic was used liberally by Freud. He also saw magic as emerging from human emotion but interpreted it very differently to Marett.
Freud explains that "the associated theory of magic merely explains the paths along which magic proceeds; it does not explain its true essence, namely the misunderstanding which leads it to replace the laws of nature by psychological ones". Freud emphasizes that what led primitive men to come up with magic is the power of wishes: "His wishes are accompanied by a motor impulse, the will, which is later destined to alter the whole face of the earth to satisfy his wishes. This motor impulse is at first employed to give a representation of the satisfying situation in such a way that it becomes possible to experience the satisfaction by means of what might be described as motor hallucinations. This kind of representation of a satisfied wish is quite comparable to children's play, which succeeds their earlier purely sensory technique of satisfaction. [...] As time goes on, the psychological accent shifts from the motives for the magical act on to the measures by which it is carried out—that is, on to the act itself. [...] It thus comes to appear as though it is the magical act itself which, owing to its similarity with the desired result, alone determines the occurrence of that result."
In the early 1960s, the anthropologists Murray and Rosalie Wax put forward the argument that scholars should look at the magical worldview of a given society on its own terms rather than trying to rationalize it in terms of Western ideas about scientific knowledge. Their ideas were heavily criticised by other anthropologists, who argued that they had set up a false dichotomy between non-magical Western worldviews and magical non-Western worldviews. The concept of the magical worldview nevertheless gained widespread use in history, folkloristics, philosophy, cultural theory, and psychology. The notion of magical thinking has also been utilised by various psychologists. In the 1920s, the psychologist Jean Piaget used the concept as part of his argument that children were unable to clearly differentiate between the mental and the physical. According to this perspective, children begin to abandon their magical thinking between the ages of six and nine.
According to Stanley Tambiah, magic, science, and religion all have their own "quality of rationality", and have been influenced by politics and ideology. Tambiah suggests that, in magic as opposed to religion, mankind has a much more personal control over events. Science, according to Tambiah, is "a system of behavior by which man acquires mastery of the environment."
Ethnocentrism
The magic-religion-science triangle developed in European society based on evolutionary ideas, i.e. that magic evolved into religion, which in turn evolved into science. However, using a Western analytical tool when discussing non-Western cultures, or pre-modern forms of Western society, raises problems, as it may impose alien Western categories on them. While magic remains an emic (insider) term in the history of Western societies, it remains an etic (outsider) term when applied to non-Western societies and even within specific Western societies. For this reason, academics like Michael D. Bailey suggest abandoning the term altogether as an academic category. During the twentieth century, many scholars focusing on Asian and African societies rejected the term magic, as well as related concepts like witchcraft, in favour of the more precise terms and concepts that existed within these specific societies, like juju. A similar approach has been taken by many scholars studying pre-modern societies in Europe, such as Classical antiquity, who find the modern concept of magic inappropriate and favour more specific terms originating within the framework of the ancient cultures which they are studying. Alternately, the term implies that all categories of magic are ethnocentric and that such Western preconceptions are an unavoidable component of scholarly research. This century has seen a trend towards emic ethnographic studies by scholar-practitioners that explicitly explore the emic/etic divide.
Many scholars have argued that the use of the term as an analytical tool within academic scholarship should be rejected altogether. The scholar of religion Jonathan Z. Smith for example argued that it had no utility as an etic term that scholars should use. The historian of religion Wouter Hanegraaff agreed, on the grounds that its use is founded in conceptions of Western superiority and has "...served as a 'scientific' justification for converting non-European peoples from benighted superstitions..." stating that "the term magic is an important object of historical research, but not intended for doing research."
Bailey noted that, as of the early 21st century, few scholars sought grand definitions of magic but instead focused with "careful attention to particular contexts", examining what a term like magic meant to a given society; this approach, he noted, "call[ed] into question the legitimacy of magic as a universal category". The scholars of religion Berndt-Christian Otto and Michael Stausberg suggested that it would be perfectly possible for scholars to talk about amulets, curses, healing procedures, and other cultural practices often regarded as magical in Western culture without any recourse to the concept of magic itself. The idea that magic should be rejected as an analytic term developed in anthropology, before moving into Classical studies and Biblical studies in the 1980s. Since the 1990s, the term's usage among scholars of religion has declined.
Magicians
Many of the practices which have been labelled magic can be performed by anyone. For instance, some charms can be recited by individuals with no specialist knowledge nor any claim to having a specific power. Others require specialised training in order to perform them. Some of the individuals who performed magical acts on a more than occasional basis came to be identified as magicians, or with related concepts like sorcerers/sorceresses, witches, or cunning folk. Identities as a magician can stem from an individual's own claims about themselves, or it can be a label placed upon them by others. In the latter case, an individual could embrace such a label, or they could reject it, sometimes vehemently.
Economic incentives can encourage individuals to identify as magicians. In the cases of various forms of traditional healers, as well as the later stage magicians or illusionists, the label of magician could become a job description. Others claim such an identity out of a genuinely held belief that they have specific unusual powers or talents. Different societies have different social regulations regarding who can take on such a role; for instance, it may be a question of familial heredity, or there may be gender restrictions on who is allowed to engage in such practices. A variety of personal traits may be credited with giving magical power, and frequently they are associated with an unusual birth into the world. For instance, in Hungary it was believed that a táltos would be born with teeth or an additional finger. In various parts of Europe, it was believed that being born with a caul would associate the child with supernatural abilities. In some cases, a ritual initiation is required before taking on a role as a specialist in such practices, and in others it is expected that an individual will receive a mentorship from another specialist.
Davies noted that it was possible to "crudely divide magic specialists into religious and lay categories". He noted for instance that Roman Catholic priests, with their rites of exorcism, and access to holy water and blessed herbs, could be conceived as being magical practitioners. Traditionally, the most common method of identifying, differentiating, and establishing magical practitioners from common people is by initiation. By means of rites the magician's relationship to the supernatural and his entry into a closed professional class is established (often through rituals that simulate death and rebirth into a new life). However, Berger and Ezzy explain that since the rise of Neopaganism, "As there is no central bureaucracy or dogma to determine authenticity, an individual's self-determination as a Witch, Wiccan, Pagan or Neopagan is usually taken at face value". Ezzy argues that practitioners' worldviews have been neglected in many sociological and anthropological studies and that this is because of "a culturally narrow understanding of science that devalues magical beliefs".
Mauss argues that the powers of both specialist and common magicians are determined by culturally accepted standards of the sources and the breadth of magic: a magician cannot simply invent or claim new magic. In practice, the magician is only as powerful as his peers believe him to be.
Throughout recorded history, magicians have often faced skepticism regarding their purported powers and abilities. For instance, in sixteenth-century England, the writer Reginald Scot wrote The Discoverie of Witchcraft, in which he argued that many of those accused of witchcraft or otherwise claiming magical capabilities were fooling people using illusionism.
See also
Books about magic
References
Citations
Works cited
Further reading
External links
Superstitions | Magic (supernatural) | Biology | 13,024 |
39,324,002 | https://en.wikipedia.org/wiki/Relict%20%28biology%29 | In biogeography and paleontology, a relict is a population or taxon of organisms that was more widespread or more diverse in the past. A relictual population is a population currently inhabiting a restricted area whose range was far wider during a previous geologic epoch. Similarly, a relictual taxon is a taxon (e.g. species or other lineage) which is the sole surviving representative of a formerly diverse group.
Definition
A relict (or relic) plant or animal is a taxon that persists as a remnant of what was once a diverse and widespread population. Relictualism occurs when a widespread habitat or range changes and a small area becomes cut off from the whole. A subset of the population is then confined to the available hospitable area, and survives there while the broader population either shrinks or evolves divergently. This phenomenon differs from endemism in that the range of the population was not always restricted to the local region. In other words, the species or group did not necessarily arise in that small area, but rather was stranded, or insularized, by changes over time. The agent of change could be anything from competition from other organisms to continental drift or climate change such as an ice age.
When a relict is representative of taxa found in the fossil record, and yet is still living, such an organism is sometimes referred to as a living fossil. However, a relict need not be currently living. An evolutionary relict is any organism that was characteristic of the flora or fauna of one age and that persisted into a later age, with the later age being characterized by newly evolved flora or fauna significantly different from those that came before.
Examples
A notable example is the thylacine of Tasmania, a relict marsupial carnivore that survived into modern times on an island, whereas the rest of its species on mainland Australia had gone extinct between 3000 and 2000 years ago.
Another example is Omma, a genus of beetle with a fossil record extending back over 200 million years to the Late Triassic and found worldwide during the Jurassic and Cretaceous, now confined to a single living species in Australia. Another relict from the Triassic is Pholadomya, a common clam genus during the Mesozoic, now confined to a single rare species in the Caribbean.
The tuatara endemic to New Zealand is the only living member of the once-diverse reptile order Rhynchocephalia, which has a fossil record stretching back 240 million years and during the Mesozoic era was globally distributed and ecologically diverse.
An example from the fossil record would be a specimen of Nimravidae, an extinct branch of carnivores in the mammalian evolutionary tree, if said specimen came from Europe in the Miocene epoch. If that was the case, the specimen would represent, not the main population, but a last surviving remnant of the nimravid lineage. These carnivores were common and widespread in the previous epoch, the Oligocene, and disappeared when the climate changed and woodlands were replaced by savanna. They persisted in Europe in the last remaining forests as a relict of the Oligocene: a relict species in a relict habitat.
An example of divergent evolution creating relicts is found in the shrews of the islands off the coast of Alaska, namely the Pribilof Island shrew and the St. Lawrence Island shrew. These species are apparently relicts of a time when the islands were connected to the mainland, and these species were once conspecific with a more widespread species, now the cinereus shrew, the three populations having diverged through speciation.
In botany, an example of an ice age relict plant population is the Snowdon lily, notable as being precariously rare in Wales. The Welsh population is confined to the north-facing slopes of Snowdonia, where climatic conditions are apparently similar to ice age Europe. Some have expressed concern that the warming climate will cause the lily to die out in Great Britain. Other populations of the same plant can be found in the Arctic and in the mountains of Europe and North America, where it is known as the common alplily.
While the extirpation of a geographically disjunct population of a relict species may be of regional conservation concern, outright extinction at the species level may occur in this century of rapid climate change if the geographic range occupied by a relict species has already contracted to the degree that it is narrowly endemic. For this reason, the traditional conservation tool of translocation has recently been reframed as assisted migration of narrowly endemic, critically endangered species that are already experiencing (or are soon expected to experience) climate change beyond their levels of tolerance. Two examples of critically endangered relict species for which assisted migration projects are already underway are the western swamp tortoise of Australia and a subcanopy conifer tree in the United States called Florida Torreya.
A well-studied botanical example of a relictual taxon is Ginkgo biloba, the last living representative of Ginkgoales that is restricted to China in the wild. Ginkgo trees had a diverse and widespread northern distribution during the Mesozoic, but are not known from the fossil record after the Pliocene other than G. biloba.
The Saimaa ringed seal (Phoca hispida saimensis) is an endemic subspecies, a relict of the last ice age that lives only in Finland in the landlocked and fragmented Saimaa freshwater lake complex. The population now numbers fewer than 400 individuals, which poses a threat to its survival.
Another example is the relict leopard frog once found throughout Nevada, Arizona, Utah, and Colorado, but now only found at Lake Mead National Recreation Area in Nevada and Arizona.
Relevance
The concept of relictualism is useful in understanding the ecology and conservation status of populations that have become insularized, meaning confined to one small area or multiple small areas with no chance of movement between populations. Insularization makes a population vulnerable to forces that can lead to extinction, such as disease, inbreeding, habitat destruction, competition from introduced species, and global warming. Consider the case of the white-eyed river martin, a very localized species of bird found only in Southeast Asia, and extremely rare, if not already extinct. Its closest and only surviving living relative is the African river martin, also very localized in central Africa. These two species are the only known members of the subfamily Pseudochelidoninae, and their widely disjunct populations suggest they are relict populations of a more common and widespread ancestor. Known to science only since 1968, it seems to have disappeared.
Studies have been done on relict populations in isolated mountain and valley habitats in western North America, where the basin and range topography creates areas that are insular in nature, such as forested mountains surrounded by inhospitable desert, called sky islands. Such situations can serve as refuges for certain Pleistocene relicts, such as Townsend's pocket gopher, while at the same time creating barriers for biological dispersal. Studies have shown that such insular habitats have a tendency toward decreasing species richness. This observation has significant implications for conservation biology, because habitat fragmentation can also lead to the insularization of stranded populations.
So-called "relics of cultivation" are plant species that were grown in the past for various purposes (medicinal, food, dyes, etc.), but are no longer utilized. They are naturalized and can be found at archaeological sites.
See also
Living fossil
References
Biogeography | Relict (biology) | Biology | 1,549 |
32,590,824 | https://en.wikipedia.org/wiki/Continuous%20geometry | In mathematics, continuous geometry is an analogue of complex projective geometry introduced by , where instead of the dimension of a subspace being in a discrete set , it can be an element of the unit interval . Von Neumann was motivated by his discovery of von Neumann algebras with a dimension function taking a continuous range of dimensions, and the first example of a continuous geometry other than projective space was the projections of the hyperfinite type II factor.
Definition
Menger and Birkhoff gave axioms for projective geometry in terms of the lattice of linear subspaces of projective space. Von Neumann's axioms for continuous geometry are a weakened form of these axioms.
A continuous geometry is a lattice L with the following properties
L is modular.
L is complete.
The lattice operations ∧, ∨ satisfy a certain continuity property:
if A is a directed set and the elements aα increase (aα ≤ aβ whenever α ≤ β), then b ∧ (⋁α aα) = ⋁α (b ∧ aα), and the same condition holds with ∧ and ∨ reversed for directed decreasing families.
Every element in L has a complement (not necessarily unique). A complement of an element a is an element b with a ∧ b = 0 and a ∨ b = 1, where 0 and 1 are the minimal and maximal elements of L.
L is irreducible: this means that the only elements with unique complements are 0 and 1.
Examples
Finite-dimensional complex projective space, or rather its set of linear subspaces, is a continuous geometry, with dimensions taking values in the discrete set 0, 1/n, 2/n, ..., 1, where n is the dimension of the underlying vector space.
The projections of a finite type II von Neumann algebra form a continuous geometry with dimensions taking values in the unit interval [0,1].
Kaplansky showed that any orthocomplemented complete modular lattice is a continuous geometry.
If V is a vector space over a field (or division ring) F, then there is a natural map from the lattice PG(V) of subspaces of V to the lattice of subspaces of V ⊕ V (sending W to W ⊕ W) that multiplies dimensions by 2. So we can take a direct limit of the lattices of subspaces of F, F², F⁴, F⁸, ...
This has a dimension function taking as values all dyadic rationals between 0 and 1. Its completion is a continuous geometry containing elements of every dimension in [0,1]. This geometry was constructed by von Neumann, and is called the continuous geometry over F.
Dimension
This section summarizes some of the results of von Neumann's work on continuous geometry. These results are similar to, and were motivated by, von Neumann's work on projections in von Neumann algebras.
Two elements a and b of L are called perspective, written a ∼ b, if they have a common complement. This is an equivalence relation on L; the proof that it is transitive is quite hard.
The equivalence classes A, B, ... of L have a total order on them defined by A ≤ B if there is some a in A and b in B with a ≤ b. (This need not hold for all a in A and b in B.)
The dimension function D from L to the unit interval is defined as follows.
If equivalence classes A and B contain elements a and b with a ∧ b = 0 then their sum A + B is defined to be the equivalence class of a ∨ b. Otherwise the sum is not defined. For a positive integer n, the product nA is defined to be the sum of n copies of A, if this sum is defined.
For equivalence classes A and B with A not {0}, the integer [B : A] is defined to be the unique integer n ≥ 0 such that B = nA + C with C < A.
For equivalence classes A and B with A not {0}, the real number (B : A) is defined to be the limit of [B : C] / [A : C] as C runs through a minimal sequence: this means that either C contains a minimal nonzero element, or an infinite sequence of nonzero elements each of which is at most half the preceding one.
D(a) is defined to be ({a} : {1}), where {a} and {1} are the equivalence classes containing a and 1.
The image of D can be the whole unit interval, or the set of numbers 0, 1/n, 2/n, ..., 1 for some positive integer n. Two elements of L have the same image under D if and only if they are perspective, so it gives an injection from the equivalence classes to a subset of the unit interval. The dimension function D has the properties:
If a ≤ b then D(a) ≤ D(b).
D(a ∨ b) + D(a ∧ b) = D(a) + D(b).
D(a) = 0 if and only if a = 0, and D(a) = 1 if and only if a = 1.
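As an informal check (an illustration added here, not part of von Neumann's treatment as summarized above), these properties can be verified in the discrete example of the lattice of subspaces of a finite-dimensional vector space, taking D to be the normalized dimension. For L the subspace lattice of an n-dimensional vector space V, set D(a) = dim(a)/n. Monotonicity is immediate, and the valuation property reduces to the classical dimension formula for subspaces:
\[
D(a \vee b) + D(a \wedge b) = \frac{\dim(a+b) + \dim(a \cap b)}{n} = \frac{\dim a + \dim b}{n} = D(a) + D(b),
\]
with D(a) = 0 exactly when a = 0 and D(a) = 1 exactly when a = V.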
Coordinatization theorem
In projective geometry, the Veblen–Young theorem states that a projective geometry of dimension at least 3 is isomorphic to the projective geometry of a vector space over a division ring. This can be restated as saying that the subspaces in the projective geometry correspond to the principal right ideals of a matrix algebra over a division ring.
Von Neumann generalized this to continuous geometries, and more generally to complemented modular lattices, as follows. His theorem states that if a complemented modular lattice L has order at least 4, then the elements of L correspond to the principal right ideals of a von Neumann regular ring. More precisely, if the lattice has order n then the von Neumann regular ring can be taken to be an n by n matrix ring Mn(R) over another von Neumann regular ring R. Here a complemented modular lattice has order n if it has a homogeneous basis of n elements, where a basis is n elements a1, ..., an such that ai ∧ aj = 0 if i ≠ j, and a1 ∨ ... ∨ an = 1, and a basis is called homogeneous if any two elements are perspective. The order of a lattice need not be unique; for example, any lattice has order 1. The condition that the lattice has order at least 4 corresponds to the condition that the dimension is at least 3 in the Veblen–Young theorem, as a projective space has dimension at least 3 if and only if it has a set of at least 4 independent points.
Conversely, the principal right ideals of a von Neumann regular ring form a complemented modular lattice.
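A concrete instance (a standard illustration, added here): over a field F, the matrix ring Mn(F) is von Neumann regular, and a principal right ideal is determined by a column space, so the lattice of principal right ideals of Mn(F) is isomorphic to the lattice of subspaces of Fⁿ, i.e. a projective geometry:
\[
a\,M_n(F) = \{\, x \in M_n(F) : \operatorname{col}(x) \subseteq \operatorname{col}(a) \,\},
\]
where col denotes the column space; inclusion of principal right ideals corresponds exactly to inclusion of the associated subspaces of Fⁿ.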
Suppose that R is a von Neumann regular ring and L its lattice of principal right ideals, so that L is a complemented modular lattice. Von Neumann showed that L is a continuous geometry if and only if R is an irreducible complete rank ring.
References
Projective geometry
Von Neumann algebras
Lattice theory | Continuous geometry | Mathematics | 1,202 |
53,345,935 | https://en.wikipedia.org/wiki/Asynchronous%20procedure%20call | An asynchronous procedure call (APC) is a unit of work in a computer.
Definition
Procedure calls can be synchronous or asynchronous. Synchronous procedure calls are made in series on some thread, with each call waiting for the prior call to complete. APCs instead are made without waiting for prior calls to complete.
For example, if some data are not ready (for example, a program is waiting for a user to reply), then keeping a thread blocked while it waits is expensive, since the thread has consumed memory and potentially other resources.
Structure
An APC is typically formed as an object with a small amount of memory and this object is passed to a service which handles the wait interval, activating it when the appropriate event (e.g., user input) occurs.
The life cycle of an APC consists of two stages: the passive stage, when it passively waits for input data, and the active stage, when that data is processed in the same way as in an ordinary procedure call.
A reusable asynchronous procedure is termed an actor. In the actor model, two ports are used: one to receive input, and another (hidden) port to handle the input. In dataflow programming, many ports are used, with execution passed to a service once all inputs are present.
Implementations
In Windows, APC is a function that executes asynchronously in the context of a specific thread. APCs can be generated by the system (kernel-mode APCs) or by an application (user mode APCs).
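A minimal sketch of a user-mode APC in C follows, using the documented Win32 calls CreateThread, QueueUserAPC, and an alertable SleepEx wait; the names my_apc and worker and the payload value 42 are illustrative only, not part of the Windows API. The queued APC executes in the worker thread's context only once that thread enters an alertable wait state.

#include <windows.h>
#include <stdio.h>

/* The APC routine: runs in the context of the target thread,
   but only when that thread is in an alertable wait. */
static VOID CALLBACK my_apc(ULONG_PTR param)
{
    printf("APC ran with payload %lu\n", (unsigned long)param);
}

/* Worker thread: waits alertably so queued APCs can be delivered. */
static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    SleepEx(INFINITE, TRUE);  /* returns once an APC has executed */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    if (h == NULL)
        return 1;
    /* Queue the APC; it runs only when the worker becomes alertable. */
    QueueUserAPC(my_apc, h, (ULONG_PTR)42);
    WaitForSingleObject(h, INFINITE);  /* join the worker */
    CloseHandle(h);
    return 0;
}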
See also
Signal
References
Computer programming | Asynchronous procedure call | Technology,Engineering | 334 |
936,378 | https://en.wikipedia.org/wiki/Messier%2073 | Messier 73 (M73, also known as NGC 6994) is an asterism of four stars in the constellation Aquarius which was long thought to be a small open cluster. It lies several arcminutes east of globular cluster M72. According to Gaia EDR3, the stars are , , , and light-years from the Sun, with the second being a binary star.
History
M73 was discovered by Charles Messier in 1780 who originally described the object as a cluster of four stars with some nebulosity. Much later observations by John Herschel could not find any nebulosity. Moreover, Herschel noted that the designation of M73 as a cluster was questionable. Nonetheless, Herschel included M73 in his General Catalogue of clusters, nebulae, and galaxies, and John Dreyer included M73 when he compiled the New General Catalogue.
Relation between the stars
M73 was once treated as a potential sparsely populated open cluster, which consists of stars that are physically associated in space as well as on the sky. The question of whether the stars were an asterism or an open cluster was a matter of debate in the early 2000s.
In 2000, L. P. Bassino, S. Waldhausen, and R. E. Martinez published an analysis of the colors and luminosities of the stars in and around M73. They concluded that the four bright central stars and some other nearby stars followed the color-luminosity relation that is also followed by stars in open clusters (as seen in a Hertzsprung–Russell diagram). Their conclusion was that M73 was an old open cluster that was 9′ wide. However, G. Carraro published results in 2000 based on a similar analysis and concluded that the stars did not follow any color-luminosity relation. Carraro's conclusion was that M73 was an asterism. Adding to the controversy, E. Bica and collaborators concluded that the chance alignment of the four bright stars seen in the center of M73 as well as one other nearby star was highly unlikely, so M73 was probably a sparse open cluster. The controversy was resolved in 2002, when M. Odenkirchen and C. Soubiran published an analysis of the high resolution spectra of the six brightest stars within 6′ of the centre point. They demonstrated that the distances from the Earth to the six stars were very different from each other, and the stars were moving in different directions. Therefore, they concluded that the stars were only an asterism.
Although M73 was determined to be only a chance alignment of stars, further analysis of asterisms is still important for the identification of sparsely populated open clusters. A full study of very many such clusters would demonstrate how, how often, and to what degree open clusters are ripped apart by the gravitational forces in the Milky Way and reveal more of the sources of these forces.
Location
See also
List of Messier objects
Messier 40 - a double star included in the Messier catalogue that was also mistakenly identified as having nebulosity
Notes and references
External links
SEDS: Messier Object 73
Messier 73, LRGB CCD image based on two-hours total exposure
Messier 073
073
Orion–Cygnus Arm
Astronomical objects discovered in 1780
Discoveries by Charles Messier | Messier 73 | Astronomy | 706 |
64,190,059 | https://en.wikipedia.org/wiki/ViaGen%20Pets | ViaGen Pets, based in Cedar Park, Texas, is a division of TransOva Genetics, that offers animal cloning services to pet owners. ViaGen Pets division was launched in 2016.
ViaGen Pets offers cloning as well as DNA preservation services, sometimes called tissue or cell banking.
Technology and patents
ViaGen's subsidiary, Start Licensing, owns a cloning patent which, as of 2018, is licensed to its only competitor, which also offers animal cloning services.
The cloning process used by both ViaGen and their competitor is somatic cell nuclear transfer, the same as which was used for cloning Dolly the Sheep.
History
ViaGen Pets began by offering cloning to the livestock and equine industry in 2003, and later included cloning of cats and dogs in 2016.
References
Cloning
Companies based in Texas
Companies based in Cedar Park, Texas
American companies established in 2016
Biotechnology companies of the United States
External links
official website | ViaGen Pets | Engineering,Biology | 189 |
854,464 | https://en.wikipedia.org/wiki/Suina | Suina (also known as Suiformes) is a suborder of omnivorous, non-ruminant artiodactyl mammals that includes the domestic pig and peccaries. A member of this clade is known as a suine. Suina includes the family Suidae, termed suids, known in English as pigs or swine, as well as the family Tayassuidae, termed tayassuids or peccaries. Suines are largely native to Africa, South America, and Southeast Asia, with the exception of the wild boar, which is additionally native to Europe and Asia and introduced to North America and Australasia, including widespread use in farming of the domestic pig subspecies. Suines range in size from the 55 cm (22 in) long pygmy hog to the 210 cm (83 in) long giant forest hog, and are primarily found in forest, shrubland, and grassland biomes, though some can be found in deserts, wetlands, or coastal regions. Most species do not have population estimates, though approximately two billion domestic pigs are used in farming, while several species are considered endangered or critically endangered with populations as low as 100. One species, Heude's pig, is considered by the International Union for Conservation of Nature to have gone extinct in the 20th century.
Classification
The suborder Suina consists of 21 extant species in nine genera. These are split between the Suidae family, containing 18 species belonging to 6 genera, and the Tayassuidae family, containing 3 species in 3 genera. This does not include hybrid species such as boar–pig hybrids or extinct prehistoric species. Additionally, one species, Heude's pig, went extinct in the 20th century.
Family Suidae (Pigs)
Genus Babyrousa: four species
Genus Hylochoerus: one species
Genus Phacochoerus: two species
Genus Porcula: one species
Genus Potamochoerus: two species
Genus Sus: nine species
Family Tayassuidae (Peccaries)
Genus Catagonus: one species
Genus Dicotyles: one species
Genus Tayassu: one species
References
Taxa named by John Edward Gray
Mammal suborders
Artiofabula | Suina | Biology | 471 |
18,412,860 | https://en.wikipedia.org/wiki/Gliese%20445 | Gliese 445 (Gl 445 or AC +79 3888) is an M-type main sequence star in the northern part of the constellation Camelopardalis.
Location
Gliese 445 is currently 17.1 light-years from Earth and has an apparent magnitude of 10.8. It is visible all night long from locations north of the Tropic of Cancer, but not to the naked eye. Because the star is a red dwarf with a mass only a quarter to a third of that of the Sun, scientists question the ability of this system to support life. Gliese 445 is also a known X-ray source.
The Voyager 1 probe will pass within 1.6 light-years of Gliese 445 in about 40,000 years.
Solar encounter
While the Voyager probe moves through space towards a 1.6-light-year minimum distance from Gliese 445, the star is rapidly approaching the Sun. At the time the probe passes Gliese 445, the star will be about 1.059 parsecs (3.45 light-years) from the Sun, but with less than half the brightness necessary to be seen with the naked eye. At that time, Gliese 445 will be approximately tied with Ross 248 for being the closest star to the Sun (see List of nearest stars and brown dwarfs#Distant future and past encounters).
See also
Lists of stars
References
External links
Wikisky image of TYC 4553-192-1 (Gliese 445)
Camelopardalis
M-type main-sequence stars
057544
0445
Emission-line stars
TIC objects | Gliese 445 | Astronomy | 342 |
39,564,779 | https://en.wikipedia.org/wiki/Xiaomi%20Mi%202S | Xiaomi Mi2S (often referred to as Xiaomi Phone 2S, Chinese: 小米手机2s), is a high-end, Android smartphone produced by Xiaomi. The device features a quad-core 1.7 GHz Qualcomm Snapdragon 600 as its CPU.
Two variations of the Mi2S have been released, a 16 GB and a 32 GB model. In addition to the storage capacity, the back camera also differs: the 16 GB model features an 8-megapixel camera with an aperture of f/2.0, whereas the 32 GB model features a 13-megapixel camera with an aperture of f/2.2. Moreover, the lenses have a 35 mm equivalent focal length of 27 mm on the 16 GB model compared to 28 mm on the 32 GB model. The devices were initially sold in China for ¥1999 for the 16 GB model, and ¥2299 for the 32 GB model.
Specifications
Hardware
The casing of the Xiaomi Mi2S is mostly made from plastic, with SIM card slots located inside. The microUSB port is located at the bottom of the device with the audio jack located at the top of the device. The power and volume keys were located on the right side of device. Near the top of the device are a front-facing camera, proximity sensors, and a notification LED. In particular, the proximity sensors are mostly used to detect whether the device is in a pocket or not. The device is widely available in white, green, yellow, blue, red and pink color finishes. The device's display is larger than its predecessor, with a 4.3-inch, 720p IPS LCD capacitive touchscreen with a resolution of ~342 ppi, and Dragontrail glass.
The Mi2S is one of two variations of the Xiaomi Mi2 that Xiaomi created before the Xiaomi Mi3. The device comes with either 16 GB or 32 GB of internal storage. It contains a 2000 mAh battery.
Software
The Xiaomi Mi2S ships with Android and Xiaomi's MIUI user experience.
See also
Xiaomi
MIUI
Comparison of smartphones
References
External links
Mi 2S
Android (operating system) devices
Mobile phones introduced in 2013
Discontinued flagship smartphones
Mobile phones with user-replaceable battery | Xiaomi Mi 2S | Technology | 469 |
2,079,955 | https://en.wikipedia.org/wiki/Ponzo%20illusion | The Ponzo illusion is a geometrical-optical illusion that takes its name from the Italian psychologist Mario Ponzo (1882–1960). Ponzo never claimed to have discovered it, and it is indeed present in earlier work. Much confusion is present about this including many references to a paper that Ponzo published in 1911 on the Aristotle illusion. This is a tactile effect and it has nothing at all to do with what we now call the Ponzo illusion. The illusion can be demonstrated by drawing two identical lines across a pair of converging lines, similar to railway tracks, but the effect works also at different orientations.
One of the explanations for the Ponzo illusion is the "perspective hypothesis", which says that the perspective feature in the figure is produced by the converging lines ordinarily associated with distance; the two oblique lines appear to converge toward the horizon or a vanishing point. We interpret the upper line as though it were further away, so we see it as longer. A further object would have to be longer than a nearer one for both to produce retinal images of the same size.
Another explanation is the "framing-effects hypothesis", which says that the difference in the separation or gap of the horizontal lines from the framing converging lines may determine, or at least contribute to the magnitude of the distortion.
The Ponzo illusion is one possible explanation of the Moon illusion, as suggested by Ponzo in 1912. Objects appearing "far away" (because they are "on" the horizon) appear larger than objects "overhead". However, some have argued that explaining one perception ("appears far away") in terms of another ("appears bigger") is problematic scientifically, and there are probably complex internal processes behind these illusions.
The Ponzo illusion also occurs in touch and with an auditory-to-visual sensory-substitution device. However, prior visual experience seems necessary to perceive it, as demonstrated by the fact that congenitally blind subjects are not sensitive to it.
The Ponzo illusion has been used to demonstrate a dissociation between vision-for-perception and vision-for-action. Thus, the scaling of grasping movements directed towards objects embedded within a Ponzo illusion is not subject to the size illusion. In other words, the opening between the index finger and thumb is scaled to the real, not the apparent, size of the target object as the grasping hand approaches it.
Cross-cultural differences in susceptibility to the Ponzo illusion have been noted, with non-Western and rural people showing less susceptibility. Other recent research suggests that an individual's receptivity to this illusion, as well as the Ebbinghaus illusion, may be inversely correlated with the size of the individual's primary visual cortex.
References
Further reading
External links
An interactive illustration of the Ponzo illusion in Roger Shepard's Terror Subterra
Optical illusions | Ponzo illusion | Physics | 589 |
18,588,020 | https://en.wikipedia.org/wiki/Chill%20filtering | Chill filtering is a method in whisky making for removing residue. In chill filtering, whisky is cooled to a low temperature and passed through a fine adsorption filter. This is done mostly for cosmetic reasons, to remove cloudiness, though many whisky drinkers consider it to impair the taste by removing the details that differentiate the many distilleries. It is only necessary for whisky bottled below 46.3% alcohol by volume, as the cloudiness does not occur at or above this concentration.
Method
Chill filtering prevents the whisky from becoming hazy when in the bottle, when served, when chilled, or when water or ice is added, as well as precluding sedimentation from occurring in the bottles. It works by reducing the temperature sufficiently that some fatty acids, proteins and esters (created during the distillation process) precipitate out and are caught on the filter.
Factors affecting the chill filtering process include the temperature, number of filters used, and speed at which the whisky is passed through the filters. The slower the process and the more filters used, the more distillates will be collected, but at increasing cost.
This process generally impacts the taste of the whisky by, for example, removing peat particles that contribute to the complexity, subtlety and smokiness of the flavour. Some distilleries pride themselves on not using this process. Non-chill-filtered whisky is often advertised as being more "natural", "authentic", or "old-fashioned". For example, the Aberlour Distillery's distinctively flavored A'bunadh whisky, Laphroaig's Quarter Cask bottles, Kilchoman's Machir Bay, and all of Springbank distillery's whiskies are not chill-filtered and are advertised as such. There are also a number of specialist whisky suppliers, such as the Scotch Malt Whisky Society, that provide bottlings from a wide range of distilleries without chill filtering.
Chemistry
In unfiltered whiskies, chilling below a certain temperature can force some fatty acid esters out of suspension. In Scotch whisky these are usually agglomerations of ethyl dodecanoate and ethyl hexadecanoate. Chill filtering removes these as well as ethyl palmitoleate from the whisky, although complete removal of ethyl dodecanoate is not seen as desirable, as it tends to contribute positively to the character of a spirit.
References
Filtration techniques
Whisky | Chill filtering | Chemistry | 523 |
52,002,890 | https://en.wikipedia.org/wiki/Marie-Antoinette%20Tonnelat | Marie-Antoinette Tonnelat (née Baudot) (March 5, 1912 – December 3, 1980) was a French theoretical physicist. Her physics research focused on relativistic quantum mechanics under the influence of gravity. With the help of Albert Einstein and Erwin Schrödinger, she attempted to propose one of the first unified field theories. She is also known for her work on the history of special and general relativity.
Life
Early years and education
Marie-Antoinette Baudot was born on March 5, 1912, in Charolles, a commune in the southern Burgundy region of France. She began her education at the Lycée de Chalon-sur-Saône and finished her higher education at the Lycée Louis-le-Grand. Initially, she pursued engineering, but she eventually obtained two degrees, in the sciences and in philosophy, at the Sorbonne in Paris.
In 1935, Tonnelat began a doctorate in theoretical physics under Louis de Broglie at the Institut Henri Poincaré. In 1941, she finished her doctoral thesis, titled On the Theory of the Photon in a Riemannian Space. The same year she married Jacques Tonnelat, and in 1945 she went on to become a researcher at the French National Center for Scientific Research (CNRS).
Research with de Broglie
Her research focused on the field of relativistic spin-particles under the influence of a gravitational field. With de Broglie's neutrino theory of light, Tonnelat arrived at particles with maximal spin 2 from massive spin 1 particles, or photons. Spin 2 corresponded to the graviton. With her knowledge of the Klein-Gordon equation, Maxwell's equations, and the linearized version of the equation for Einstein spaces, she examined the theory for a particle with spin 2 and called it "a unitary formalism". She published a paper in the early 1940s in which she established the standard commutation relations for the quantized spin-2 field. De Broglie supported her research in unified field theory, but he himself stayed away from it and chose not to be directly involved with her studies.
Although her papers were eventually published by the French Academy of Sciences, her work was subject to delays due to an interruption caused by the German occupation of France in the early 1940s.
Professional career
Tonnelat spent much of her career as an educator. After the war, Tonnelat spent some time at the Dublin Institute for Advanced Studies with Erwin Schrödinger in order to further the research she had done under de Broglie earlier in her life. Once again, she examined the concept of a unitary formalism arising from spin-2 particles. Her time with Schrödinger deepened her interest in relativity theory and sparked her correspondence with Albert Einstein as well. Her goal was to create one unified theory using the concepts and ideas developed by Einstein and Schrödinger.
In 1953, shortly before Einstein's death, Marie-Antoinette Tonnelat was invited to Princeton University, and she spoke on the topic at the International Congress for the History of Science in Jerusalem. She gave many lectures throughout her career about her work related to the theory of relativity.
In 1956, she became a chair professor of physical theories at the Faculty of Science at the University of Paris. In parallel, she taught at the Institute of History of Science and Technology (directed by Gaston Bachelard) for twenty years.
In 1965, she published a second book on unified field theories that focused on the development of research in the field. Only one chapter of the work referred to her own research relating to Einstein and Schrödinger, but the book contained a few references to the doctoral theses that she had advised. Tonnelat's work was mainly concerned with establishing a connection between classical and quantum field theory. She also presented an alternative theory of gravitation (linear gravity), which she had studied in 1960.
In the 1960s, Tonnelat acted as a nominator for the Nobel Prize in Physics, proposing Louis Néel in 1960 and Alfred Kastler in 1965.
In 1980, Marie-Antoinette Tonnelat's deteriorating health made it difficult for her to continue giving her lectures. She died shortly after her last lecture.
She left an unpublished work about the history of theories of light and color.
Honors and awards
Tonnelat became a Peccot Lecturer and Laureate of the Collège de France in 1943; her lecture series was titled "Unitary theories of light and gravitation". From the French Academy of Sciences she received the Pierson–Perrin Prize (1946) and the Henri Poincaré Award (1971).
For her book History of the Principle of Relativity, Tonnelat received a prize of the Académie Française in 1972, awarded for outstanding publications.
Tonnelat was elected member of the International Academy of the History of Science in 1973.
References
1912 births
1980 deaths
French physicists
French theoretical physicists
Quantum physicists
French women scientists
French women physicists
20th-century French scientists
20th-century French women | Marie-Antoinette Tonnelat | Physics | 1,032 |
14,736,250 | https://en.wikipedia.org/wiki/Delannoy%20number | In mathematics, a Delannoy number counts the paths from the southwest corner (0, 0) of a rectangular grid to the northeast corner (m, n), using only single steps north, northeast, or east. The Delannoy numbers are named after French army officer and amateur mathematician Henri Delannoy.
The Delannoy number also counts the global alignments of two sequences of lengths and , the points in an m-dimensional integer lattice or cross polytope which are at most n steps from the origin, and, in cellular automata, the cells in an m-dimensional von Neumann neighborhood of radius n.
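The cross-polytope (von Neumann neighborhood) interpretation is easy to test directly for small cases. Below is a minimal Python sketch (the function names are illustrative, not from any source) that counts the integer points of an m-dimensional cross polytope of radius n and compares the count with the Delannoy number computed from the standard recurrence:

```python
from itertools import product

def cross_polytope_points(m, n):
    """Count integer points x in Z^m with |x_1| + ... + |x_m| <= n."""
    return sum(1 for x in product(range(-n, n + 1), repeat=m)
               if sum(map(abs, x)) <= n)

def delannoy(m, n):
    """Delannoy number via the standard recurrence (fine for small inputs)."""
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

assert cross_polytope_points(2, 2) == delannoy(2, 2) == 13
assert cross_polytope_points(3, 2) == delannoy(3, 2) == 25
```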
Example
The Delannoy number D(3, 3) equals 63: there are 63 Delannoy paths from (0, 0) to (3, 3).
The subset of paths that do not rise above the SW–NE diagonal are counted by a related family of numbers, the Schröder numbers.
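The count of 63 is easy to verify by brute force. The following minimal Python sketch (the function name count_paths is illustrative) explicitly enumerates every path built from east, north, and northeast unit steps:

```python
def count_paths(m, n):
    """Explicitly enumerate the lattice paths from (0, 0) to (m, n)
    that use only east (1,0), north (0,1), and northeast (1,1) steps."""
    def walk(x, y):
        if x == m and y == n:
            return 1
        total = 0
        if x < m:
            total += walk(x + 1, y)        # east step
        if y < n:
            total += walk(x, y + 1)        # north step
        if x < m and y < n:
            total += walk(x + 1, y + 1)    # northeast (diagonal) step
        return total
    return walk(0, 0)

print(count_paths(3, 3))  # prints 63
```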
Delannoy array
The Delannoy array is an infinite matrix of the Delannoy numbers:
{| class="wikitable" style="text-align:right;"
|-
!
! width="50" | 0
! width="50" | 1
! width="50" | 2
! width="50" | 3
! width="50" | 4
! width="50" | 5
! width="50" | 6
! width="50" | 7
! width="50" | 8
|-
! 0
| 1 || 1 || 1 || 1 || 1 || 1 || 1 || 1 || 1
|-
! 1
| 1 || 3 || 5 || 7 || 9 || 11 || 13 || 15 || 17
|-
! 2
| 1 || 5 || 13 || 25 || 41 || 61 || 85 || 113 || 145
|-
! 3
| 1 || 7 || 25 || 63 || 129 || 231 || 377 || 575 || 833
|-
! 4
| 1 || 9 || 41 || 129 || 321 || 681 || 1289 || 2241 || 3649
|-
! 5
| 1 || 11 || 61 || 231 || 681 || 1683 || 3653 || 7183 || 13073
|-
! 6
| 1 || 13 || 85 || 377 || 1289 || 3653 || 8989 || 19825 || 40081
|-
! 7
| 1 || 15 || 113 || 575 || 2241 || 7183 || 19825 || 48639 || 108545
|-
! 8
| 1 || 17 || 145 || 833 || 3649 || 13073 || 40081 || 108545 || 265729
|-
! 9
| 1 || 19 || 181 || 1159 || 5641 || 22363 || 75517 || 224143 || 598417
|}
In this array, the numbers in the first row are all one, the numbers in the second row are the odd numbers, the numbers in the third row are the centered square numbers, and the numbers in the fourth row are the centered octahedral numbers. Alternatively, the same numbers can be arranged in a triangular array resembling Pascal's triangle, also called the tribonacci triangle, in which each number is the sum of the three numbers above it:
1
1 1
1 3 1
1 5 5 1
1 7 13 7 1
1 9 25 25 9 1
1 11 41 63 41 11 1
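In this triangular arrangement, the "three numbers above" an interior entry T(n, k) are its two neighbors in the previous row together with the entry directly above it two rows up, so T(n, k) = T(n-1, k-1) + T(n-1, k) + T(n-2, k-1), with entries outside the triangle counting as zero. A minimal Python sketch (names are illustrative) that regenerates the rows shown above:

```python
def tribonacci_triangle(num_rows):
    """Rows of the tribonacci triangle (num_rows >= 1), where
    T(n, k) = T(n-1, k-1) + T(n-1, k) + T(n-2, k-1)
    and entries outside the triangle count as zero."""
    T = {(0, 0): 1}
    rows = [[1]]
    for n in range(1, num_rows):
        row = []
        for k in range(n + 1):
            val = (T.get((n - 1, k - 1), 0)
                   + T.get((n - 1, k), 0)
                   + T.get((n - 2, k - 1), 0))
            T[(n, k)] = val
            row.append(val)
        rows.append(row)
    return rows

for row in tribonacci_triangle(7):
    print(row)  # final row: [1, 11, 41, 63, 41, 11, 1]
```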
Central Delannoy numbers
The central Delannoy numbers D(n) = D(n, n) are the numbers for a square n × n grid. The first few central Delannoy numbers (starting with n = 0) are:
1, 3, 13, 63, 321, 1683, 8989, 48639, 265729, ... .
Computation
Delannoy numbers
For $k$ diagonal (i.e. northeast) steps, there must be $m-k$ steps in the $x$ direction and $n-k$ steps in the $y$ direction in order to reach the point $(m, n)$; as these steps can be performed in any order, the number of such paths is given by the multinomial coefficient
$$\binom{m+n-k}{k,\ m-k,\ n-k} = \frac{(m+n-k)!}{k!\,(m-k)!\,(n-k)!}.$$
Hence, one gets the closed-form expression
$$D(m,n) = \sum_{k=0}^{\min(m,n)} \binom{m+n-k}{m} \binom{m}{k}.$$
An alternative expression is given by
$$D(m,n) = \sum_{k=0}^{\min(m,n)} \binom{m}{k} \binom{n}{k}\, 2^k,$$
or by the infinite series
$$D(m,n) = \sum_{k=0}^{\infty} \frac{1}{2^{k+1}} \binom{k}{m} \binom{k}{n}.$$
And also
$$D(m,n) = \binom{m+n}{m}\, {}_2F_1(-m,\, -n;\, -m-n;\, -1),$$
where the hypergeometric function ${}_2F_1(a,b;c;z)$ is given with the series ${}_2F_1(a,b;c;z) = \sum_{k=0}^{\infty} \frac{(a)_k\,(b)_k}{(c)_k}\,\frac{z^k}{k!}$, $(q)_k$ denoting the rising factorial; here the series terminates because $-m$ and $-n$ are non-positive integers.
The basic recurrence relation for the Delannoy numbers is easily seen to be
$$D(m,n) = D(m-1,n) + D(m,n-1) + D(m-1,n-1),$$
with $D(0,0) = 1$ and $D(m,n) = 0$ when $m < 0$ or $n < 0$.
This recurrence relation also leads directly to the generating function
$$\sum_{m,n=0}^{\infty} D(m,n)\, x^m y^n = \frac{1}{1 - x - y - xy}.$$
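As a concrete cross-check of these formulas, here is a minimal Python sketch (function names are illustrative, not from any source) that builds the array from the basic recurrence and compares it against the closed-form sum:

```python
from math import comb

def delannoy_closed_form(m, n):
    """D(m, n) = sum over k of C(m+n-k, m) * C(m, k)."""
    return sum(comb(m + n - k, m) * comb(m, k) for k in range(min(m, n) + 1))

def delannoy_table(rows, cols):
    """Delannoy array via D(m, n) = D(m-1, n) + D(m, n-1) + D(m-1, n-1),
    with D(m, 0) = D(0, n) = 1."""
    D = [[1] * cols for _ in range(rows)]
    for m in range(1, rows):
        for n in range(1, cols):
            D[m][n] = D[m - 1][n] + D[m][n - 1] + D[m - 1][n - 1]
    return D

table = delannoy_table(4, 4)
assert all(table[m][n] == delannoy_closed_form(m, n)
           for m in range(4) for n in range(4))
print(table[3])  # [1, 7, 25, 63] -- matches the m = 3 row of the array above
```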
Central Delannoy numbers
Substituting $m = n$ in the first closed-form expression above, replacing $k \to n-k$, and a little algebra, gives
$$D(n) = \sum_{k=0}^{n} \binom{n}{k} \binom{n+k}{k},$$
while the second expression above yields
$$D(n) = \sum_{k=0}^{n} \binom{n}{k}^2 2^k.$$
The central Delannoy numbers also satisfy a three-term recurrence relation among themselves,
$$n\,D(n) = 3(2n-1)\,D(n-1) - (n-1)\,D(n-2),$$
and have a generating function
$$\sum_{n=0}^{\infty} D(n)\, x^n = \frac{1}{\sqrt{1 - 6x + x^2}}.$$
The leading asymptotic behavior of the central Delannoy numbers is given by
$$D(n) \sim \frac{c\,\alpha^n}{\sqrt{n}},$$
where $\alpha = 3 + 2\sqrt{2} \approx 5.828$ and $c = \bigl(4\pi(3\sqrt{2} - 4)\bigr)^{-1/2} \approx 0.5727$.
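The three-term recurrence gives a fast way to generate the sequence. A minimal Python sketch (illustrative names), cross-checked against the $2^k$ sum above and the values listed earlier:

```python
from math import comb

def central_delannoy(limit):
    """Central Delannoy numbers D(0..limit) via
    n*D(n) = 3*(2n - 1)*D(n-1) - (n - 1)*D(n-2)."""
    D = [1, 3]
    for n in range(2, limit + 1):
        # The division is exact because the recurrence holds over the integers.
        D.append((3 * (2 * n - 1) * D[n - 1] - (n - 1) * D[n - 2]) // n)
    return D[:limit + 1]

closed = [sum(comb(n, k) ** 2 * 2 ** k for k in range(n + 1)) for n in range(9)]
assert central_delannoy(8) == closed
print(closed)  # [1, 3, 13, 63, 321, 1683, 8989, 48639, 265729]
```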
See also
Motzkin number
Narayana number
Schröder–Hipparchus number
Schröder number
References
External links
Eponymous numbers in mathematics
Integer sequences
Triangles of numbers
Combinatorics | Delannoy number | Mathematics | 1,088 |
71,697,452 | https://en.wikipedia.org/wiki/Phageome | A phageome is a community of bacteriophages and their metagenomes localized in a particular environment, similar to a microbiome. Phageome is a subcategory of virome, which is all of the viruses that are associated with a host or environment. The term was first used in an article by Modi et al. in 2013 and has continued to be used in scientific articles that relate to bacteriophages and their metagenomes. A bacteriophage, or phage for short, is a virus that can infect bacteria and archaea, and can replicate inside of them. Phages make up the majority of most viromes and are currently understood to be the most abundant biological entities. Scientists often study only the phageome rather than the entire virome. Variations due to many factors, such as diet, age, and geography, have also been explored. The phageome has been studied in humans in connection with a wide range of disorders of the human body, including IBD, IBS, and colorectal cancer.
In humans
Although bacteriophages cannot infect human cells, they are found in abundance in the human virome. Phageome research in humans has largely focused on the gut; however, it is also being investigated in other areas such as the skin, blood, and mouth. The composition of phages that make up a healthy human gut phageome is currently debated, since different methods of research can lead to different results. At birth, the human phageome, and the overall virome in general, is almost non-existent. The human phageome is thought to be brought about in newborns through prophage induction of bacteria passed on from the mother vaginally during birth. However, phages can also be introduced through breastfeeding, as evidenced by studies finding near-exact matches of crAssphage sequences between mother and child. Variations in the human gut phageome continue across the lifespan. Siphoviridae and Myoviridae are the most abundant in infants and their numbers wane into childhood, whereas Crassvirales dominate in adults. The phageome can also experience changes as a result of diet, which can introduce new phages present in our foods. For example, in those with gluten-free diets, crAssphage were noted in higher abundance, along with decreases in the family Podoviridae. Global geographical differences in phageome composition have been noted, with further variation found within individuals living in rural and urban locations. For instance, residents in Hong Kong, China were found to have fewer phages associated with targeting pathogenic bacteria than those in Yunnan province. Furthermore, residing for longer periods of time in urban regions correlated with increases of Lactobacillus and Lactococcus phages.
In disease
Changes in the phageome have been seen in various disorders affecting the human body. In the gut, unique changes in the phageome have been described in both inflammatory bowel disease and irritable bowel syndrome. Even further specific changes exist in subtypes of the two disorders. The IBS subtypes IBS-D and IBS-C saw increases in different species belonging to Microviridae and Myoviridae. In ulcerative colitis and Crohn's disease, which are subtypes of IBD, differences in levels of Caudovirales richness and species have been found. Furthermore, phages that target Acinetobacter have been found in the blood of patients with Crohn's disease. This is thought to occur due to the compromised, inflamed gut barrier allowing for bacteriophage transfer. In the mouth, periodontitis has been associated with Myoviridae residing under the gums along with a currently unspecified bacteriophage in the Siphoviridae family. Phageome changes have also been described in metabolic disorders including type-1 diabetes, type-2 diabetes and metabolic syndrome. In type-1 diabetes, overall shifts have been seen in Myoviridae and Podoviridae. The genomes of bacteriophages residing in the gut of type-2 diabetes patients have been shown to contain numerous genes implicated in disease development. Total phage representation in the virome is higher in individuals with cardiovascular disease than in healthy controls, at 63% and 18% respectively. Lastly, researchers studying colorectal cancer have observed increased richness in a variety of phage genera, with the most notable differences seen in Inovirus and Tunalikevirus.
See also
Virosphere
References
Microbiology
Bacteriophages
Wikipedia Student Program
Microbiomes | Phageome | Chemistry,Biology,Environmental_science | 978 |
6,782,238 | https://en.wikipedia.org/wiki/Bob%20Pease | Robert Allen Pease (August 22, 1940 – June 18, 2011) was an electronics engineer known for analog integrated circuit (IC) design, and as the author of technical books and articles about electronic design. He designed several very successful "best-seller" ICs, many of them in continuous production for multiple decades. These include the LM331 voltage-to-frequency converter and the LM337 adjustable negative voltage regulator (complement to the LM317).
Life and career
Pease was born on August 22, 1940, in Rockville, Connecticut. He attended Northfield Mount Hermon School in Massachusetts, and subsequently obtained a Bachelor of Science in Electrical Engineering (BSEE) degree from Massachusetts Institute of Technology in 1961.
He started work in the early 1960s at George A. Philbrick Researches (GAP-R). GAP-R pioneered the first reasonable-cost, mass-produced operational amplifier (op-amp), the K2-W. At GAP-R, Pease developed many high-performance op-amps, built with discrete solid-state components.
In 1976, Pease moved to National Semiconductor Corporation (NSC) as a Design and Applications Engineer, where he began designing analog monolithic ICs, as well as design reference circuits using these devices. He had advanced to Staff Engineer by the time of his departure in 2009. During his tenure at NSC, he began writing a popular continuing monthly column called "Pease Porridge" in Electronic Design about his experiences in the world of electronic design and application.
The last project Pease worked on was the THOR-LVX (photo-nuclear) microtron Advanced Explosives contraband Detection System: "A Dual-Purpose Ion-Accelerator for Nuclear-Reaction-Based Explosives-and SNM-Detection in Massive Cargo".
Pease was the author of eight books, including Troubleshooting Analog Circuits, and he held 21 patents. Although his name was listed as "Robert A. Pease" in formal documents, he preferred to be called "Bob Pease" or to use his initials "RAP" in his magazine columns.
His other interests included hiking and biking in remote places, and working on his old Volkswagen Beetle, which he often mentioned in his columns. Pease's writing was "strongly opinionated, but he could communicate with a wry sense of humor that endeared him to readers whether they agreed with him or not".
Death
Pease was killed in the crash of his 1969 Volkswagen Beetle, on June 18, 2011. He was leaving a gathering in memory of Jim Williams, who was another well-known analog circuit designer, a technical author, and a renowned staff engineer working at Linear Technology. Pease was 70 years old, and was survived by his wife, two sons, and three grandchildren. The sudden death of Pease triggered a small flood of remembrances and tributes from fellow technical writers, practicing engineers, and electronics hardware hacking enthusiasts.
Pease was known not only for his design skills but also for his messy office. One of his early offices at National won a newspaper contest for messiest desk. Nancy (his wife) recollects, “It was a San Jose Mercury News messiest desk contest. Someone entered a picture of his office on his behalf, and asked him if he won a big prize would he share it. Bob didn’t know what the prize was at the time. The competition was in no way up to his entry, so they gave him 1st, 2nd, and 3rd prizes. The prize was for office furniture. Bob sold it to National and threw a pizza party with the money.”
Publications (partial)
Books
Troubleshooting Analog Circuits – an industry-standard bench-top reference book for troubleshooting (and designing) analog circuits
Journals
What’s All This Widlar Stuff, Anyhow? – An article about the then-recently-deceased op-amp designer Bob Widlar, written by Bob Pease in Electronic Design; re-published on Jun 29, 2012; first published on July 25, 1991
See also
Jim Williams – analog circuit designer, technical author, colleague of Bob Pease
Bob Widlar – pioneering analog integrated circuit designer, technical author, colleague at National Semiconductor Corporation, early contractor to Linear Technology Corporation
References
External links
Bob Pease articles at elecdesign.com
Bob Pease articles at electronicdesign.com
The philbrick archive
Remembering Bob Pease memorials, Bob's National Semiconductor archive, Lab Notes 2005, and more
Bob Pease Interview at EEWeb
American electrical engineers
Analog electronics engineers
Integrated circuits
MIT School of Engineering alumni
1940 births
2011 deaths
Road incident deaths in California
People from Rockville, Connecticut
American technology writers
Engineers from Connecticut
Northfield Mount Hermon School alumni | Bob Pease | Technology,Engineering | 972 |
2,916,857 | https://en.wikipedia.org/wiki/54%20Cancri | 54 Cancri is a star in the zodiac constellation of Cancer. It has an apparent visual magnitude of 6.38, which places it just below the normal brightness limit of stars visible to the naked eye. The annual parallax shift is 24.79 mas as measured from Earth's orbit, which yields a distance estimate of about 132 light years. It is moving away from the Sun with a radial velocity of +45 km/s.
Measurement of the star's proper motion over time suggests changes due to an acceleration component, which may indicate it is a close binary system. The visible component has a stellar classification of G5 V, indicating it is an ordinary G-type main-sequence star that is generating energy through hydrogen fusion in its core region. Hall et al. (2007) classify it as a low-activity variable star. The star is about five billion years old with a projected rotational velocity of 3.1 km/s. It has 1.23 times the mass of the Sun and 1.81 times the Sun's radius.
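As a quick check of the quoted distance, using the standard parallax relation d [pc] = 1 / p [arcsec] (the values below simply restate the article's figures; the conversion factor is the usual 1 pc ≈ 3.2616 light years):

```python
parallax_mas = 24.79                  # annual parallax from the article, in milliarcseconds
distance_pc = 1000.0 / parallax_mas   # d [pc] = 1 / p [arcsec]; 1000 mas = 1 arcsec
distance_ly = distance_pc * 3.2616    # 1 parsec is about 3.2616 light years

print(f"{distance_pc:.1f} pc ~= {distance_ly:.0f} light years")  # 40.3 pc ~= 132 light years
```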
References
G-type main-sequence stars
Cancer (constellation)
Durchmusterung objects
Cancri, 54
075528
043454
3510 | 54 Cancri | Astronomy | 245 |
13,814,824 | https://en.wikipedia.org/wiki/L%C3%BCbke%20English | The term Lübke English (or, in German, Lübke-Englisch) refers to nonsensical English created by literal word-by-word translation of German phrases, disregarding differences between the languages in syntax and meaning.
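The mechanism is easy to parody in code. Below is a toy Python sketch (the mini-dictionary and function name are purely illustrative) that translates each word in isolation, disregarding syntax and idiom, using the often-cited example sentence "Gleich geht es los" ("it starts in a moment"):

```python
# Hypothetical mini-dictionary mapping each German word to one literal gloss.
word_map = {
    "gleich": "equal",   # here it actually means "in a moment"
    "geht": "goes",
    "es": "it",
    "los": "loose",      # here it is part of "losgehen" (to start)
}

def luebke_english(sentence):
    """Translate each German word in isolation, ignoring syntax and idiom."""
    return " ".join(word_map.get(w.lower(), w) for w in sentence.split())

print(luebke_english("Gleich geht es los"))  # "equal goes it loose"
```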
Lübke English is named after Heinrich Lübke, a president of Germany in the 1960s, whose limited English made him a target of German humorists.
In 2006, the German magazine konkret revealed that most of the statements ascribed to Lübke were in fact invented by the editorial staff of Der Spiegel, mainly by staff writer Ernst Goyke, and in subsequent letters to the editor.
In the 1980s, comedian Otto Waalkes had a routine called "English for Runaways", which is a nonsensical literal translation of Englisch für Fortgeschrittene (actually an idiom for 'English for advanced speakers' in German – note that fortschreiten divides into fort, meaning "away" or "forward", and schreiten, meaning "to walk in steps"). In this mock "course", he translates every sentence back or forth between English and German at least once (usually from German literally into English). Though there are also other, more complex language puns, the title of this routine has gradually replaced the term Lübke English when a German speaker wants to point out naive literal translations.
See also
Fromlostiano, a similar translation from Spanish into English
References
Macaronic forms of English
German language
Machine translation
Syntax
Heinrich Lübke | Lübke English | Technology | 322 |
55,940,145 | https://en.wikipedia.org/wiki/Climate%20restoration | Climate restoration is the climate change goal, and the associated actions, of restoring atmospheric CO2 to levels humans have actually survived long-term, below 300 ppm. This would restore the Earth system generally to a safe state, for the well-being of future generations of humanity and nature. Actions include carbon dioxide removal from Earth's atmosphere, which, in combination with emissions reductions, would reduce the level of CO2 in the atmosphere and thereby reduce the global warming produced by the greenhouse effect of an excess of CO2 over its pre-industrial level. Actions also include restoring pre-industrial atmospheric methane levels by accelerating natural methane oxidation.
Climate restoration enhances legacy climate goals (stabilizing Earth's climate) to include ensuring the survival of humanity by restoring CO2 to the levels of the last 6,000 years that allowed agriculture and civilization to develop.
Restoration and mitigation
Climate restoration is the goal underlying climate change mitigation, whose actions are intended to "limit the magnitude or rate of long-term climate change". Advocates of climate restoration accept that climate change has already had major negative impacts which threaten the long-term survival of humanity. The current mitigation pathway leaves the risk that conditions will move beyond what adaptation can cope with and that abrupt climate change will follow. There is a human moral imperative to maximize the chances of future generations' survival. By promoting the vision of the "survival and flourishing of humanity", with the Earth System restored to a state close to that in which our species and civilization evolved, advocates claim that there is a huge incentive for innovation and investment to ensure that this restoration takes place safely and in a timely fashion. As stated in "The Economist" in November 2017, "in any realistic scenario, emissions cannot be cut fast enough to keep the total stock of greenhouse gases sufficiently small to limit the rise in temperature successfully. But there is barely any public discussion of how to bring about the extra "negative emissions" needed to reduce the stock of CO2 ... Unless that changes, the promise of limiting the harm of climate change is almost certain to be broken."
Climate restoration as a policy goal
A first peer-reviewed article about climate restoration was published in April 2018 by the Rand Corporation.
The analysis "examines climate restoration through the lens of risk management under conditions of deep uncertainty, exploring the technology, economic, and policy conditions under which it might be possible to achieve various climate restoration goals and the conditions under which society might be better off with (rather than without) a climate restoration goal." One key finding of the study is that it would be possible to restore the atmospheric concentrations to preindustrial levels at an acceptable cost under two scenarios, where greenhouse gas reductions and direct air capture (DAC) technologies prove to be economically efficient. One example is Carbon Engineering, a Canadian-based clean energy company focussing on the commercialization of Direct Air Capture (DAC) technology that captures carbon dioxide () directly from the atmosphere.
One key recommendation of the Rand Corporation study is that an ambitious climate restoration goal may seek to achieve preindustrial concentration by 2075, or by the end of the century. It concludes that "The best we can do is pursue climate restoration with a passion while embedding it in a process of testing, experimentation, correction, and discovery."
On September 25, 2018, Rep. Jamie Raskin introduced a resolution on Climate Restoration to the U.S. House Committee on Energy and Commerce, concluding with "Whereas scientists have researched methods for keeping warming below 2°C, but have not yet researched the best methods to remove all excess CO2, stop sea-level rise, and restore a safe and healthy climate for future generations; and whereas declaring a goal of restoring a safe and healthy climate will encourage scientists to research the most effective ways to restore safe CO2 levels, stop sea-level rise, and restore a safe and healthy climate for future generations." This was followed by the Congressional Climate Emergency Resolutions (S.Con.Res.22, H.Con.Res.52), which "demands a national, social, industrial, and economic mobilization of the resources and labor of the United States at a massive-scale to halt, reverse, mitigate, and prepare for the consequences of the climate emergency and to restore the climate for future generations...."
On August 23, 2023, the California Senate passed SR-34, the nation's first resolution to explicitly recognize climate restoration as a policy priority It concludes: "WHEREAS, Climate restoration will benefit the people of the State of California by reducing losses and damage from wildfires, while producing positive effects on human and ecosystem health, industry, and jobs in agriculture and other sectors; now, therefore, be it resolved by the Senate of the State of California, That the Senate formally recognizes the obligation to future generations to restore a safe climate, and declares climate restoration, along with achieving net-zero and net-negative CO2 emissions, a climate policy priority; and be it further resolved, That the Senate calls on the State Air Resources Board to engage necessary federal entities as appropriate to urge the United States Ambassador to the United Nations to propose a climate treaty that would restore and stabilize GHG levels as our common climate goal."
Critical parameters
The endpoint goal of climate restoration is, broadly, to maximize the probability of survival of our species and civilization by restoring atmospheric CO2 levels. The approximate target levels are those of the Holocene norm in which our species and civilization most recently evolved. That is stated technically as "pre-industrial", or poetically as "like our grandparents had a hundred years ago". Numerically the goal is stated as getting atmospheric CO2 back below the highest levels humans have actually survived long-term, 300 ppm, by 2050. Achieving this will require permanently removing approximately a trillion tonnes of atmospheric CO2.
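A back-of-envelope check of the trillion-tonne figure, assuming the commonly cited conversion of roughly 7.8 gigatonnes of CO2 per ppm of atmospheric concentration (an approximation, not a value stated in this article):

```python
current_ppm = 420.0       # approximate atmospheric CO2 level cited for 2022
target_ppm = 300.0        # upper bound of the restoration target
gt_co2_per_ppm = 7.8      # assumed conversion: ~7.8 Gt CO2 per ppm (~2.13 Gt carbon)

removal_gt = (current_ppm - target_ppm) * gt_co2_per_ppm
print(f"~{removal_gt:.0f} Gt CO2, i.e. roughly {removal_gt / 1000:.2f} trillion tonnes")
# -> ~936 Gt CO2, consistent with "approximately a trillion tonnes"
```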
Critical parameters of the Earth System include:
levels of climate forcing agents in the atmosphere, especially CO2 and methane for positive forcing and aerosols for negative forcing;
global mean surface temperature (compared to some baseline) and its rate of increase;
sea level and the rate that sea level is rising;
ocean pH and the rate of ocean acidification;
ice levels of the polar ice caps.
One of the principal goals for climate restoration is to bring the CO2 level down from its current level of ~420 ppm (2022) towards its pre-industrial level of ~280 ppm. Not only will this reduce CO2's global warming effect but also its effect on ocean acidification. The removed carbon would be sequestered or used as a construction material.
Climate restoration open letter
On November 13, 2020, an open letter, put together by the youth organisation Worldward, calling for climate restoration was published in the Guardian newspaper. The letter was signed by prominent scientists and activists, including: Michael E Mann, Dr James Hansen, George Monbiot, Hindou Oumarou Ibrahim, Dr Rowan Williams, Bella Lack, Will Attenborough, Mark Lynas, Chloe Ardijis, Dr Shahrar Ali, and many more. After its publication, the letter was opened up to general signatories, and the signatories published on Worldward's website.
Climate Restoration publications
White Paper
On September 17, 2019, the Foundation for Climate Restoration published a White Paper on existing Climate Restoration solutions and developing technologies. These solutions and technologies include proven, commercially viable projects, such as creating synthetic rock from carbon captured in the air for use in construction and paving, as well as emerging methods for removing and storing carbon and for restoring oceans and fisheries. The White Paper also discusses Climate Restoration strategy and costs. A main goal of the Foundation for Climate Restoration is the reduction of atmospheric CO2 to below 300 ppm (i.e. near its pre-industrial level) by 2050.
Climate Restoration: The Only Future That Will Sustain the Human Race
Authored by Peter Fiekowsky and Carole Douglis, this book was published on April 21, 2022. It describes, among other things, the criteria for climate restoration: permanence, so the CO2 stays out of the atmosphere for at least 100 years; scalability, meaning the method must be able to remove at least 25 billion tons of CO2 a year; and financial viability, meaning funding for at-scale carbon removal must be in place. It then describes four solutions that appear to fit the criteria: a) ocean fertilization; b) synthetic limestone; c) seaweed; d) enhanced atmospheric methane oxidation using iron chloride. It claims that the required technologies and finance are now in place to restore the climate. Scale-up now requires that the restoration goal be endorsed by the UN and large NGOs so that investors and governments can justify funding the projects. Because the projects are commercially self-funding, initial investments of only $2 billion per year through 2030 are estimated to be required globally.
Limitations
Not every aspect of the Earth System can be returned to a previous state: notably the warming of the deep sea or deep ocean and the associated sea level rise which has already taken place may be essentially irreversible this century. Conversely, there are certain aspects of the Earth System that need to be improved with respect to the recent past: notably food productivity, considering an increased global population by 2050 or 2100.
Key organisations
Worldward
Foundation for Climate Restoration
Global Coalition for Climate Restoration
The Climate Foundation
References
Climate engineering
Climate change policy
Ecological restoration
Environmental terminology
Planetary engineering | Climate restoration | Chemistry,Engineering | 1,881 |
16,063,325 | https://en.wikipedia.org/wiki/Peter%20B.%20Armentrout | Peter B. Armentrout (born 1953) is a researcher in thermochemistry, kinetics, and dynamics of simple and complex chemical reactions. He is a Chemistry Professor at the University of Utah.
Career
Armentrout received his B.S. degree from Case Western Reserve University in 1975 and earned his Ph.D. from the California Institute of Technology in 1980. During these studies he determined that much of the published information on thermodynamic states was not reliable, or was presented in differing formats. When he became a research professor he used this frustration as motivation to invent and construct the guided ion-beam tandem mass spectrometer, which provided highly accurate thermodynamic measurements. With this instrument in hand, he went on to invent or improve tools to analyze those measurements, including advanced computer algorithms. He has published much data on the properties of transition metals, and has worked most recently on the thermodynamic properties of biological systems.
Awards
1984–1989 National Science Foundation Presidential Young Investigator Award
2001 Biemann Medal
Case Western Chemistry Department - Outstanding Alumnus of the Year
American Chemical Society Utah Section - Award of Chemistry
Member of Phi Kappa Phi Honor Society
2009 American Chemical Society – Award for Outstanding Achievement in Mass Spectrometry
References
21st-century American chemists
Mass spectrometrists
California Institute of Technology alumni
Case Western Reserve University alumni
Living people
University of Utah faculty
1953 births
Fellows of the American Physical Society | Peter B. Armentrout | Physics,Chemistry | 296 |
27,754,656 | https://en.wikipedia.org/wiki/Clinical%20and%20Translational%20Science%20Award | Clinical and Translational Science Award (CTSA) is a type of U.S. federal grant administered by the National Center for Advancing Translational Sciences, part of the National Institutes of Health. The CTSA program began in October 2006 under the auspices of the National Center for Research Resources with a consortium of 12 academic health centers. The program was fully implemented in 2012, comprising 60 grantee institutions and their partners.
Program overview
The CTSA program helps institutions create an integrated academic home for clinical and translational science with the resources to support researchers and research teams working to apply new knowledge and techniques to patient care. The program is structured to encourage collaborations among researchers from different scientific fields.
The CTSA program has raised awareness of clinical and translational science as a discipline among academic and industry researchers, philanthropists, government officials and the broader public.
Strategic goals
CTSA consortium leaders have set five broad goals to guide their activities. These include building national clinical and translational research capability, providing training and improving career development of clinical and translational scientists, enhancing consortium-wide collaborations, improving the health of U.S. communities and the nation, and advancing T1 translational research to move basic laboratory discoveries and knowledge into clinical testing.
Selected research areas
Institutions funded by the CTSA program are working with other research facilities to improve drug discovery and development. For example, several consortium institutions are collaborating with the Rat Resource and Research Center at the University of Missouri to increase the speed of drug screening so that drug research is translated into clinical uses more quickly.
Consortium institutions also are creating new fields of study or new uses for technologies. For example, researchers at the University of Rochester are pioneering the field of lipidomics, exploring how lipids affect human disease. Their work has led to lipid research collaborations among experts in community and preventive medicine, proteomics, nutrition, and pharmaceutical research.
Some CTSA institutions are collaborating with community-based organizations to ensure research is translated successfully into clinical practice. Researchers at Duke University are working to prevent strokes by partnering with a local health care program to build stroke awareness among Latino immigrants.
Others are pursuing public and private partnerships to speed innovation. For example, the Oregon Health and Science University and Intel are developing new wireless devices with sensors to detect symptoms in patients who have diabetes or those at high risk of stroke so they can be treated earlier.
Participating institutions
With the most recent awards, announced in July 2011, the consortium comprises 60 institutions in 30 states and the District of Columbia. These include:
Albert Einstein College of Medicine (partnering with Montefiore Medical Center)
Boston University
Case Western Reserve University
Columbia University
Children's National Medical Center
Emory University (partnering with Morehouse School of Medicine, Georgia Institute of Technology, and University of Georgia)
Duke University
Georgetown University with Howard University
Harvard University
Indiana University School of Medicine (partnering with Purdue University and University of Notre Dame)
Johns Hopkins University
Mayo Clinic
Medical College of Wisconsin
Medical University of South Carolina
Mount Sinai School of Medicine
New York University School of Medicine
Northwestern University
Ohio State University
Oregon Health & Science University
Penn State Milton S. Hershey Medical Center
Rockefeller University
Rutgers University
Scripps Research Institute
Stanford University
Tufts University
University of Alabama at Birmingham
University of Arkansas for Medical Sciences
University of California, Davis
University of California, Irvine
University of California Los Angeles
University of California, San Diego
University of California, San Francisco
University of Chicago
University of Cincinnati
University of Colorado Denver
University of Florida
University of Kansas Medical Center
University of Kentucky Research Foundations
University of Massachusetts, Worcester
University of Michigan
University of New Mexico Health Sciences
University of Minnesota
University of North Carolina at Chapel Hill
University of Illinois at Chicago
University of Iowa
University of Pennsylvania
University of Pittsburgh
University of Rochester School of Medicine and Dentistry
University of Southern California
University of Texas Health Science Center at Houston
University of Texas Health Science Center at San Antonio
University of Texas Medical Branch
University of Texas Southwestern Medical Center at Dallas
University of Utah
University of Washington
University of Wisconsin – Madison
Vanderbilt University – CTSA Coordinating Center (partnering with Meharry Medical College)
Virginia Commonwealth University
Washington University in St. Louis
Weill Cornell Medical College (partnering with Hunter College)
Yale University
Investigations by the Department of Health and Human Services Office of Inspector General (OIG)
On 20 December 2011, the OIG published a report critical of the NIH's administration of the Clinical and Translational Science Awards (CTSA) program.
See also
List of medicine awards
References
External links
Clinical and Translational Science Awards: Advancing Scientific Discoveries Nationwide to Improve Health
CTSA consortium
National Center for Advancing Translational Sciences
Research
Medicine awards
National Institutes of Health | Clinical and Translational Science Award | Technology | 923 |
6,324,689 | https://en.wikipedia.org/wiki/IQVIA | IQVIA, formerly Quintiles and IMS Health, Inc., is an American Fortune 500 and S&P 500 multinational company serving the combined industries of health information technology and clinical research. IQVIA is a provider of biopharmaceutical development, professional consulting and commercial outsourcing services, focused primarily on Phase I-IV clinical trials and associated laboratory and analytical services, including investment strategy and management consulting services. It has a network of more than 88,000 employees in more than 100 countries and a market capitalization of US$49 billion as of August 2021. As of 2023, IQVIA was reported to be one of the world's largest contract research organizations (CROs).
History
IQVIA is the result of the 2016 merger of Quintiles, a leading global contract research organization, and IMS Health, a leading healthcare data and analytics provider. The name of the modern company honors the legacy organizations: I (IMS Health), Q (Quintiles), and VIA (by way of).
IMS Health
IMS Health was best known for its collection of healthcare information spanning sales, de-identified prescription data, medical claims, electronic medical records and social media. IMS Health's products and services were used by companies to develop commercialization plans and portfolio strategies, to select patient and physician populations for specific therapies, and to measure the effectiveness of pharmaceutical marketing and sales resources. The firm used its data to produce syndicated reports such as market forecasts and market intelligence.
The original name of the company was Intercontinental Marketing Statistics, hence the IMS name. IMS Health's corporate headquarters were located in Danbury, Connecticut, in the United States. Ari Bousbib was the chairman and CEO of IMS Health before the merger.
In 1998, the parent company, Cognizant Corporation, split into two companies: IMS Health and Nielsen Media Research. After this restructuring, Cognizant Technology Solutions became a public subsidiary of IMS Health.
In 2002, IMS Health acquired Cambridge Pharma Consultancy, a privately held international firm that provides strategic advice to pharmaceutical management.
In 2002, IMS Health acquired the Rosenblatt Klauber Group, a privately held international consultancy that provides forecasting, opportunity assessment & management development services to pharmaceutical companies.
In 2003, IMS Health acquired Marketing Initiatives, a specialist in healthcare facility profile data, and Data Niche Associates, a provider of rebate validation services for Medicaid and managed care. The same year, IMS Health sold its entire 56% stake in Cognizant, and the two companies were separated into independent entities, IMS Health and Cognizant.
In 2004, United Research China Shanghai was acquired, providing coverage of China's consumer health market.
In 2005, it acquired PharMetrics, a U.S. provider of patient-centric integrated claims data.
In 2006, it acquired the Life Sciences practice of Strategic Decisions Group, a portfolio strategy consultant to the life sciences industry.
In 2007, IMS Health acquired IHS and MedInitiatives, providers of healthcare data management analytics and technology services. That same year, ValueMedics Research was acquired, extending IMS Health's health economics and outcomes research capabilities.
In 2007, the company was ranked in the BusinessWeek 50, a list representing "best in class" companies from the ten economic sectors that make up the S&P 500.
In 2008, the company was named to the World's Most Admired Companies list by Fortune. It received the recognition again in 2010.
In 2008, it acquired RMBC, a provider of national pharmaceutical market intelligence and analytics in Russia.
In 2008, it acquired the Skura professional services group, based in Mississauga, Ontario, Canada, which specialized in data integration, consulting, and business intelligence services for pharmaceutical and healthcare clients in North America and Europe.
In 2009, the company was named to the Dow Jones Sustainability North America Index in recognition of its economic, environmental and social performance among the largest 600 North American companies.
In February 2010, IMS Health was taken private by TPG Capital, CPP Investment Board, and Leonard Green & Partners.
In 2010, it acquired Brogan, Inc., a privately held market research and consulting firm serving the Canadian healthcare market.
In 2011, the company expanded its specialty and patient-level data assets in the United States with the acquisition of SDI Health. Also that year, it acquired Ardentia Ltd in the UK and Med-Vantage in the United States to build on its payer services in those markets.
In 2012, it acquired PharmARC Analytic Solutions Pvt. Ltd, a Bangalore-based analytics company.
In 2012, it acquired DecisionView, a software solutions company that helps life sciences organizations plan and track patient enrollment for clinical trials, and TTC, a benchmarking and analytics company that helps clients plan for and negotiate the costs of clinical trials. Also in 2012, the company purchased PharmaDeals Ltd.
In 2013, it acquired several companies to expand its portfolio of SaaS products: Incential Software, a provider of sales performance management technology services; 360 Vantage, which delivers multi-channel CRM software capabilities; Appature, which offers a relationship marketing platform; and Semantelli, a provider of social media analytics for the global healthcare industry.
In May 2015, IMS increased its software development capability by acquiring Dataline Software Ltd, a bespoke software development company and big data research specialist in the UK.
In April 2015, IMS Health completed the purchase of Cegedim's Customer Relationship Management (CRM) software and Strategic Data business for €396 million. Cegedim acquired the software and related business when it purchased Dendrite International in 2007.
In August 2015, IMS Health completed the purchase of Boston Biomedical Consultants, a provider of market data and market research covering the in vitro diagnostics market.
Quintiles
Quintiles was the world's largest provider of biopharmaceutical development and commercial outsourcing services. The company offered clinical data management, clinical trial execution services, pharmaceuticals, drug development, financial partnering, and commercialization expertise to companies in the biotechnology, pharmaceutical and healthcare sectors.
In 1982, Dennis Gillings founded and incorporated Quintiles Transnational in North Carolina. Quintiles Transnational established Quintiles Pacific Inc. and Quintiles Ireland Ltd. in 1990. In 1991 Quintiles GmbH was established in Germany and Quintiles Laboratories Ltd. was established in Atlanta, Georgia. In September 1996, Quintiles purchased Innovex Ltd. of Britain for $747.5 million in stock. Quintiles went public in 1997 and completed a successful secondary stock offering.
In 1974, Dennis Gillings signs the first contract to provide statistical and data management consulting for pharmaceutical clients.
In 1982, Quintiles, Inc., is incorporated in North Carolina.
In 1990, Quintiles Pacific Inc. and Quintiles Ireland Ltd. are established.
In 1991, Quintiles GmbH is established in Germany; Quintiles Laboratories Ltd. is established in Atlanta, Georgia.
In 1996, Quintiles buys Innovex Ltd. and BRI International Inc., becoming the world's largest CRO.
In 1997, Quintiles goes public, completing a successful secondary stock offering.
In 1998, Quintiles becomes the first company in the industry to break the $1 billion mark, when it reports net revenues of $1.19 billion.
In 1999, the company joins the S&P 500 Index.
In 2003, the Board of Directors agrees to merge with Pharma Services Holdings Inc; Quintiles becomes a private company.
In 2009, Quintiles opens new corporate headquarters in Durham, North Carolina.
In 2010, Quintiles opens new European headquarters in the UK and establishes operations in East Africa.
In 2011, Quintiles buys Advion Biosciences, a bioanalytical lab based out of Ithaca, New York.
In 2013, Quintiles files for an IPO on 15 February in order to go public again, and begins trading on the New York Stock Exchange (NYSE) under ticker symbol Q.
IMS Health and Quintiles become IQVIA
In May 2016, Quintiles agreed to merge with IMS Health in a deal worth $9 billion. IMS Health shareholders received 0.384 shares of Quintiles common stock for each share of IMS Health common stock they held, leaving the split of ownership at 51.4% IMS and 48.6% Quintiles. The merger was completed in October and the resulting company was a $17.6 billion company called QuintilesIMS. In November 2017, the company adopted the new name of IQVIA, and changed its ticker symbol on the NYSE from Q to IQV.
Controversies
Throughout its history, the legacy IMS Health's business of collecting anonymized pharmaceutical sales data came under scrutiny from both the media and the legal system.
IMS Health v. Ayotte was a free speech case involving IMS Health.
Sorrell v. IMS Health Inc. was a case about physician-data privacy, which went to the U.S. Supreme Court. The High Court ruled in favor of the company.
IQVIA was contracted by the UK government's Office for National Statistics to provide data on the prevalence of COVID-19 infection in the population. Some users of the survey reported problems contacting IQVIA and arranging for testing. The problems with how the survey results were collected were criticised by New Scientist for potentially leading to biased data.
On July 17, 2023, the Federal Trade Commission sued to block IQVIA’s acquisition of Propel Media alleging in an administrative complaint that the acquisition would give IQVIA a market-leading position in health care programmatic advertising and would raise health-care prices for consumers. In December 2023, U.S. District Court Judge Edgardo Ramos issued an order granting the FTC’s motion for preliminary injunction to block the merger. Speaking in favor of the FTC, Ramos said, "The FTC has shown that there is a reasonable probability that the proposed acquisition will substantially impair competition in the relevant market and that the equities weigh in favor of injunctive relief." An administrative trial was scheduled to start on January 18, 2024. However, on January 5, 2024, IQVIA and Propel Media announced that they had mutually agreed to abandon the proposed merger.
Russian invasion of Ukraine
As of May 25, 2022, IQVIA continues its operations in Russia and is actively hiring within the country. Despite the ongoing war, the company has maintained its presence and activities in Russia, providing services in the pharmaceutical and biopharmaceutical sectors.
References
External links
Companies listed on the New York Stock Exchange
Companies based in Durham, North Carolina
Contract research organizations
Consulting firms established in 1982
Life sciences industry
1982 establishments in North Carolina
2013 initial public offerings
1997 initial public offerings
International management consulting firms | IQVIA | Biology | 2,245 |
32,453,380 | https://en.wikipedia.org/wiki/Memo%20motion | Memo motion or spaced-shot photography is a tool of time and motion study that analyzes long operations by using a camera. It was developed in 1946 by Marvin E. Mundel at Purdue University, who first used it to save film material while planning studies of kitchen work.
Mundel published the method in 1947, with several studies, in his textbook Systematic Motion and Time Study. A study showed the following advantages of Memo-motion with regard to other forms of time and motion study:
Single operator repetition work ...
Area studies, the study of a group of men or machines.
Team studies.
Utilisation studies.
Work measurement.
As a versatile tool of work study it was used in the US to some extent, but rarely in Europe and other industrial countries, mainly because of difficulties procuring the required cameras. Today Memo-motion could have a comeback, because more and more workplaces have conditions that it is well suited to explore.
Scottish motion study pioneer Anne Gillespie Shaw used Memo-motion in a number of films commissioned from her company, The Anne Shaw Organisation, for commercial and public sector organisations.
References
Time and motion study
Industrial equipment | Memo motion | Engineering | 228 |
55,313 | https://en.wikipedia.org/wiki/Allergy | Allergies, also known as allergic diseases, are various conditions caused by hypersensitivity of the immune system to typically harmless substances in the environment. These diseases include hay fever, food allergies, atopic dermatitis, allergic asthma, and anaphylaxis. Symptoms may include red eyes, an itchy rash, sneezing, coughing, a runny nose, shortness of breath, or swelling. Note that food intolerances and food poisoning are separate conditions.
Common allergens include pollen and certain foods. Metals and other substances may also cause such problems. Food, insect stings, and medications are common causes of severe reactions. Their development is due to both genetic and environmental factors. The underlying mechanism involves immunoglobulin E antibodies (IgE), part of the body's immune system, binding to an allergen and then to a receptor on mast cells or basophils where it triggers the release of inflammatory chemicals such as histamine. Diagnosis is typically based on a person's medical history. Further testing of the skin or blood may be useful in certain cases. Positive tests, however, may not necessarily mean there is a significant allergy to the substance in question.
Early exposure of children to potential allergens may be protective. Treatments for allergies include avoidance of known allergens and the use of medications such as steroids and antihistamines. In severe reactions, injectable adrenaline (epinephrine) is recommended. Allergen immunotherapy, which gradually exposes people to larger and larger amounts of allergen, is useful for some types of allergies such as hay fever and reactions to insect bites. Its use in food allergies is unclear.
Allergies are common. In the developed world, about 20% of people are affected by allergic rhinitis, food allergy affects 10% of adults and 8% of children, and about 20% have or have had atopic dermatitis at some point in time. Depending on the country, about 1–18% of people have asthma. Anaphylaxis occurs in between 0.05–2% of people. Rates of many allergic diseases appear to be increasing. The word "allergy" was first used by Clemens von Pirquet in 1906.
Signs and symptoms
Many allergens such as dust or pollen are airborne particles. In these cases, symptoms arise in areas in contact with air, such as the eyes, nose, and lungs. For instance, allergic rhinitis, also known as hay fever, causes irritation of the nose, sneezing, itching, and redness of the eyes. Inhaled allergens can also lead to increased production of mucus in the lungs, shortness of breath, coughing, and wheezing.
Aside from these ambient allergens, allergic reactions can result from foods, insect stings, and reactions to medications like aspirin and antibiotics such as penicillin. Symptoms of food allergy include abdominal pain, bloating, vomiting, diarrhea, itchy skin, and hives. Food allergies rarely cause respiratory (asthmatic) reactions, or rhinitis. Insect stings, food, antibiotics, and certain medicines may produce a systemic allergic response that is also called anaphylaxis; multiple organ systems can be affected, including the digestive system, the respiratory system, and the circulatory system. Depending on the severity, anaphylaxis can include skin reactions, bronchoconstriction, swelling, low blood pressure, coma, and death. This type of reaction can be triggered suddenly, or the onset can be delayed. The nature of anaphylaxis is such that the reaction can seem to be subsiding but may recur throughout a period of time.
Skin
Substances that come into contact with the skin, such as latex, are also common causes of allergic reactions, known as contact dermatitis or eczema. Skin allergies frequently cause rashes, or swelling and inflammation within the skin, in what is known as a "weal and flare" reaction characteristic of hives and angioedema.
With insect stings, a large local reaction may occur in the form of an area of skin redness greater than 10 cm in size that can last one to two days. This reaction may also occur after immunotherapy.
At the molecular level, the skin handles allergens in much the same way the body handles other foreign invaders. The skin forms an effective barrier to the entry of most allergens, but the barrier can be breached: an insect sting, for example, injects allergen directly into the affected spot. When an allergen enters the epidermis or dermis, it triggers a localized allergic reaction that activates the mast cells in the skin, resulting in an immediate increase in vascular permeability and leading to fluid leakage and swelling in the affected area. Mast-cell activation also stimulates a skin lesion called the wheal-and-flare reaction: the release of chemicals from local nerve endings by a nerve axon reflex causes vasodilation of the surrounding cutaneous blood vessels, which reddens the surrounding skin.
As part of the allergic response, some individuals mount a secondary response that causes a more widespread and sustained edematous reaction, usually about 8 hours after the allergen first comes in contact with the skin. When an allergen is ingested, a dispersed form of the wheal-and-flare reaction, known as urticaria or hives, will appear once the allergen enters the bloodstream and eventually reaches the skin. This predictable skin response to allergens allows allergists to test for allergies by injecting a very small amount of an allergen into the skin. Even though these injections are very small and local, they still pose a risk of causing systemic anaphylaxis.
Cause
Risk factors for allergies can be placed in two broad categories, namely host and environmental factors. Host factors include heredity, sex, race, and age, with heredity being by far the most significant. However, there has been a recent increase in the incidence of allergic disorders that cannot be explained by genetic factors alone. Four major environmental candidates are alterations in exposure to infectious diseases during early childhood, environmental pollution, allergen levels, and dietary changes.
Dust mites
Dust mite allergy, also known as house dust allergy, is a sensitization and allergic reaction to the droppings of house dust mites. The allergy is common and can trigger allergic reactions such as asthma, eczema, or itching. The mite's gut contains potent digestive enzymes (notably peptidase 1) that persist in their feces and are major inducers of allergic reactions such as wheezing. The mite's exoskeleton can also contribute to allergic reactions. Unlike scabies mites or skin follicle mites, house dust mites do not burrow under the skin and are not parasitic.
Foods
A wide variety of foods can cause allergic reactions, but 90% of allergic responses to foods are caused by cow's milk, soy, eggs, wheat, peanuts, tree nuts, fish, and shellfish. Other food allergies, affecting less than 1 person per 10,000 population, may be considered "rare". The most common food allergy in the US population is a sensitivity to crustacea. Although peanut allergies are notorious for their severity, peanut allergies are not the most common food allergy in adults or children. Severe or life-threatening reactions may be triggered by other allergens and are more common when combined with asthma.
Rates of allergies differ between adults and children. Children can sometimes outgrow peanut allergies. Egg allergies affect one to two percent of children but are outgrown by about two-thirds of children by the age of 5. The sensitivity is usually to proteins in the white, rather than the yolk.
Milk-protein allergies, distinct from lactose intolerance, are most common in children. Approximately 60% of milk-protein reactions are immunoglobulin E–mediated, with the remainder usually attributable to inflammation of the colon. Some people are unable to tolerate milk from goats or sheep as well as from cows, and many are also unable to tolerate dairy products such as cheese. Roughly 10% of children with a milk allergy will have a reaction to beef. Lactose intolerance, a common reaction to milk, is not a form of allergy at all, but due to the absence of an enzyme in the digestive tract.
Those with tree nut allergies may be allergic to one or many tree nuts, including pecans, pistachios, and walnuts. In addition, seeds, including sesame seeds and poppy seeds, contain oils in which protein is present, which may elicit an allergic reaction.
Allergens can be transferred from one food to another through genetic engineering; however, genetic modification can also remove allergens. Little research has been done on the natural variation of allergen concentrations in unmodified crops.
Latex
Latex can trigger an IgE-mediated cutaneous, respiratory, and systemic reaction. The prevalence of latex allergy in the general population is believed to be less than one percent. In a hospital study, 1 in 800 surgical patients (0.125 percent) reported latex sensitivity, although the sensitivity among healthcare workers is higher, between seven and ten percent. Researchers attribute this higher level to the exposure of healthcare workers to areas with significant airborne latex allergens, such as operating rooms, intensive-care units, and dental suites. These latex-rich environments may sensitize healthcare workers who regularly inhale allergenic proteins.
The most prevalent response to latex is an allergic contact dermatitis, a delayed hypersensitive reaction appearing as dry, crusted lesions. This reaction usually lasts 48–96 hours. Sweating or rubbing the area under the glove aggravates the lesions, possibly leading to ulcerations. Anaphylactic reactions occur most often in sensitive patients who have been exposed to a surgeon's latex gloves during abdominal surgery, but other mucosal exposures, such as dental procedures, can also produce systemic reactions.
Latex and banana sensitivity may cross-react. Furthermore, those with latex allergy may also have sensitivities to avocado, kiwifruit, and chestnut. These people often have perioral itching and local urticaria. Only occasionally have these food-induced allergies induced systemic responses. Researchers suspect that the cross-reactivity of latex with banana, avocado, kiwifruit, and chestnut occurs because latex proteins are structurally homologous with some other plant proteins.
Medications
About 10% of people report that they are allergic to penicillin; however, of that 10%, 90% turn out not to be. Serious allergies only occur in about 0.03%.
Insect stings
One of the main sources of human allergies is insects. An allergy to insects can be brought on by bites, stings, ingestion, and inhalation.
Toxins interacting with proteins
Another non-food protein reaction, urushiol-induced contact dermatitis, originates after contact with poison ivy, eastern poison oak, western poison oak, or poison sumac. Urushiol, which is not itself a protein, acts as a hapten and chemically reacts with, binds to, and changes the shape of integral membrane proteins on exposed skin cells. The immune system does not recognize the affected cells as normal parts of the body, causing a T-cell-mediated immune response.
Of these poisonous plants, sumac is the most virulent. The resulting dermatological response to the reaction between urushiol and membrane proteins includes redness, swelling, papules, vesicles, blisters, and streaking.
Estimates vary on the population fraction that will have an immune system response. Approximately 25% of the population will have a strong allergic response to urushiol. In general, approximately 80–90% of adults will develop a rash if exposed to purified urushiol, but some people are so sensitive that it takes only a molecular trace on the skin to initiate an allergic reaction.
Genetics
Allergic diseases are strongly familial; identical twins are likely to have the same allergic diseases about 70% of the time; the same allergy occurs about 40% of the time in non-identical twins. Allergic parents are more likely to have allergic children and those children's allergies are likely to be more severe than those in children of non-allergic parents. Some allergies, however, are not consistent along genealogies; parents who are allergic to peanuts may have children who are allergic to ragweed. The likelihood of developing allergies is inherited and related to an irregularity in the immune system, but the specific allergen is not.
The risk of allergic sensitization and the development of allergies varies with age, with young children most at risk. Several studies have shown that IgE levels are highest in childhood and fall rapidly between the ages of 10 and 30 years. The peak prevalence of hay fever is highest in children and young adults and the incidence of asthma is highest in children under 10.
Ethnicity may play a role in some allergies; however, racial factors have been difficult to separate from environmental influences and changes due to migration. It has been suggested that different genetic loci are responsible for asthma in people of European, Hispanic, Asian, and African origins.
Individuals differ considerably at the molecular level in how they recognize and respond to foreign substances. These differences arise from each person's DNA, whose genes encode specific molecules and molecular complexes. Because responses vary so widely and allergic disease manifests differently between individuals, a clear genetic basis for the predisposition to and severity of allergic diseases has not yet been fully established. Since much of what drives an allergy is an excessive reaction of the body to its environment, the genes implicated largely concern the regulation of the molecules that mediate that reaction.
Researchers have worked to characterize genes involved in inflammation and the maintenance of mucosal integrity. The identified genes associated with allergic disease severity, progression, and development primarily function in four areas: regulating inflammatory responses (IFN-α, TLR-1, IL-13, IL-4, IL-5, HLA-G, iNOS), maintaining vascular endothelium and mucosal lining (FLG, PLAUR, CTNNA3, PDCH1, COL29A1), mediating immune cell function (PHF11, H1R, HDC, TSLP, STAT6, RERE, PPP2R3C), and influencing susceptibility to allergic sensitization (e.g., ORMDL3, CHI3L1).
Multiple studies have investigated the genetic profiles of individuals with predispositions to and experiences of allergic diseases, revealing a complex polygenic architecture. Specific genetic loci, such as MIIP, CXCR4, SCML4, CYP1B1, ICOS, and LINC00824, have been directly associated with allergic disorders. Additionally, some loci show pleiotropic effects, linking them to both autoimmune and allergic conditions, including PRDM2, G3BP1, HBS1L, and POU2AF1. These genes engage in shared inflammatory pathways across various epithelial tissues—such as the skin, esophagus, vagina, and lung—highlighting common genetic factors that contribute to the pathogenesis of asthma and other allergic diseases.
In atopic patients, transcriptome studies have identified IL-13-related pathways as key for eosinophilic airway inflammation and remodeling, which produce the airflow restriction characteristic of allergic asthma. Gene expression was quite variable: genes associated with inflammation were found almost exclusively in superficial airways, while genes related to airway remodeling were mainly present in endobronchial biopsy specimens. This gene profile was similar across multiple sample types – nasal brushing, sputum, endobronchial brushing – demonstrating the importance of eosinophilic inflammation, mast cell degranulation, and group 3 innate lymphoid cells in severe adult-onset asthma. IL-13 is an immunoregulatory cytokine made mostly by activated T-helper 2 (Th2) cells. It is important for many steps in B-cell maturation and differentiation, since it increases expression of CD23 and MHC class II molecules and aids B-cell isotype switching to IgE. IL-13 also suppresses macrophage function by reducing the release of pro-inflammatory cytokines and chemokines. Strikingly, IL-13 is the prime mover in allergen-induced asthma via pathways that are independent of IgE and eosinophils.
Hygiene hypothesis
Allergic diseases are caused by inappropriate immunological responses to harmless antigens driven by a TH2-mediated immune response. Many bacteria and viruses elicit a TH1-mediated immune response, which down-regulates TH2 responses. The first proposed mechanism of action of the hygiene hypothesis was that insufficient stimulation of the TH1 arm of the immune system leads to an overactive TH2 arm, which in turn leads to allergic disease. In other words, individuals living in too sterile an environment are not exposed to enough pathogens to keep the immune system busy. Since our bodies evolved to deal with a certain level of such pathogens, when they are not exposed to this level, the immune system will attack harmless antigens, and thus normally benign objects, such as pollen, will trigger an immune response.
The hygiene hypothesis was developed to explain the observation that hay fever and eczema, both allergic diseases, were less common in children from larger families, which were, it is presumed, exposed to more infectious agents through their siblings, than in children from families with only one child. It is used to explain the increase in allergic diseases that have been seen since industrialization, and the higher incidence of allergic diseases in more developed countries. The hygiene hypothesis has now expanded to include exposure to symbiotic bacteria and parasites as important modulators of immune system development, along with infectious agents.
Epidemiological data support the hygiene hypothesis. Studies have shown that various immunological and autoimmune diseases are much less common in the developing world than the industrialized world, and that immigrants to the industrialized world from the developing world increasingly develop immunological disorders in relation to the length of time since arrival in the industrialized world. Longitudinal studies in the third world demonstrate an increase in immunological disorders as a country grows more affluent and, it is presumed, cleaner. The use of antibiotics in the first year of life has been linked to asthma and other allergic diseases. The use of antibacterial cleaning products has also been associated with higher incidence of asthma, as has birth by caesarean section rather than vaginal birth.
Stress
Chronic stress can aggravate allergic conditions. This has been attributed to a T helper 2 (TH2)-predominant response driven by suppression of interleukin 12 by both the autonomic nervous system and the hypothalamic–pituitary–adrenal axis. Stress management in highly susceptible individuals may improve symptoms.
Other environmental factors
Allergic diseases are more common in industrialized countries than in countries that are more traditional or agricultural, and there is a higher rate of allergic disease in urban populations versus rural populations, although these differences are becoming less defined. Historically, the trees planted in urban areas were predominantly male to prevent litter from seeds and fruits, but the high ratio of male trees causes high pollen counts, a phenomenon that horticulturist Tom Ogren has called "botanical sexism".
Alterations in exposure to microorganisms are another plausible explanation, at present, for the increase in atopic allergy. Endotoxin exposure reduces release of inflammatory cytokines such as TNF-α, IFNγ, interleukin-10, and interleukin-12 from white blood cells (leukocytes) that circulate in the blood. Certain microbe-sensing proteins, known as Toll-like receptors, found on the surface of cells in the body are also thought to be involved in these processes.
Parasitic worms and similar parasites are present in untreated drinking water in developing countries, and were present in the water of developed countries until the routine chlorination and purification of drinking water supplies. Recent research has shown that some common parasites, such as intestinal worms (e.g., hookworms), secrete chemicals into the gut wall (and, hence, the bloodstream) that suppress the immune system and prevent the body from attacking the parasite. This gives rise to a new slant on the hygiene hypothesis theory—that co-evolution of humans and parasites has led to an immune system that functions correctly only in the presence of the parasites. Without them, the immune system becomes unbalanced and oversensitive.
In particular, research suggests that allergies may coincide with the delayed establishment of gut flora in infants. However, the research to support this theory is conflicting, with some studies performed in China and Ethiopia showing an increase in allergy in people infected with intestinal worms. Clinical trials have been initiated to test the effectiveness of certain worms in treating some allergies. It may be that the term 'parasite' could turn out to be inappropriate, and in fact a hitherto unsuspected symbiosis is at work. For more information on this topic, see Helminthic therapy.
Pathophysiology
Acute response
In the initial stages of allergy, a type I hypersensitivity reaction against an allergen encountered for the first time and presented by a professional antigen-presenting cell causes a response in a type of immune cell called a TH2 lymphocyte, a subset of T cells that produce a cytokine called interleukin-4 (IL-4). These TH2 cells interact with other lymphocytes called B cells, whose role is production of antibodies. Coupled with signals provided by IL-4, this interaction stimulates the B cell to begin production of a large amount of a particular type of antibody known as IgE. Secreted IgE circulates in the blood and binds to an IgE-specific receptor (a kind of Fc receptor called FcεRI) on the surface of other kinds of immune cells called mast cells and basophils, which are both involved in the acute inflammatory response. The IgE-coated cells, at this stage, are sensitized to the allergen.
If later exposure to the same allergen occurs, the allergen can bind to the IgE molecules held on the surface of the mast cells or basophils. Cross-linking of the IgE and Fc receptors occurs when more than one IgE-receptor complex interacts with the same allergenic molecule and activates the sensitized cell. Activated mast cells and basophils undergo a process called degranulation, during which they release histamine and other inflammatory chemical mediators (cytokines, interleukins, leukotrienes, and prostaglandins) from their granules into the surrounding tissue causing several systemic effects, such as vasodilation, mucous secretion, nerve stimulation, and smooth muscle contraction.
This results in rhinorrhea, itchiness, dyspnea, and anaphylaxis. Depending on the individual, allergen, and mode of introduction, the symptoms can be system-wide (classical anaphylaxis) or localized to specific body systems. Asthma is localized to the respiratory system and eczema is localized to the dermis.
Late-phase response
After the chemical mediators of the acute response subside, late-phase responses can often occur. This is due to the migration of other leukocytes such as neutrophils, lymphocytes, eosinophils, and macrophages to the initial site. The reaction is usually seen 2–24 hours after the original reaction. Cytokines from mast cells may play a role in the persistence of long-term effects. Late-phase responses seen in asthma are slightly different from those seen in other allergic responses, although they are still caused by release of mediators from eosinophils and are still dependent on activity of TH2 cells.
Allergic contact dermatitis
Although allergic contact dermatitis is termed an "allergic" reaction (which usually refers to type I hypersensitivity), its pathophysiology involves a reaction that more correctly corresponds to a type IV hypersensitivity reaction. In type IV hypersensitivity, there is activation of certain types of T cells (CD8+) that destroy target cells on contact, as well as activated macrophages that produce hydrolytic enzymes.
Diagnosis
Effective management of allergic diseases relies on the ability to make an accurate diagnosis. Allergy testing can help confirm or rule out allergies. Correct diagnosis, counseling, and avoidance advice based on valid allergy test results reduce the incidence of symptoms and need for medications, and improve quality of life. To assess the presence of allergen-specific IgE antibodies, two different methods can be used: a skin prick test, or an allergy blood test. Both methods are recommended, and they have similar diagnostic value.
Skin prick tests and blood tests are equally cost-effective, and health economic evidence shows that both tests were cost-effective compared with no test. Early and more accurate diagnoses save cost due to reduced consultations, referrals to secondary care, misdiagnosis, and emergency admissions.
Allergy undergoes dynamic changes over time. Regular allergy testing of relevant allergens provides information on if and how patient management can be changed to improve health and quality of life. Annual testing is often the practice for determining whether allergies to milk, egg, soy, and wheat have been outgrown, and the testing interval is extended to 2–3 years for allergies to peanut, tree nuts, fish, and crustacean shellfish. Results of follow-up testing can guide decision-making regarding whether and when it is safe to introduce or re-introduce allergenic food into the diet.
Skin prick testing
Skin testing is also known as "puncture testing" and "prick testing" due to the series of tiny punctures or pricks made into the patient's skin. Tiny amounts of suspected allergens and/or their extracts (e.g., pollen, grass, mite proteins, peanut extract) are introduced to sites on the skin marked with pen or dye (the ink/dye should be carefully selected, lest it cause an allergic response itself). A negative and a positive control are also included for comparison (e.g., saline or glycerin as the negative control and histamine as the positive control). A small plastic or metal device is used to puncture or prick the skin. Sometimes, the allergens are injected "intradermally" into the patient's skin with a needle and syringe. Common areas for testing include the inside forearm and the back.
If the patient is allergic to the substance, then a visible inflammatory reaction will usually occur within 30 minutes. This response will range from slight reddening of the skin to a full-blown hive (called "wheal and flare"), similar to a mosquito bite, in more sensitive patients. Interpretation of the results of the skin prick test is normally done by allergists on a scale of severity, with +/− meaning borderline reactivity and 4+ being a large reaction. Increasingly, allergists are measuring and recording the diameter of the wheal and flare reaction. Interpretation by well-trained allergists is often guided by relevant literature.
In general, a positive response is interpreted when the wheal of an antigen is ≥ 3 mm larger than the wheal of the negative control (e.g., saline or glycerin). Some patients may believe they have determined their own allergic sensitivity from observation, but a skin test has been shown to be much better than patient observation at detecting allergy.
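The "≥ 3 mm larger than the negative control" rule lends itself to a one-line check. The following Python sketch is illustrative only: the function name, default threshold, and sample measurements are assumptions for illustration, and real readings are interpreted by trained allergists.

```python
def prick_test_positive(wheal_mm, negative_control_mm, threshold_mm=3.0):
    """Conventional reading: positive if the test wheal is at least
    threshold_mm larger than the negative-control wheal."""
    return (wheal_mm - negative_control_mm) >= threshold_mm

# A hypothetical 6 mm wheal against a 1 mm saline control reads positive:
print(prick_test_positive(6.0, 1.0))  # True
```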
If a serious life-threatening anaphylactic reaction has brought a patient in for evaluation, some allergists will prefer an initial blood test prior to performing the skin prick test. Skin tests may not be an option if the patient has widespread skin disease or has taken antihistamines in the last several days.
Patch testing
Patch testing is a method used to determine if a specific substance causes allergic inflammation of the skin. It tests for delayed reactions. It is used to help ascertain the cause of skin contact allergy or contact dermatitis. Adhesive patches, usually treated with several common allergic chemicals or skin sensitizers, are applied to the back. The skin is then examined for possible local reactions at least twice, usually at 48 hours after application of the patch, and again two or three days later.
Blood testing
An allergy blood test is quick and simple and can be ordered by a licensed health care provider (e.g., an allergy specialist) or general practitioner. Unlike skin-prick testing, a blood test can be performed irrespective of age, skin condition, medication, symptom, disease activity, and pregnancy. Adults and children of any age can get an allergy blood test. For babies and very young children, a single needle stick for allergy blood testing is often gentler than several skin pricks.
An allergy blood test is available through most laboratories. A sample of the patient's blood is sent to a laboratory for analysis, and the results are sent back a few days later. Multiple allergens can be detected with a single blood sample. Allergy blood tests are very safe since the person is not exposed to any allergens during the testing procedure. After the onset of anaphylaxis or a severe allergic reaction, guidelines recommend emergency departments obtain a time-sensitive blood test to determine blood tryptase levels and assess for mast cell activation.
The test measures the concentration of specific IgE antibodies in the blood. Quantitative IgE test results increase the possibility of ranking how different substances may affect symptoms. A rule of thumb is that the higher the IgE antibody value, the greater the likelihood of symptoms. Allergens found at low levels that today do not result in symptoms cannot help predict future symptom development. The quantitative allergy blood result can help determine what a patient is allergic to, help predict and follow the disease development, estimate the risk of a severe reaction, and explain cross-reactivity.
A low total IgE level is not adequate to rule out sensitization to commonly inhaled allergens. Statistical methods, such as ROC curves, predictive value calculations, and likelihood ratios, have been used to examine the relationship of various testing methods to each other. These methods have shown that patients with a high total IgE have a high probability of allergic sensitization, but further investigation with allergy tests for specific IgE antibodies against a carefully chosen panel of allergens is often warranted.
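As a small worked example of the predictive value calculations mentioned above, the positive predictive value of a test follows from Bayes' rule. The sensitivity, specificity, and prevalence in the sketch below are made-up illustrative figures, not data from any study.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(truly sensitized | positive test), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 85% sensitivity, 90% specificity, 20% prevalence.
print(round(positive_predictive_value(0.85, 0.90, 0.20), 2))  # 0.68
```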
Laboratory methods to measure specific IgE antibodies for allergy testing include enzyme-linked immunosorbent assay (ELISA, or EIA), radioallergosorbent test (RAST), fluorescent enzyme immunoassay (FEIA), and chemiluminescence immunoassay (CLIA).
Other testing
Challenge testing: Challenge testing is when tiny amounts of a suspected allergen are introduced to the body orally, through inhalation, or via other routes. Except for testing food and medication allergies, challenges are rarely performed. When this type of testing is chosen, it must be closely supervised by an allergist.
Elimination/challenge tests: This testing method is used most often with foods or medicines. A patient with a suspected allergen is instructed to modify his diet to totally avoid that allergen for a set time. If the patient experiences significant improvement, he may then be "challenged" by reintroducing the allergen, to see if symptoms are reproduced.
Unreliable tests: There are other types of allergy testing methods that are unreliable, including applied kinesiology (allergy testing through muscle relaxation), cytotoxicity testing, urine autoinjection, skin titration (Rinkel method), and provocative and neutralization (subcutaneous) testing or sublingual provocation.
Differential diagnosis
Before a diagnosis of allergic disease can be confirmed, other plausible causes of the presenting symptoms must be considered. Vasomotor rhinitis, for example, is one of many illnesses that share symptoms with allergic rhinitis, underscoring the need for professional differential diagnosis. Once a diagnosis of asthma, rhinitis, anaphylaxis, or other allergic disease has been made, there are several methods for discovering the causative agent of that allergy.
Prevention
Giving peanut products early in childhood may decrease the risk of allergies, and exclusive breastfeeding during at least the first few months of life may decrease the risk of allergic dermatitis. There is little evidence that a mother's diet during pregnancy or breastfeeding affects the risk of allergies, although there has been some research to show that irregular cow's milk exposure might increase the risk of cow's milk allergy. Evidence that delayed introduction of certain foods is useful is limited; early exposure to potential allergens may actually be protective.
Fish oil supplementation during pregnancy is associated with a lower risk of food sensitivities. Probiotic supplements during pregnancy or infancy may help to prevent atopic dermatitis.
Management
Management of allergies typically involves avoiding the allergy trigger and taking medications to improve the symptoms. Allergen immunotherapy may be useful for some types of allergies.
Medication
Several medications may be used to block the action of allergic mediators, or to prevent activation of cells and degranulation processes. Antihistamines, glucocorticoids, epinephrine (adrenaline), mast cell stabilizers, and antileukotriene agents are common treatments of allergic diseases. Anticholinergics, decongestants, and other compounds thought to impair eosinophil chemotaxis are also commonly used. Although rare, the severity of anaphylaxis often requires epinephrine injection, and where medical care is unavailable, a device known as an epinephrine autoinjector may be used.
Immunotherapy
Allergen immunotherapy is useful for environmental allergies, allergies to insect bites, and asthma. Its benefit for food allergies is unclear and thus not recommended. Immunotherapy involves exposing people to larger and larger amounts of allergen in an effort to change the immune system's response.
Meta-analyses have found that injections of allergens under the skin are effective in the treatment of allergic rhinitis in children and in asthma. The benefits may last for years after treatment is stopped. It is generally safe and effective for allergic rhinitis and conjunctivitis, allergic forms of asthma, and stinging insects.
To a lesser extent, the evidence also supports the use of sublingual immunotherapy for rhinitis and asthma. For seasonal allergies the benefit is small. In this form the allergen is given under the tongue and people often prefer it to injections. Immunotherapy is not recommended as a stand-alone treatment for asthma.
Alternative medicine
An experimental treatment, enzyme potentiated desensitization (EPD), has been tried for decades but is not generally accepted as effective. EPD uses dilutions of allergen and an enzyme, beta-glucuronidase, to which T-regulatory lymphocytes are supposed to respond by favoring desensitization, or down-regulation, rather than sensitization. EPD has also been tried for the treatment of autoimmune diseases, but evidence does not show effectiveness.
A review found no effectiveness of homeopathic treatments and no difference compared with placebo. The authors concluded that based on rigorous clinical trials of all types of homeopathy for childhood and adolescence ailments, there is no convincing evidence that supports the use of homeopathic treatments.
According to the National Center for Complementary and Integrative Health, U.S., the evidence is relatively strong that saline nasal irrigation and butterbur are effective, when compared to other alternative medicine treatments, for which the scientific evidence is weak, negative, or nonexistent, such as honey, acupuncture, omega 3's, probiotics, astragalus, capsaicin, grape seed extract, Pycnogenol, quercetin, spirulina, stinging nettle, tinospora, or guduchi.
Epidemiology
The allergic diseases—hay fever and asthma—have increased in the Western world over the past 2–3 decades. Increases in allergic asthma and other atopic disorders in industrialized nations, it is estimated, began in the 1960s and 1970s, with further increases occurring during the 1980s and 1990s, although some suggest that a steady rise in sensitization has been occurring since the 1920s. The number of new cases per year of atopy in developing countries has, in general, remained much lower.
Changing frequency
Although genetic factors govern susceptibility to atopic disease, increases in atopy have occurred within too short a period to be explained by a genetic change in the population, thus pointing to environmental or lifestyle changes. Several hypotheses have been identified to explain this increased rate. Increased exposure to perennial allergens may be due to housing changes and increased time spent indoors, and a decreased activation of a common immune control mechanism may be caused by changes in cleanliness or hygiene, and exacerbated by dietary changes, obesity, and decline in physical exercise. The hygiene hypothesis maintains that high living standards and hygienic conditions expose children to fewer infections. It is thought that reduced bacterial and viral infections early in life direct the maturing immune system away from TH1 type responses, leading to unrestrained TH2 responses that allow for an increase in allergy.
Changes in rates and types of infection alone, however, have been unable to explain the observed increase in allergic disease, and recent evidence has focused attention on the importance of the gastrointestinal microbial environment. Evidence has shown that exposure to food and fecal-oral pathogens, such as hepatitis A, Toxoplasma gondii, and Helicobacter pylori (which also tend to be more prevalent in developing countries), can reduce the overall risk of atopy by more than 60%, and an increased rate of parasitic infections has been associated with a decreased prevalence of asthma. It is speculated that these infections exert their effect by critically altering TH1/TH2 regulation. Important elements of newer hygiene hypotheses also include exposure to endotoxins, exposure to pets and growing up on a farm.
History
Some symptoms attributable to allergic diseases are mentioned in ancient sources. Particularly, three members of the Roman Julio-Claudian dynasty (Augustus, Claudius and Britannicus) are suspected to have a family history of atopy. The concept of "allergy" was originally introduced in 1906 by the Viennese pediatrician Clemens von Pirquet, after he noticed that patients who had received injections of horse serum or smallpox vaccine usually had quicker, more severe reactions to second injections. Pirquet called this phenomenon "allergy" from the Ancient Greek words ἄλλος allos meaning "other" and ἔργον ergon meaning "work".
All forms of hypersensitivity used to be classified as allergies, and all were thought to be caused by an improper activation of the immune system. Later, it became clear that several different disease mechanisms were implicated, with a common link to a disordered activation of the immune system. In 1963, a new classification scheme was designed by Philip Gell and Robin Coombs that described four types of hypersensitivity reactions, known as Type I to Type IV hypersensitivity.
With this new classification, the word allergy, sometimes clarified as a true allergy, was restricted to type I hypersensitivities (also called immediate hypersensitivity), which are characterized as rapidly developing reactions involving IgE antibodies.
A major breakthrough in understanding the mechanisms of allergy was the discovery of the antibody class labeled immunoglobulin E (IgE). IgE was simultaneously discovered in 1966–67 by two independent groups: Ishizaka's team at the Children's Asthma Research Institute and Hospital in Denver, USA, and Gunnar Johansson and Hans Bennich in Uppsala, Sweden. Their joint paper was published in April 1969.
Diagnosis
Radiometric assays include the radioallergosorbent test (RAST test) method, which uses IgE-binding (anti-IgE) antibodies labeled with radioactive isotopes for quantifying the levels of IgE antibody in the blood.
The RAST methodology was invented and marketed in 1974 by Pharmacia Diagnostics AB, Uppsala, Sweden, and the acronym RAST is actually a brand name. In 1989, Pharmacia Diagnostics AB replaced it with a superior test named the ImmunoCAP Specific IgE blood test, which uses the newer fluorescence-labeled technology.
The American College of Allergy, Asthma and Immunology (ACAAI) and the American Academy of Allergy, Asthma and Immunology (AAAAI) issued the Joint Task Force Report "Pearls and pitfalls of allergy diagnostic testing" in 2008; the report is firm in its statement that the term RAST is now obsolete.
The updated version, the ImmunoCAP Specific IgE blood test, is the only specific IgE assay to receive Food and Drug Administration approval to quantitatively report to its detection limit of 0.1 kU/L.
Medical specialty
The medical specialty that studies, diagnoses, and treats diseases caused by allergies is called allergology.
An allergist is a physician specially trained to manage and treat allergies, asthma, and the other allergic diseases. In the United States physicians holding certification by the American Board of Allergy and Immunology (ABAI) have successfully completed an accredited educational program and evaluation process, including a proctored examination to demonstrate knowledge, skills, and experience in patient care in allergy and immunology. Becoming an allergist/immunologist requires completion of at least nine years of training.
After completing medical school and graduating with a medical degree, a physician will undergo three years of training in internal medicine (to become an internist) or pediatrics (to become a pediatrician). Once physicians have finished training in one of these specialties, they must pass the exam of either the American Board of Pediatrics (ABP), the American Osteopathic Board of Pediatrics (AOBP), the American Board of Internal Medicine (ABIM), or the American Osteopathic Board of Internal Medicine (AOBIM). Internists or pediatricians wishing to focus on the sub-specialty of allergy-immunology then complete at least an additional two years of study, called a fellowship, in an allergy/immunology training program. Allergist/immunologists listed as ABAI-certified have successfully passed the certifying examination of the ABAI following their fellowship.
In the United Kingdom, allergy is a subspecialty of general medicine or pediatrics. After obtaining postgraduate exams (MRCP or MRCPCH), a doctor works for several years as a specialist registrar before qualifying for the General Medical Council specialist register. Allergy services may also be delivered by immunologists. A 2003 Royal College of Physicians report presented a case for improvement of what were felt to be inadequate allergy services in the UK.
In 2006, the House of Lords convened a subcommittee. It concluded likewise in 2007 that allergy services were insufficient to deal with what the Lords referred to as an "allergy epidemic" and its social cost; it made several recommendations.
Research
Low-allergen foods are being developed, as are improvements in skin prick test predictions, evaluation of the atopy patch test, prediction of wasp sting outcomes, a rapidly disintegrating epinephrine tablet, and anti-IL-5 treatments for eosinophilic diseases.
See also
Allergic shiner
GWAS in allergy
Histamine intolerance
List of allergens
Oral allergy syndrome
References
External links
Effects of external causes
Immunology
Respiratory diseases
Immune system
Immune system disorders | Allergy | Biology | 9,506
65,062,767 | https://en.wikipedia.org/wiki/Matthias%20L%C3%BCtolf | Matthias Lutolf (born in 1973, also known as Matthias Lütolf) is a bio-engineer and a professor at EPFL (École Polytechnique Fédérale de Lausanne), where he leads the Laboratory of Stem Cell Bioengineering. He specialises in biomaterials and in combining stem cell biology and engineering to develop improved organoid models. In 2021, he became the scientific director of Roche's Institute for Translational Bioengineering in Basel.
Career
Lutolf studied materials engineering at ETH Zurich where he graduated in 1998. In 2002, he received his PhD in biomedical engineering from ETH Zurich for his studies on cell-responsive hydrogels for tissue engineering and cell culture, in the group of Jeffrey Hubbell. He completed postdoctoral studies in the laboratory of Helen Blau at Stanford University, where he worked on novel cell culture approaches for blood and muscle stem cells, so called synthetic niches. In 2007, he founded his own laboratory at EPFL, where he was promoted to associate professor in 2014 and full professor in 2018. From 2014 to 2018, he was director of EPFL's Institute of Bioengineering. In June 2021, Lutolf became scientific director of the newly established Roche Institute for Translational Bioengineering in Basel, Switzerland.
Research
Lutolf's laboratory develops in vitro organoids mimicking healthy and diseased tissues and organs. Specifically, Lutolf uses bioengineering strategies to guide stem cell-based development to build novel organoids with improved reproducibility and physiological relevance for basic science and in vitro testing of drug candidates. His team has developed approaches to generate organoids in fully controllable 3D matrices, and has contributed to the understanding of how extrinsic biochemical and physical factors control stem cell fate and organogenesis. His team has developed concepts based on microfabrication, bioprinting, and microfluidics to improve the reproducibility, size, shape, and function of organoids.
Distinctions
In 2007, Lutolf received the European Young Investigator (EURYI) Award from the European Science Foundation. Since 2018, he has been an elected member of the European Molecular Biology Organization (EMBO). He serves as associate editor of The Company of Biologists' journal Development.
Publications
External links
Website of the Laboratory of Stem Cell Bioengineering
References
Academic staff of the École Polytechnique Fédérale de Lausanne
ETH Zurich alumni
Stanford University people
21st-century Swiss biologists
Living people
1973 births
Bioengineers | Matthias Lütolf | Engineering,Biology | 521 |
36,467,210 | https://en.wikipedia.org/wiki/Mingarelli%20identity | In the field of ordinary differential equations, the Mingarelli identity is a theorem that provides criteria for the oscillation and non-oscillation of solutions of some linear differential equations in the real domain. It extends the Picone identity from two to three or more differential equations of the second order.
The identity
Consider the solutions of the following (uncoupled) system of second order linear differential equations over the $t$–interval $[a, b]$:

$(p_i(t)\,x_i')' + q_i(t)\,x_i = 0,$

where $i = 1, 2, \ldots, n$.
Let $\Delta$ denote the forward difference operator, i.e.

$\Delta a_k = a_{k+1} - a_k.$

The second order difference operator is found by iterating the first order operator as in

$\Delta^2 a_k = \Delta(\Delta a_k) = a_{k+2} - 2a_{k+1} + a_k,$
with a similar definition for the higher iterates. Leaving out the independent variable $t$ for convenience, and assuming the $x_i(t) \neq 0$ on $[a, b]$, there holds the identity,
where
$r_i = x_i'/x_i$ is the logarithmic derivative of $x_i$,
$W(x_i, x_j) = x_i x_j' - x_i' x_j$ is the Wronskian determinant of $x_i$ and $x_j$,
$\binom{n-1}{k}$ are binomial coefficients.
When $n = 2$ this equality reduces to the Picone identity.
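For reference, a standard form of the Picone identity, stated here for solutions $x_1, x_2$ of the first two equations with $x_2 \neq 0$ (sign conventions vary between sources, so this is given as a reminder rather than a quotation of the original), is

$$\frac{d}{dt}\left[\frac{x_1}{x_2}\bigl(p_1 x_1' x_2 - p_2 x_1 x_2'\bigr)\right] = (q_2 - q_1)\,x_1^{2} + (p_1 - p_2)\,(x_1')^{2} + p_2\left(x_1' - \frac{x_2'}{x_2}\,x_1\right)^{2}.$$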
An application
The above identity leads quickly to the following comparison theorem for three linear differential equations, which extends the classical Sturm–Picone comparison theorem.
Let $p_i$, $q_i$, $i = 1, 2, 3$, be real-valued continuous functions on the interval $[a, b]$ and let
$(p_i(t)\,x_i')' + q_i(t)\,x_i = 0, \qquad x_i(a) = 1, \quad x_i'(a) = R_i,$

be three homogeneous linear second order differential equations in self-adjoint form, where
$p_i(t) > 0$ for each $i$ and for all $t$ in $[a, b]$, and
the $R_i$ are arbitrary real numbers.
Assume that for all $t$ in $[a, b]$ we have,

$\Delta^2(q_1) \ge 0,$

$\Delta^2(p_1) \le 0,$

$\Delta^2(p_1(a)\,R_1) \le 0,$

where the differences are taken over the equation index (so that, for example, $\Delta^2(q_1) = q_3 - 2q_2 + q_1$).
Then, if $x_1 \neq 0$ on $[a, b]$ and $x_2 \neq 0$ on $[a, b]$, any solution $x_3$ of the third equation has at least one zero in $[a, b]$.
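With the hypotheses as reconstructed above, the second differences are simple to evaluate numerically. The following Python sketch is illustrative only: the coefficients are arbitrary constants chosen to satisfy the sign conditions, not data from the source.

```python
def delta2(a1, a2, a3):
    """Second forward difference over the equation index: a3 - 2*a2 + a1."""
    return a3 - 2 * a2 + a1

# Arbitrary constant coefficients with p_i > 0, for illustration only:
p = (3.0, 2.0, 1.0)   # delta2(*p) = 0.0, so the condition <= 0 holds
q = (1.0, 2.0, 4.0)   # delta2(*q) = 1.0, so the condition >= 0 holds

print(delta2(*q) >= 0 and delta2(*p) <= 0)  # True
```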
Notes
References
Ordinary differential equations
Mathematical identities | Mingarelli identity | Mathematics | 293 |
575,388 | https://en.wikipedia.org/wiki/Hand-kissing | Hand-kissing is a greeting gesture that indicates courtesy, politeness, respect, admiration, affection or even devotion by one person toward another. A hand-kiss is considered a respectful way for a gentleman to greet a lady. Non-ritual hand-kissing is now rare and takes place mostly in conservative or diplomatic contexts; the hand kiss has largely been replaced by a kiss on the cheek or a handshake.
A non-ritual hand-kiss can be initiated by the lady, who would hold out her right hand with the back of the hand facing upward; or by the gentleman extending his right hand with the palm facing upward to invite the lady to put her right hand lightly on it facing downward. The gentleman may bow towards the offered hand and (often symbolically) would touch her knuckles with his lips, while lightly holding the offered hand. However, the lips do not actually touch the hand in modern tradition, especially in a formal environment where any intimate or romantic undertones could be considered inappropriate. The gesture is short, lasting less than a second.
Around the world
In the Arab world, Iran, Turkey, Malaysia, Indonesia, and Brunei, hand-kissing is a common way to greet elder people of all genders, primarily the closest relatives (both parents, grandparents, and uncles or aunts) and teachers. Occasionally, after kissing the hand, the greeter will draw the hand to his own forehead. In the Philippines, the gesture evolved into just touching the hand to the forehead; hand-kissing itself has become a separate kind of gesture that has merged with the European custom concerning when it may be used.
In Southern Italy, especially Sicily, the verbal greeting "I kiss the hands." derives from this usage. Similarly, in Hungary the verbal greeting "I kiss your hand." (Hungarian: "Kezét csókolom.") is sometimes used, especially when greeting elders and in rural communities. The shortened version "I kiss it." (Hungarian: "Csókolom.") is more widespread. A similar expression also exists in Poland (Polish: "Całuję rączki", meaning "I kiss [your] little hands"), although nowadays it is considered obsolete.
In Romania, the gesture is reserved for priests and women, and it is a common greeting when first introduced to a woman in parts of the country. The verbal expression towards women is "I kiss your hand" (Romanian: "sarut mana", sometimes shortened to "saru-mana"). Towards priests it is sometimes changed to "I kiss your right", owing to the belief that the right hand of the priest is holy and blessed regardless of the priest himself and any eventual shortcomings. In the past, both parents would have their hands kissed, seen as a type of blessing; however, the expression is now almost exclusively directed towards women.
Chivalrous gesture
A hand-kiss was considered a respectful way for a gentleman to greet a lady. The practice originated in the Polish–Lithuanian Commonwealth and the Spanish courts of the 17th and 18th centuries. The gesture is still at times observed in Central Europe, namely in Poland, Austria and Hungary, among others.
Traditionally, the hand-kiss was initiated by a woman, who offered her hand to a man to kiss. The lady offering her hand was expected to be of the same or higher social status than the man. It was a gesture of courtesy and extreme politeness, and it was considered impolite and even rude to refuse an offered hand. Today, the practice is very uncommon in many European countries, and has been largely replaced by a kiss on the cheek or a handshake.
Kissing the ring
Kissing the hand, or particularly a ring on the hand was also a gesture of formal submission or pledge of allegiance of man to man, or as a diplomatic gesture. The gesture would indicate submission by kissing the signet ring (a form of seal worn as a jewelry ring), the person's symbol of authority. The gesture was common in the European upper class throughout the 18th and 19th centuries. It started to disappear in the 20th century, to be replaced by the egalitarian handshake. However, former French president Jacques Chirac made hand-kissing his trademark and the gesture is still encountered in diplomatic situations.
Religious usage
In the Catholic Church, a Catholic meeting the Pope or a Cardinal, or even a lower-ranking prelate, will kiss the ring on his hand. This has become uncommon in circles not used to formal protocol, even often dispensed with amongst clergy. Sometimes, the devout Catholic combines the hand kissing with kneeling on the left knee as an even stronger expression of filial respect for the clerically high-ranking father. The cleric may then in a fatherly way lay his other hand on the kisser's head or even bless him/her by a manual cross sign. In the Catholic Church, it is also traditional for the laity to kiss the hands of a newly-ordained priest after his inaugural mass, in veneration of the Body of Christ, which is held in the priest's hands during the Holy Eucharist. In May 2014, Pope Francis kissed the hands of six Holocaust survivors to honour the six million Jews killed in the Holocaust.
In the Eastern Orthodox Church, and Oriental Orthodox Churches, it is appropriate and common for laity to greet clergy, whether priests or bishops, by making a profound bow and saying, "Father, bless" (to a priest) or "Master, bless" (to a bishop) while placing their right hand, palm up, in front of their bodies. The priest then blesses them with the sign of the cross and then places his hand in theirs, offering the opportunity to kiss his hand. Orthodox Christians kiss their priest's hands not only to honor their spiritual father confessor, but in veneration of the Body of Christ which the priest handles during the Divine Liturgy as he prepares Holy Communion. It is also a common practice when writing a letter to a priest to begin with the words "Father Bless" rather than "Dear Father" and end the letter with the words "Kissing your right hand" rather than "Sincerely."
During liturgical services, altar servers and lower clergy will kiss the hand of a priest when handing him something in the course of their duties, such as a censer, when he receives it in his right hand, and a bishop when he receives it in either hand since a bishop bestows blessings with both hands.
There are records of hand-kissing in the Islamic Caliphate as early as the 7th century. Hand-kissing, known as Taqbil, is practiced as a sign of respect for nobility by the Hadharem of Yemen.
In popular culture
The hand-kiss is used quite prominently in The Godfather series, as a way to indicate the person who is the Don. It also features in period films, such as Dangerous Liaisons.
See also
Greeting
Salute
Kissing hands
Mano (gesture)
References
External links
Catholics kissing prelate's hands on a church watching blog
Gestures
Kissing
Gestures of respect
Bowing
Hand | Hand-kissing | Biology | 1,441 |
12,213,197 | https://en.wikipedia.org/wiki/Mixed-function%20oxidase | Mixed-function oxidase is the name of a family of oxidase enzymes that catalyze a reaction in which each of the two atoms of oxygen in O2 is used for a different function in the reaction.
Oxidase is a general name for enzymes that catalyze oxidations in which molecular oxygen is the electron acceptor but oxygen atoms do not appear in the oxidized product. Often, oxygen is reduced to either water (cytochrome oxidase of the mitochondrial electron transfer chain) or hydrogen peroxide (dehydrogenation of fatty acyl-CoA in peroxisomes). Most of the oxidases are flavoproteins.
The name "mixed-function oxidase" indicates that the enzyme oxidizes two different substrates simultaneously. Desaturation of fatty acyl-CoA in vertebrates is an example of the mixed-function oxidase reaction. In the process, saturated fatty acyl-CoA and NADPH are oxidized by molecular oxygen (O2) to produce monounsaturated fatty acyl-CoA, NADP+ and 2 molecules of water.
Reaction
The mixed-function oxidase reaction proceeds as follows:
AH + BH2 + O2 → AOH + B + H2O

where AH is the substrate that receives one atom of oxygen as a hydroxyl group, BH2 is the co-substrate that donates the electrons, and the second atom of oxygen is reduced to water.
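Written as a balanced equation, the fatty acyl-CoA desaturation described in the introduction is an instance of this scheme (the H+ shown is implied by mass balance rather than stated explicitly above):

saturated fatty acyl-CoA + NADPH + H+ + O2 → monounsaturated fatty acyl-CoA + NADP+ + 2 H2O

In this case, both atoms of oxygen end up in water: the electrons come partly from NADPH (the BH2 role) and partly from the fatty acyl chain, which loses two hydrogens to form the double bond instead of gaining a hydroxyl group.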
Medical significance
High levels of mixed-function oxidase activity have been studied for their activation effects in human colon carcinoma cell lines, to study the susceptibility to certain cancers. The research has been successful in mice but remains inconclusive in humans.
References
Oxidoreductases | Mixed-function oxidase | Chemistry,Biology | 338 |
4,267,984 | https://en.wikipedia.org/wiki/Rydberg%20state | The Rydberg states of an atom or molecule are electronically excited states with energies that follow the Rydberg formula as they converge on an ionic state with an ionization energy. Although the Rydberg formula was developed to describe atomic energy levels, it has been used to describe many other systems that have electronic structure roughly similar to atomic hydrogen. In general, at sufficiently high principal quantum numbers, an excited electron-ionic core system will have the general character of a hydrogenic system and the energy levels will follow the Rydberg formula. Rydberg states have energies converging on the energy of the ion. The ionization energy threshold is the energy required to completely liberate an electron from the ionic core of an atom or molecule. In practice, a Rydberg wave packet is created by a laser pulse on a hydrogenic atom and thus populates a superposition of Rydberg states. Modern investigations using pump-probe experiments show molecular pathways – e.g. dissociation of (NO)2 – via these special states.
Rydberg series
Rydberg series describe the energy levels associated with partially removing an electron from the ionic core. Each Rydberg series converges on an ionization energy threshold associated with a particular ionic core configuration. These quantized Rydberg energy levels can be associated with the quasiclassical Bohr atomic picture. The closer the energy lies to the ionization threshold, the higher the principal quantum number and the smaller the energy difference between near-threshold Rydberg states. As the electron is promoted to higher energy levels, the spatial excursion of the electron from the ionic core increases and the system becomes more like the Bohr quasiclassical picture.
Energy of Rydberg states
The energy of Rydberg states can be refined by including a correction called the quantum defect in the Rydberg formula. The "quantum defect" correction is associated with the presence of a distributed ionic core. Even for many electronically excited molecular systems, the ionic core interaction with an excited electron can take on the general aspects of the interaction between the proton and the electron in the hydrogen atom. The spectroscopic assignment of these states follows the Rydberg formula and they are called Rydberg states of molecules.
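As an illustration of how such levels converge on the ionization threshold, the Python sketch below evaluates E_n = −Ry/(n − δ)² in electronvolts; it is illustrative only, and the quantum defect value 0.35 is an arbitrary example rather than a measured constant.

```python
RYDBERG_EV = 13.605693  # Rydberg unit of energy, in electronvolts

def rydberg_level(n, defect=0.0):
    """Energy of the n-th series member relative to the ionization
    threshold (taken as zero), using effective quantum number n - defect."""
    return -RYDBERG_EV / (n - defect) ** 2

# Successive levels crowd together as they approach the threshold:
for n in (5, 10, 20, 40):
    print(n, round(rydberg_level(n, defect=0.35), 4))
```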
Molecular Rydberg states
Although the energy formula of Rydberg series is a result of hydrogen-like atom structure, Rydberg states are also present in molecules. Wave functions of high Rydberg states are very diffuse and span diameters that approach infinity. As a result, any isolated neutral molecule behaves like a hydrogen-like atom at the Rydberg limit. For molecules with multiple stable monovalent cations, multiple Rydberg series may exist. Because of the complexity of molecular spectra, low-lying Rydberg states of molecules are often mixed with valence states with similar energy and are thus not pure Rydberg states.
See also
Rydberg atom
Rydberg matter
Orbital state
References
Atomic Spectra and Atomic Structure, Gerhard Herzberg, Prentice-Hall, 1937.
Atoms and Molecules, Martin Karplus and Richard N. Porter, Benjamin & Company, Inc., 1970.
External links
Army Creates Quantum Sensor That Detects Entire Radio-Frequency Spectrum; Defense One.
Rydberg Atoms and the Quantum Defect; Physics Department, Davidson College.
Rydberg Transitions; Chemistry and Biochemistry, Georgia Tech.
Atomic physics
Atomic, molecular, and optical physics | Rydberg state | Physics,Chemistry | 669 |
68,261,023 | https://en.wikipedia.org/wiki/Rope-burning%20puzzle | In recreational mathematics, rope-burning puzzles are a class of mathematical puzzle in which one is given lengths of rope, fuse cord, or shoelace that each burn for a given amount of time, and matches to set them on fire, and must use them to measure a non-unit amount of time. The fusible numbers are defined as the amounts of time that can be measured in this way.
As well as being of recreational interest, these puzzles are sometimes posed at job interviews as a test of candidates' problem-solving ability, and have been suggested as an activity for middle school mathematics students.
Example
A common and simple version of this problem asks to measure a time of 45 seconds using only two fuses that each burn for a minute. The assumptions of the problem are usually specified in a way that prevents measuring out 3/4 of the length of one fuse and burning it end-to-end, for instance by stating that the fuses burn unevenly along their length.
One solution to this problem is to perform the following steps:
Light one end of the first fuse, and both ends of the second fuse.
Once the second fuse has burned out, 30 seconds have elapsed, and there are 30 seconds of burn time left on the first fuse. Light the other end of the first fuse.
Once the first fuse burns out, 45 seconds have elapsed.
Many other variations are possible, in some cases using fuses that burn for different amounts of time from each other.
Fusible numbers
In common versions of the problem, each fuse lasts for a unit length of time, and the only operations used or allowed in the solution are to light one or both ends of a fuse at known times, determined either as the start of the solution or as the time that another fuse burns out. If only one end of a fuse is lit at time x, it will burn out at time x + 1. If both ends of a fuse are lit at times x and y (with y ≥ x and y − x < 1), it will burn out at time (x + y + 1)/2, because a portion of length y − x is burnt at the original rate, and the remaining portion of length 1 − (y − x) is burnt at twice the original rate, hence the fuse burns out at
y + (1 − (y − x))/2 = (x + y + 1)/2.
A number x is a fusible number if it is possible to use unit-time fuses to measure out x units of time using only these operations. For instance, by the solution to the example problem, 3/4 is a fusible number.
One may assume without loss of generality that every fuse is lit at both ends, by replacing a fuse that is lit only at one end at time x by two fuses, the first one lit at both ends at time x (burning out at time x + 1/2) and the second one lit at both ends at time x + 1/2, when the first fuse burns out.
In this way, the fusible numbers can be defined as the set of numbers that can be obtained from the number 0 by repeated application of the operation (x, y) ↦ (x + y + 1)/2, applied to pairs x, y that have already been obtained and for which |x − y| < 1.
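A minimal brute-force sketch of this definition in Python (the function name, the bound, and the fixed number of rounds are illustrative choices, not part of the formal definition):

```python
from fractions import Fraction

def fusible_numbers(limit, rounds=3):
    """Enumerate fusible numbers below `limit` by starting from {0} and
    repeatedly applying (x, y) -> (x + y + 1)/2 to pairs with |x - y| < 1."""
    nums = {Fraction(0)}
    for _ in range(rounds):
        new = set(nums)
        for x in nums:
            for y in nums:
                if abs(x - y) < 1:
                    z = (x + y + 1) / 2
                    if z < limit:
                        new.add(z)
        nums = new
    return sorted(nums)

# 3/4 (the 45-second measurement with one-minute fuses) appears:
print([str(q) for q in fusible_numbers(Fraction(3, 2))])
# ['0', '1/2', '3/4', '7/8', '1', '9/8', '5/4', '11/8']
```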
The fusible numbers include all of the non-negative integers, and are a well-ordered subset of the dyadic rational numbers, the fractions whose denominators are powers of two. Being well-ordered means that, if one chooses a decreasing sequence of fusible numbers, the sequence must always be finite. Among the well-ordered sets, their ordering can be classified as ε₀, an epsilon number (a special case of the infinite ordinal numbers). Because they are well-ordered, for each integer n there is a unique smallest fusible number among the fusible numbers larger than n; it has the form n + 1/2^k for some integer k. This number k grows very rapidly as a function of n, so rapidly that already for n = 3 it is far too large to write out in ordinary notation, calling for Knuth's up-arrow notation for large numbers. The existence of this number k, for each n, cannot be proven in Peano arithmetic.
Lighting more than two points of a fuse
If the rules of the fuse-burning puzzles are interpreted to allow fuses to be lit at more points than their ends, a larger set of amounts of time can be measured. For instance, if a fuse is lit in such a way that, while it burns, it always has three ends burning (for instance, by lighting one point in the middle and one end, and then lighting another end or another point in the middle whenever one or two of the current lit points burn out) then it will burn for 1/3 of a unit of time rather than a whole unit. By representing a given amount of time as a sum of unit fractions, and successively burning fuses with multiple lit points so that they last for each unit fraction amount of time, it is possible to measure any rational number of units of time. However, keeping the desired number of flames lit, even on a single fuse, may require an infinite number of re-lighting steps.
The problem of representing a given rational number as a sum of unit fractions is closely related to the construction of Egyptian fractions, sums of distinct unit fractions; however, for fuse-burning problems there is no need for the fractions to be different from each other. Using known methods for Egyptian fractions one can prove that measuring a fractional amount of time x/y, with x < y, needs only O(√(log y)) fuses (expressed in big O notation). An unproven conjecture of Paul Erdős on Egyptian fractions suggests that fewer fuses, O(log log y), may always be enough.
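A minimal sketch of the greedy (Fibonacci–Sylvester) method for producing such a unit-fraction representation; under the interpretation above, each term 1/k corresponds to one fuse kept burning with k simultaneous flames:

```python
from fractions import Fraction
from math import ceil

def greedy_unit_fractions(x, y):
    """Decompose x/y (with 0 < x < y) into unit fractions by always
    taking the largest unit fraction that still fits."""
    remainder = Fraction(x, y)
    terms = []
    while remainder > 0:
        k = ceil(1 / remainder)
        terms.append(Fraction(1, k))
        remainder -= Fraction(1, k)
    return terms

print(greedy_unit_fractions(5, 6))  # [Fraction(1, 2), Fraction(1, 3)]
```

Greedy decompositions can produce very large denominators; the bounds quoted above come from more careful constructions.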
History
In a booklet on these puzzles titled Shoelace Clock Puzzles, created by Dick Hess for a 1998 Gathering 4 Gardner conference, Hess credits Harvard statistician Carl Morris as his original source for these puzzles.
See also
Water pouring puzzle, another class of puzzles involving the combination of measurements
References
Mathematical problems
Recreational mathematics | Rope-burning puzzle | Mathematics | 1,137 |
34,942,573 | https://en.wikipedia.org/wiki/Sara%20Billey | Sara Cosette Billey is an American mathematician working in algebraic combinatorics. She is known for her contributions on Schubert polynomials, singular loci of Schubert varieties, Kostant polynomials, and Kazhdan–Lusztig polynomials often using computer verified proofs. She is currently a professor of mathematics at the University of Washington.
Education and career
Billey did her undergraduate studies at the Massachusetts Institute of Technology, graduating in 1990.
She earned her Ph.D. in mathematics in 1994 from the University of California, San Diego, under the joint supervision of Adriano Garsia and Mark Haiman. She returned to MIT as a postdoctoral researcher with Richard P. Stanley, and continued there as an assistant and associate professor until 2003, when she moved to the University of Washington.
Recognition
In 2012, she became a fellow of the American Mathematical Society.
Selected publications
Books
Articles
References
External links
Massachusetts Institute of Technology School of Science alumni
University of California, San Diego alumni
Massachusetts Institute of Technology faculty
University of Washington faculty
Combinatorialists
20th-century American mathematicians
20th-century American women mathematicians
21st-century American mathematicians
21st-century American women mathematicians
Living people
Fellows of the American Mathematical Society
People from Alva, Oklahoma
Mathematicians from Oklahoma
Recipients of the Presidential Early Career Award for Scientists and Engineers
Year of birth missing (living people) | Sara Billey | Mathematics | 268 |
13,856,857 | https://en.wikipedia.org/wiki/List%20of%20engineering%20branches | Engineering is the discipline and profession that applies scientific theories, mathematical methods, and empirical evidence to design, create, and analyze technological solutions, balancing technical requirements with concerns or constraints on safety, human factors, physical limits, regulations, practicality, and cost, and often at an industrial scale. In the contemporary era, engineering is generally considered to consist of the major primary branches of biomedical engineering, chemical engineering, civil engineering, electrical engineering, materials engineering and mechanical engineering. There are numerous other engineering sub-disciplines and interdisciplinary subjects that may or may not be grouped with these major engineering branches.
Biomedical engineering
Biomedical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes).
Chemical engineering
Chemical engineering is the application of chemical, physical, and biological sciences to developing technological solutions from raw materials or chemicals.
Civil engineering
Civil engineering comprises the design, construction, and maintenance of the physical and natural built environments.
Electrical engineering
Electrical engineering comprises the study and application of electricity, electronics and electromagnetism.
Materials engineering
Materials engineering is the application of material science and engineering principles to understand the properties of materials. Material science emerged in the mid-20th century, grouping together fields which had previously been considered unrelated. Materials engineering is thus much more interdisciplinary than the other major engineering branches.
Mechanical engineering
Mechanical engineering comprises the design and analysis of heat and mechanical power for the operation of machines and mechanical systems.
Interdisciplinary
See also
Outline of engineering
History of engineering
Glossary of engineering: A–L
Glossary of engineering: M–Z
:Category:Engineering disciplines
Engineering techniques:
Computer-aided engineering
Model-driven engineering
Concurrent engineering
Engineering analysis
Engineering design process (engineering method)
Engineering mathematics
Engineering notation
Engineering optimization
Engineering statistics
Front-end engineering
Knowledge engineering
Life-cycle engineering
Redundancy (engineering)
Reverse engineering
Sustainable engineering
Traditional engineering
Value engineering
Non-technical fields:
Cost engineering
Demographic engineering
Engineering management
Financial engineering
Market engineering
Memetic engineering
Political engineering
Sales engineering
Social engineering (political science)
Social engineering (security)
Tariff engineering
Exploratory engineering – the design and analysis of hypothetical models of systems not feasible with current technologies
Astronomical engineering
Megascale engineering
Planetary engineering
Stellar engineering
Engineering studies – the study of engineers
Engineering economics
Engineering ethics
Engineering law
Engineering psychology
Philosophy of engineering
References
Branches | List of engineering branches | Engineering | 462 |
9,301,973 | https://en.wikipedia.org/wiki/Ozone%20monitoring%20instrument | The ozone monitoring instrument (OMI) is a nadir-viewing visible and ultraviolet spectrometer aboard the NASA Aura spacecraft, which is part of the satellite constellation A-Train. In this group of satellites, Aura flies in formation about 15 minutes behind the Aqua satellite; both orbit the Earth in a polar Sun-synchronous pattern that provides nearly global coverage in one day. The Aura satellite was launched on July 15, 2004, and OMI has collected data since August 9, 2004.
From a technical point of view, the OMI instrument uses hyperspectral imaging to observe solar backscatter radiation over a spectral range that covers the visible and ultraviolet. Its spectral capabilities were designed to meet specific accuracy and precision requirements for retrievals of total ozone amounts, and they also provide accurate radiometric and wavelength self-calibration over the long term of the project.
OMI project
The OMI project is a cooperation between the Netherlands Agency for Aerospace Programmes (NIVR), the Finnish Meteorological Institute (FMI) and the National Aeronautics and Space Agency (NASA).
The OMI project was carried out under the direction of the NIVR and financed by the Dutch Ministries of Economic Affairs, Transport and Public Works and the Ministry of Education and Science. The instrument was built by Dutch Space in co-operation with Netherlands Organisation for Applied Scientific Research Science and Industry and Netherlands Institute for Space Research. The Finnish industry supplied the electronics. The scientific part of the OMI project is managed by KNMI (principal investigator Prof. Dr. P. F. Levelt now at the Delft University of Technology), in close co-operation with NASA and the Finnish Meteorological Institute.
Scientific objectives and atmospheric monitoring
One of the scientific objectives of OMI is to measure trace gases: ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), formaldehyde (HCHO), BrO, and OClO. In addition, OMI sensors can distinguish between aerosol types, such as smoke, dust, and sulfates, and can measure cloud pressure and cloud coverage, which provide data to derive tropospheric ozone. In that regard OMI follows in the heritage of TOMS, SBUV, GOME, SCIAMACHY, and GOMOS. On top of that, OMI aims to detect emissions in volcanic eruptions with up to at least 100 times more sensitivity than TOMS. The Ozone Monitoring Instrument has proved a useful platform for monitoring other trace gases such as glyoxal, variables such as surface UV radiation, and total column estimates of water vapor, NO2 and ozone. It has been used in operational services by the European Centre for Medium-range Weather Forecasts (ECMWF), the US National Oceanic and Atmospheric Administration (NOAA) for ozone and air quality forecasts, and the Volcanic Ash Advisory Centers (VAACs) for the rerouting of aircraft in case of a volcanic eruption.
Instrument Information
The instrument observes Earth's backscattered radiation and uses two imaging grating spectrometers; each grating spectrometer is coupled to a CCD detector with 780×576 (spectral × spatial) pixels. The instrument can operate in two different modes: the normal operational mode, where a single pixel in the observation has a spatial resolution of 13×24 km² at nadir (straight down), and the zoom mode, where this resolution is increased to 13×12 km².
Spectral Information
OMI measurements cover a spectral region of 264–504 nm (nanometers) with a spectral resolution between 0.42 nm and 0.63 nm and a nominal ground footprint of 13 × 24 km² at nadir. This spectral coverage is divided into three channels: two in the ultraviolet range and one in the visible spectrum. Note that the ground pixel size of the UV-1 channel is twice as large in the swath direction compared to the other two channels; this optical design of the UV channel was chosen to reduce stray light in this wavelength range.
Orbital Information
The Aura satellite orbits at an altitude of 705 km in a sun-synchronous polar orbit with an exact 16-day repeat cycle and a local equator crossing time of 13:45 (1:45 p.m.) on the ascending node. The orbital inclination is 98.1 degrees, providing latitudinal coverage from 82° N to 82° S. OMI is a wide-field imaging spectrometer with a 114° across-track viewing angle range that provides a 2600 km wide swath, enabling measurements with daily global coverage.
Calibration and Validation
The discussion of the calibration and validation processes began before the launch of the Aura satellite. Once the instrument was in orbit, the calibration information was published, showing specific details of the absolute radiometric calibration, the bi-directional scattering distribution function (BSDF) calibration and the spectral calibration carried out. The instrument is also equipped with an internal white light source for detector calibration purposes. Validation, which aims to assess the inherent uncertainties in the instrument's satellite data products together with the retrieval algorithms used for each data product, has been carried out continuously since the launch of the Aura satellite. Validated products include the total ozone column, NO2, and ozone vertical profiles.
In-flight performance
One important aspect of satellite instruments for scientific measurements is the evolution of their performance during the life-cycle of the sensors, as well as the continuous evaluation of the quality of the data products. In the case of an instrument like OMI, the main aspects to consider are radiometric and spectral stability, the row anomaly, and detector degradation. On the first aspect, the radiometric degradation of OMI ranges from ∼2% in the UV channels to ∼0.5% in the VIS channel, which is much lower than for any other similar satellite instrument. The wavelength calibration of the instrument remains stable to 0.005–0.020 nm, indicating high wavelength stability. A row anomaly was detected, probably due to a partial blocking of the instrument's field of view; warning flags were included in the raw products so that the affected rows can be excluded and the quality of the retrieval products maintained. Further information on the long-term calibration indicated in 2017 that the instrument would be able to provide useful science data for another 5 to 10 years.
Scientific relevance
The OMI project has been monitoring atmospheric composition and providing measurements widely used in atmospheric chemistry research. The fact that it has been operational for more than a decade also makes it useful for trend monitoring. The reference describing the first 14 years of OMI details the research data products provided by NASA, KNMI, FMI and SAO. According to these authors, beyond the initial goals, OMI has been important for its high-resolution NO2 and SO2 measurements (OMI is the first instrument able to obtain daily global coverage combined with such spatial resolution), and because top-down studies have allowed for source attribution analyses.
Awards
The International Team of the Ozone Monitoring Instrument has received several awards for its contributions to a better understanding of the Earth system:
USGS 2018 Pecora Award. The Pecora Award is given annually to recognize individuals or teams using remote sensing in the field of Earth science. It considers not only the scientific role of the work but also its role in informing decision makers and supporting responses to natural or human-induced disasters.
2021 AMS Special Award. A broad description of this award to the OMI International Team is given in an AMS video.
Contributions to scientific research
Assessment of the Montreal Protocol: the instrument has proved stable enough to provide a long-term data record for monitoring the ozone layer, which is of particular interest for evaluating the possible recovery from ozone depletion in the southern hemisphere.
Global concentrations of trace gases: the OMI data show a steady decline in concentrations of NO2 in the United States, Europe, and Japan, whereas in China, first strong increases were observed, followed by decreases after 2014.
Absorbing aerosol that can cause warming: OMI can provide information as from its ultraviolet (UV) channel it is possible to derive such absorbing capacity.
Long-term data record of tropospheric ozone has been established: tropospheric ozone assessment is important as ozone is the third main anthropogenic greenhouse gas, and the fraction of ozone in the troposphere can be derived from OMI data, either by itself or in combination with other instruments.
OMI formaldehyde retrievals indicate increases of this trace gas over India and China, and a downward trend over the Amazonian forest, spatially correlated with areas affected by deforestation.
OMI has been the first satellite instrument to be used for daily monitoring of volcanic emissions.
References
External links
OMI webpage at NASA.gov
OMI webpage at KNMI.nl
Tropospheric Emission Monitoring Internet Service (TEMIS)
https://docserver.gesdisc.eosdis.nasa.gov/repository/Mission/OMI/3.3_ScienceDataProductDocumentation/3.3.2_ProductRequirements_Designs/README.OMI_DUG.pdf
Scientific instruments
Spectrometers
Ozone depletion
Weather imaging satellite sensors | Ozone monitoring instrument | Physics,Chemistry,Technology,Engineering | 1,914 |
40,904,144 | https://en.wikipedia.org/wiki/Rocketry%20Organization%20of%20California | Rocketry Organization of California (ROC) is one of the world's oldest and biggest amateur high power rocket clubs. Monthly one day launches are held the second Saturday of each month. Anyone interested in hobby rocketry is welcome, and spectators are always free.
The club's web page is rocstock.org.
Launches are normally held at Lucerne Dry Lake in California. The launch site is a mile west of State Route 247 and 2 miles north of State Route 18. Launches are held by permit on BLM land, and with a Certificate of Authorization from the FAA.
Launches are for hobby rockets using commercial rocket motors listed by the Office of the State Fire Marshal, and range from small model rockets up to very large high power rockets. With FAA approval, rockets can be launched up to 7,000 feet above ground level.
Since 1995, two multi-day launches named ROCstock have been held each year, in April and November; the tag line 'Love, Peace and Rockets' is used as an homage to Woodstock.
The annual Tripoli launch LDRS (Large Dangerous Rocket Ships) has been hosted on 5 occasions by ROC. Four of these were at Lucerne Dry Lake, CA (20, 29, 35, and 41), and the other (22) was in Jean, NV.
The National Association of Rocketry's "National Sport Launch" has been hosted twice by ROC at Lucerne Dry Lake, CA.
ROC is registered as a California nonprofit corporation with federal and state nonprofit status.
Tripoli Prefecture: 48
NAR Section: 538
Members and their projects
Wedge Oldham's projects - Nike Black Brant
Dirk Gates - Gates Brothers Rocketry
External links
ROC web site
ROC Facebook group
Tripoli Rocketry Association web site
LDRS web site
National Association of Rocketry web site
References
Rocketry
Clubs and societies in California | Rocketry Organization of California | Engineering | 363 |
13,038,858 | https://en.wikipedia.org/wiki/EMR3 | EGF-like module-containing mucin-like hormone receptor-like 3 is a protein encoded by the ADGRE3 gene. EMR3 is a member of the adhesion GPCR family.
Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain.
EMR3 expression is restricted to monocytes/macrophages, myeloid dendritic cells, and mature granulocytes in humans. Transcription of the EMR3 gene results in two alternatively spliced forms: a surface protein with extracellular, 7TM, and intracellular domains, as well as a truncated soluble form of only the extracellular domain. Mice lack the Emr3 gene, as they do Emr2.
Function
The protein may play a role in myeloid-myeloid interactions during immune and inflammatory responses.
Ligands
A potential ligand of EMR3 is likely expressed on human macrophages and activated neutrophils.
References
External links
GPCR consortium
G protein-coupled receptors | EMR3 | Chemistry | 239 |
4,759,830 | https://en.wikipedia.org/wiki/Photochromic%20lens | A photochromic lens is an optical lens that darkens on exposure to light of sufficiently high frequency, most commonly ultraviolet (UV) radiation. In the absence of activating light, the lenses return to their clear state. Photochromic lenses may be made of glass, polycarbonate, or another plastic. Glass lenses use visible light to darken. They are principally used in glasses that are dark in bright sunlight, but clear, or more rarely, lightly tinted, in low ambient light conditions. They darken significantly within about a minute of exposure to bright light and take somewhat longer to clear. A range of clear and dark transmittances is available.
In one sort of technology, molecules of silver chloride or another silver halide are embedded in photochromic lenses. They are transparent to visible light without significant ultraviolet component, which is normal for artificial lighting. In another sort of technology, organic photochromic molecules, when exposed to ultraviolet (UV) rays as in direct sunlight, undergo a chemical process that causes them to change shape and absorb a significant percentage of the visible light, i.e., they darken. These processes are reversible; once the lens is removed from strong sources of UV rays the photochromic compounds return to their transparent state.
Invention
Photochromic lenses were developed by William H. Armistead and Stanley Donald Stookey at the Corning Glass Works Inc. in the 1960s.
Technical details
Mechanism
The glass version of these lenses achieves their photochromic properties through the embedding of microcrystalline silver halides (usually silver chloride) in a glass substrate. Plastic photochromic lenses use organic photochromic molecules (for example oxazines and naphthopyrans) to achieve the reversible darkening effect. These lenses darken when exposed to ultraviolet light of the intensity present in sunlight, but not in artificial light.
In glass lenses, when in the presence of UV-A light (wavelengths of 320–400 nm) electrons from the glass combine with the colourless silver cations to form elemental silver. Because elemental silver is visible, the lenses appear darker.
AgCl + e⁻ ⇌ Ag + Cl⁻
Back in the shade, this reaction is reversed. The silver returns to its original ionic state, and the lenses become clear.
Ag → Ag⁺ + e⁻
Ag⁺ + Cl⁻ → AgCl
With the photochromic material dispersed in the glass substrate, the degree of darkening depends on the thickness of glass, which poses problems with variable-thickness lenses in prescription glasses. With plastic lenses, the material is typically embedded into the surface layer of the plastic in a uniform thickness of up to 150 μm.
Variables
Typically, photochromic lenses darken substantially in response to UV light in less than one minute, and continue to darken a little more over the next fifteen minutes. The lenses begin to clear in the absence of UV light, and will be noticeably lighter within two minutes, mostly clear within five minutes, and fully back to their non-exposed state in about fifteen minutes. A report by the Institute of Ophthalmology at the University College London suggested that at their clearest photochromic lenses can absorb up to 20% of ambient light.
Because photochromic compounds fade back to their clear state by a thermal process, the higher the temperature, the less dark photochromic lenses will be. This thermal effect is called "temperature dependency" and prevents these devices from achieving true sunglass darkness in very hot weather. Conversely, photochromic lenses will get very dark in cold weather conditions. Once inside, away from the triggering UV light, the cold lenses take longer to regain their transparency than warm lenses.
A number of sunglass manufacturers and suppliers including INVU, Bikershades, Tifosi, Intercast, Oakley, ZEISS, Serengeti Eyewear, and Persol provide tinted lenses that use photochromism to go from a dark to a darker state. They are typically used for outdoor sunglasses rather than as general-purpose lenses.
See also
Photosensitive glass
Essilor
References
External links
.
Corrective lenses
Chromism
Glass compositions
Glass chemistry
Glass applications | Photochromic lens | Physics,Chemistry,Materials_science,Engineering | 869 |
1,065,209 | https://en.wikipedia.org/wiki/Worldspan | Worldspan is a provider of travel technology and content and a part of the Travelport GDS business. It offers worldwide electronic distribution of travel information, Internet products and connectivity, and e-commerce capabilities for travel agencies, travel service providers and corporations. Its primary system is commonly known as a Global Distribution System (GDS), which is used by travel agents and travel related websites to book airline tickets, hotel rooms, rental cars, tour packages and associated products. Worldspan also hosts IT services and product solutions for major airlines.
Recent events
In December 2006, Travelport, owner of the Galileo GDS, Gullivers Travel Associates (GTA) and a controlling share in Orbitz, agreed to acquire Worldspan. However, at the time, management of Travelport did not commit to the eventual merging of the two GDS systems, saying that they were considering all options, including running both systems in parallel. On August 21, 2007, the acquisition was completed for $1.4 billion and Worldspan became a part of Travelport GDS, which also includes Galileo and other related businesses. On September 28, 2008, the Galileo and Apollo GDS were moved from the Travelport datacenter in Denver, Colorado to the Worldspan datacenter in Atlanta, Georgia (although they continue to be run as separate systems from the Worldspan GDS).
In 2012, Worldspan customers were migrated from the TPF-based FareSource pricing engine to Travelport's Linux-based 360 Fares pricing engine already used by Galileo and Apollo. Although the three systems share a common pricing platform, they continue to operate as separate GDS.
History
Worldspan was formed in early 1990 by Delta Air Lines, Northwest Airlines, and TWA to operate and sell its GDS services to travel agencies worldwide. Worldspan operated very effectively and profitably, successfully expanding its business in markets throughout North America, South America, Europe, and Asia. As a result, in mid-2003, Worldspan was sold by its owner airlines to Citigroup Venture Capital and Ontario Teachers' Pension Fund which in turn sold the business to Travelport in 2007.
Worldspan was formed in 1990 by combining the PARS partnerships companies (owned by TWA and Northwest Airlines, Inc.) and DATAS II, a division of Delta Air Lines, Inc. One of Worldspan’s predecessors – TWA PARS – became the first GDS to be installed in travel agencies in 1976. ABACUS, an Asian company owned by a number of Asian airlines, owned a small portion of Worldspan, and Worldspan owned a small portion of Abacus. Worldspan and Abacus entered into a series of business and technology relationships. These relationships were terminated after Abacus engaged in fraudulent and deceptive practices, for which Worldspan received a sizable judgement in an arbitration in London.
See also
Amadeus IT Group
List of global distribution systems
Passenger Name Record
Code sharing
Travel technology
References
Airline tickets
Travel technology
Computer reservation systems | Worldspan | Technology | 603 |
70,716,297 | https://en.wikipedia.org/wiki/Spatial%20anxiety | Spatial anxiety (sometimes also referred to as spatial orientation discomfort) is a sense of anxiety an individual experiences while processing environmental information contained in one's geographical space (in the sense of Montello's classification of space), with the purpose of navigation and orientation through that space (usually unfamiliar, or very little known). Spatial anxiety is also linked to the feeling of stress regarding the anticipation of a spatial-content related performance task (such as mental rotation, spatial perception, spatial visualisation, object location memory, dynamic spatial ability). Particular cases of spatial anxiety can result in a more severe form of distress, as in agoraphobia.
Classification
It is still being investigated whether spatial anxiety should be considered one solid, concrete ("unitary") construct (encompassing the experience of anxiety due to any spatial task), or whether it should be considered a "multifactorial construct" (including various subcomponents) that attributes the experience of anxiety to several aspects. Evidence has shown that spatial anxiety seems to be a "multifactorial construct" with two components: anxiety regarding navigation, and anxiety regarding the demands of rotation and visualization skills.
Gender and further individual differences
Gender differences appear to be among the most prominent differences in spatial anxiety, as well as in navigational strategies. Evidence shows higher levels of spatial anxiety in women, who tend to choose route strategies, as opposed to men, who tend to choose orientation strategies (a choice which, in turn, has been found to be negatively related to spatial anxiety).
Spatial anxiety levels also seem to vary across different age groups. Evidence has shown that spatial anxiety appears early on, during the elementary school years, with anxiety varying in level between individuals but tending to remain stable, with minimal fluctuations, across the life span.
Measuring instruments
There are two primary ways of measuring spatial anxiety. One of them is Lawton's Spatial Anxiety Scale, which was dominant during the era of its creation. The other is the Child Spatial Anxiety Questionnaire, which was the first to assess spatial anxiety levels related to spatial abilities other than navigation and map reading.
Lawton's Spatial Anxiety Scale
The scale measures the degree of anxiety regarding the individual's experience and performance in tasks that assess the processing of information related to the environment, such as way-finding and navigation.
In total there are eight statements. Some examples are "leaving a store that you have been to for the first time and deciding which way to turn to get to a destination" and "finding your way around in an unfamiliar mall". The rating takes place on a 5-point scale, expressing the degree of anxiety with a continuum from "not at all" to "very much".
Child Spatial Anxiety Questionnaire
The Child Spatial Anxiety Questionnaire was designed for young children and attempts to assess anxiety related to a wider (than usually) range of spatial abilities. Children are asked to report the level of anxiety they feel while in particular spatial abilities-demanding situations. In total it includes eight situations. Some examples are: "how do you feel being asked to say which direction is right or left?", "how do you feel when you are asked to point to a certain place on a map, like this one?", "how do you feel when you have to solve a maze like this in one minute?".
In the original version, the rating takes place on a 3-point scale which includes three different faces; each facial expression, representing a different emotional state (getting from "calm", to "somewhat nervous", to "very nervous"). The revised version assessment takes place on a 5-point scale, with two more facial expressions added.
Cognitive maps in individuals with spatial anxiety
Self-reported spatial anxiety is negatively correlated with performance in spatial tasks, both small-scale – assessing mental rotation and spatial visualization – and large-scale – environment learning – with participants scoring higher on the spatial anxiety scale showing lowered performance. Spatial anxiety is also negatively correlated with navigation proficiency ratings on self-reported sense-of-direction measures, as well as with orientation (map-based) and route (egocentric) strategies. Additionally, as anxiety has been shown to influence performance on tasks that utilize working memory resources, working memory is bound to be affected by spatial anxiety, especially visuo-spatial working memory.
There has been evidence demonstrating the negative relationship between spatial anxiety and environmental learning ability. For example, spatial anxiety is found to induce more errors in directional pointing tasks. In an experiment where participants were required to use directional instructions to move a toy car in a virtual three-dimensional environment, those with higher reported spatial anxiety performed with less accuracy. As spatial anxiety increases, pointing accuracy decreases, and navigation errors increase significantly. This effect has also been shown in patients with cognitive impairment. Early detection might therefore allow for timely therapeutic intervention, e.g., in Alzheimer's disease.
Moreover, spatial anxiety has been shown to relate to gender differences in spatial abilities. Generally, women report higher levels of spatial anxiety than men. The use of orientation (map-view-based) strategies in indoor and/or outdoor environments can be associated with lower levels of spatial anxiety. Women tend to report using route strategies more than orientation strategies, whereas men report the opposite. Spatial anxiety also contributes to gender differences in environment learning. Recent findings in university students indicate that men rely more than women upon distal gradient cues that provide information on both orientation and direction (i.e., hill lines), whereas women depend upon proximal pinpoint (i.e., landmark) cues more than other cue types when identifying a visual scene. The addition of an exogenous stressor can differentially alter the impact of spatial anxiety on performance in men and women by producing a higher perception of stress in women than in men, which results in decreased performance in women. The findings suggest that gender differences in distal gradient and new cue perception varied based on stress condition.
Some studies have discovered that acute stress can reduce memory for spatial locations, and people reporting difficulties in memorizing landmarks and directions when they are displaced also report higher levels of spatial anxiety. In addition, it has been demonstrated that people with Agoraphobia Disorder have reduced visuo-spatial working memory when they are required to process multiple spatial elements simultaneously. Specifically, in tasks where they were required to navigate using the landmarks independent of themselves (allocentric coordinates), visuo-spatial working memory deficits were shown to hinder their performance.
Bilateral vestibulopathy can cause higher levels of spatial anxiety, potentially related to hippocampal atrophy. Overall, the role of the vestibular system on spatial anxiety is not yet fully understood, but vestibular function plays a relevant role in emotion processing and the development of (vertigo-related) anxiety, as well as in spatial perception.
Possible explanations for the negative correlation between spatial anxiety and the ability to form cognitive map include: individuals lacking sense of their own position with respect to the external environment are more likely to get anxious when faced with unplanned navigation, and the anxiety about becoming lost itself may reduce the ability to attend to cues necessary for way-finding strategizing.
The influence of spatial anxiety can be counteracted by positive beliefs, such as spatial self-efficacy and confidence (i.e. as the belief that one will do well in cognitive tasks). For example, it has been demonstrated that confidence was a predictive factor for accuracy in mental rotation tasks, with participants being more accurate when they were more confident. When this factor was manipulated, the performance was significantly affected. Furthermore, having more self-perception of spatial self-efficacy has a positive role in supporting environment learning beyond the role of gender.
See also
Spatial cognition
Agoraphobia
Navigation
Sex differences in psychology
References
External links
Child Spatial Anxiety Questionnaire (CSAQ) (northwestern.edu)
SpatialAnxietyQuestionnaire A sample of the CSAQ's items
Anxiety
Spatial cognition
Navigation
Orientation (geometry)
Agoraphobia | Spatial anxiety | Physics,Mathematics | 1,630 |
65,088,597 | https://en.wikipedia.org/wiki/Moto%20G9 | Moto G9 (stylized by Motorola as moto g9) is a series of Android smartphones developed by Motorola Mobility, a subsidiary of Lenovo. It is the ninth generation of the Moto G family.
Specifications
Some specifications such as wireless technologies and storage will differ between regions.
References
Motorola smartphones
Mobile phones introduced in 2020
Android (operating system) devices
Mobile phones with multiple rear cameras | Moto G9 | Technology | 81 |
1,112,273 | https://en.wikipedia.org/wiki/Heat%20of%20combustion | The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it.
The calorific value is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities:
energy/mole of fuel
energy/mass of fuel
energy/volume of the fuel
There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like H2O are allowed to condense.
The high heat values are conventionally measured with a bomb calorimeter. Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heat of formation ΔH°f of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion).
For a fuel of composition CcHhOoNn, the (higher) heat of combustion is usually 418 kJ/mol × (c + 0.3h − 0.5o) to a good approximation (±3%), though it gives poor results for some compounds such as (gaseous) formaldehyde and carbon monoxide, and can be significantly off if o + n > c, such as for glycerine dinitrate, C3H6N2O7.
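As a quick test of this approximation (a sketch; the function name is illustrative), ethanol C2H6O gives a value close to its measured heat of combustion of roughly 1367 kJ/mol:

```python
def approx_heat_of_combustion(c, h, o):
    """Approximate (higher) heat of combustion, kJ/mol, for a fuel
    CcHhOoNn, using the rule of thumb quoted above."""
    return 418 * (c + 0.3 * h - 0.5 * o)

print(approx_heat_of_combustion(2, 6, 1))  # ethanol: ~1379 kJ/mol
```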
By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. That is, the heat of combustion, ΔH°comb, is the heat of reaction of the following process:
CcHhOoNn (std.) + (c + h/4 − o/2) O2 (g) → c CO2 (g) + (h/2) H2O (l) + (n/2) N2 (g)
Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and SO2 or SO3 gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids, respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water.
Ways of determination
Gross and net
Zwolinski and Wilhoit defined, in 1972, "gross" and "net" values for heats of combustion. In the gross definition the products are the most stable compounds, e.g. H2O (l), Br2 (l), I2 (s) and H2SO4 (l). In the net definition the products are the gases produced when the compound is burned in an open flame, e.g. H2O (g), Br2 (g), I2 (g) and SO2 (g). In both definitions the products for C, F, Cl and N are CO2 (g), HF (g), HCl (g) and N2 (g), respectively.
Dulong's Formula
The heating value of a fuel can be calculated with the results of ultimate analysis of fuel. From analysis, percentages of the combustibles in the fuel (carbon, hydrogen, sulfur) are known. Since the heat of combustion of these elements is known, the heating value can be calculated using Dulong's Formula:
HHV [kJ/g]= 33.87mC + 122.3(mH - mO ÷ 8) + 9.4mS
where mC, mH, mO, mN, and mS are the contents of carbon, hydrogen, oxygen, nitrogen, and sulfur on any (wet, dry or ash free) basis, respectively.
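A minimal sketch of Dulong's formula in code (the function name and the example coal composition are assumptions for illustration):

```python
def dulong_hhv(m_c, m_h, m_o, m_s):
    """Higher heating value in kJ/g from Dulong's formula; arguments
    are mass fractions (0-1) of carbon, hydrogen, oxygen and sulfur."""
    return 33.87 * m_c + 122.3 * (m_h - m_o / 8) + 9.4 * m_s

# Illustrative dry-basis composition for a bituminous coal:
print(dulong_hhv(m_c=0.75, m_h=0.05, m_o=0.08, m_s=0.02))  # ~30.5 kJ/g
```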
Higher heating value
The higher heating value (HHV; gross energy, upper heating value, gross calorific value GCV, or higher calorific value; HCV) indicates the upper limit of the available thermal energy produced by a complete combustion of fuel. It is measured as a unit of energy per unit mass or volume of substance. The HHV is determined by bringing all the products of combustion back to the original pre-combustion temperature, including condensing any vapor produced. Such measurements often use a standard temperature of 25 °C (77 °F). This is the same as the thermodynamic heat of combustion since the enthalpy change for the reaction assumes a common temperature of the compounds before and after combustion, in which case the water produced by combustion is condensed to a liquid. The higher heating value takes into account the latent heat of vaporization of water in the combustion products, and is useful in calculating heating values for fuels where condensation of the reaction products is practical (e.g., in a gas-fired boiler used for space heat). In other words, HHV assumes all the water component is in liquid state at the end of combustion (in the products of combustion) and that heat delivered at temperatures below 150 °C can be put to use.
Lower heating value
The lower heating value (LHV; net calorific value; NCV, or lower calorific value; LCV) is another measure of available thermal energy produced by a combustion of fuel, measured as a unit of energy per unit mass or volume of substance. In contrast to the HHV, the LHV considers energy losses such as the energy used to vaporize water, although its exact definition is not uniformly agreed upon. One definition is simply to subtract the heat of vaporization of the water from the higher heating value. This treats any H2O formed as a vapor that is released as waste. The energy required to vaporize the water is therefore lost.
LHV calculations assume that the water component of a combustion process is in vapor state at the end of combustion, as opposed to the higher heating value (HHV) (a.k.a. gross calorific value or gross CV) which assumes that all of the water in a combustion process is in a liquid state after a combustion process.
Another definition of the LHV is the amount of heat released when the products are cooled to 150 °C. This means that the latent heat of vaporization of water and other reaction products is not recovered. It is useful in comparing fuels where condensation of the combustion products is impractical, or heat at a temperature below 150 °C cannot be put to use.
One definition of lower heating value, adopted by the American Petroleum Institute (API), uses a reference temperature of 60 °F (15.56 °C).
Another definition, used by Gas Processors Suppliers Association (GPSA) and originally used by API (data collected for API research project 44), is the enthalpy of all combustion products minus the enthalpy of the fuel at the reference temperature (API research project 44 used 25 °C. GPSA currently uses 60 °F), minus the enthalpy of the stoichiometric oxygen (O2) at the reference temperature, minus the heat of vaporization of the vapor content of the combustion products.
The definition in which the combustion products are all returned to the reference temperature is more easily calculated from the higher heating value than when using other definitions and will in fact give a slightly different answer.
Gross heating value
Gross heating value accounts for water in the exhaust leaving as vapor, as does LHV, but gross heating value also includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal, which will usually contain some amount of water prior to burning.
Measuring heating values
The higher heating value is experimentally determined in a bomb calorimeter. The combustion of a stoichiometric mixture of fuel and oxidizer (e.g. two moles of hydrogen and one mole of oxygen) in a steel container at 25 °C is initiated by an ignition device and the reactions allowed to complete. When hydrogen and oxygen react during combustion, water vapor is produced. The vessel and its contents are then cooled to the original 25 °C and the higher heating value is determined as the heat released between identical initial and final temperatures.
When the lower heating value (LHV) is determined, cooling is stopped at 150 °C and the reaction heat is only partially recovered. The limit of 150 °C is based on acid gas dew-point.
Note: Higher heating value (HHV) is calculated with the product of water being in liquid form while lower heating value (LHV) is calculated with the product of water being in vapor form.
Relation between heating values
The difference between the two heating values depends on the chemical composition of the fuel. In the case of pure carbon or carbon monoxide, the two heating values are almost identical, the difference being the sensible heat content of carbon dioxide between 150 °C and 25 °C (sensible heat exchange causes a change of temperature, while latent heat is added or subtracted for phase transitions at constant temperature. Examples: heat of vaporization or heat of fusion). For hydrogen, the difference is much more significant as it includes the sensible heat of water vapor between 150 °C and 100 °C, the latent heat of condensation at 100 °C, and the sensible heat of the condensed water between 100 °C and 25 °C. In all, the higher heating value of hydrogen is 18.2% above its lower heating value (142 MJ/kg vs. 120 MJ/kg). For hydrocarbons, the difference depends on the hydrogen content of the fuel. For gasoline and diesel the higher heating value exceeds the lower heating value by about 10% and 7%, respectively, and for natural gas about 11%.
A common method of relating HHV to LHV is:
HHV = LHV + Hv × (nH2O,out / nfuel,in)
where Hv is the heat of vaporization of water, nH2O,out is the number of moles of water vaporized and nfuel,in is the number of moles of fuel combusted.
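As a worked illustration of this relation, a minimal sketch for methane (the rounded HHV and Hv values are assumptions for the example, not figures from this article):

```python
# CH4 + 2 O2 -> CO2 + 2 H2O: two moles of water form per mole of fuel.
HHV = 890.0            # kJ/mol CH4, approximate higher heating value
Hv = 44.0              # kJ/mol, heat of vaporization of water near 25 °C
n_water_per_fuel = 2.0 # mol H2O per mol CH4

LHV = HHV - Hv * n_water_per_fuel
print(LHV)  # ~802 kJ/mol, about 10% below the HHV, as noted above
```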
Most applications that burn fuel produce water vapor, which is unused and thus wastes its heat content. In such applications, the lower heating value must be used to give a 'benchmark' for the process.
However, for true energy calculations in some specific cases, the higher heating value is correct. This is particularly relevant for natural gas, whose high hydrogen content produces much water, when it is burned in condensing boilers and power plants with flue-gas condensation that condense the water vapor produced by combustion, recovering heat which would otherwise be wasted.
Usage of terms
Engine manufacturers typically rate their engines' fuel consumption by the lower heating values since the exhaust is never condensed in the engine, and doing this allows them to publish more attractive numbers than are used in conventional power plant terms. The conventional power industry had used HHV (high heat value) exclusively for decades, even though virtually all of these plants did not condense exhaust either. American consumers should be aware that the corresponding fuel-consumption figure based on the higher heating value will be somewhat higher.
The difference between HHV and LHV definitions causes endless confusion when quoters do not bother to state the convention being used, since there is typically a 10% difference between the two methods for a power plant burning natural gas. For simply benchmarking part of a reaction the LHV may be appropriate, but HHV should be used for overall energy efficiency calculations if only to avoid confusion, and in any case, the value or convention should be clearly stated.
Accounting for moisture
Both HHV and LHV can be expressed in terms of AR (all moisture counted), MF and MAF (only water from combustion of hydrogen). AR, MF, and MAF are commonly used for indicating the heating values of coal:
AR (as received) indicates that the fuel heating value has been measured with all moisture- and ash-forming minerals present.
MF (moisture-free) or dry indicates that the fuel heating value has been measured after the fuel has been dried of all inherent moisture but still retaining its ash-forming minerals.
MAF (moisture- and ash-free) or DAF (dry and ash-free) indicates that the fuel heating value has been measured in the absence of inherent moisture- and ash-forming minerals.
Heat of combustion tables
Note
There is no difference between the lower and higher heating values for the combustion of carbon, carbon monoxide and sulfur since no water is formed during the combustion of those substances.
BTU/lb values are calculated from MJ/kg (1 MJ/kg = 430 BTU/lb).
Higher heating values of natural gases from various sources
The International Energy Agency reports the following typical higher heating values per Standard cubic metre of gas:
Algeria: 39.57MJ/Sm3
Bangladesh: 36.00MJ/Sm3
Canada: 39.00MJ/Sm3
China: 38.93MJ/Sm3
Indonesia: 40.60MJ/Sm3
Iran: 39.36MJ/Sm3
Netherlands: 33.32MJ/Sm3
Norway: 39.24MJ/Sm3
Pakistan: 34.90MJ/Sm3
Qatar: 41.40MJ/Sm3
Russia: 38.23MJ/Sm3
Saudi Arabia: 38.00MJ/Sm3
Turkmenistan: 37.89MJ/Sm3
United Kingdom: 39.71MJ/Sm3
United States: 38.42MJ/Sm3
Uzbekistan: 37.89MJ/Sm3
The lower heating value of natural gas is normally about 90% of its higher heating value. This table is in Standard cubic metres (1 atm, 15 °C); to convert to values per Normal cubic metre (1 atm, 0 °C), multiply the table above by 1.0549.
See also
Adiabatic flame temperature
Cost of electricity by source
Electrical efficiency
Energy content of fuel
Energy conversion efficiency
Energy density
Energy value of coal
Exothermic reaction
Figure of merit
Fire
Food energy
Internal energy
ISO 15971
Mechanical efficiency
Thermal efficiency
Wobbe index: heat density
References
Further reading
External links
NIST Chemistry WebBook
Engineering thermodynamics
Combustion
Fuels
Thermodynamic properties
Nuclear physics
Thermochemistry | Heat of combustion | Physics,Chemistry,Mathematics,Engineering | 2,887 |
25,205,346 | https://en.wikipedia.org/wiki/Black%20hole%20starship | In astronautics, a black hole starship is the theoretical concept of a starship capable of interstellar travel using a black hole as an energy source for spacecraft propulsion. The concept was first discussed in science fiction, notably in the book Imperial Earth by Arthur C. Clarke, and in the work of Charles Sheffield, in which energy extracted from a Kerr–Newman black hole is described as powering the rocket engines in the story "Killing Vector" (1978).
In a more detailed analysis, a proposal to create an artificial black hole and using a parabolic reflector to reflect its Hawking radiation was discussed in 2009 by Louis Crane and Shawn Westmoreland. Their conclusion was that it was on the edge of possibility, but that quantum gravity effects that are presently unknown will either make it easier, or make it impossible. Similar concepts were also sketched out by Alexander Bolonkin.
Advantages
Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods. For example, in nuclear fusion or fission, only a small proportion of the mass is converted into energy, so enormous quantities of material would be needed. Thus, a nuclear starship would greatly deplete Earth of fissile and fusile material. One possibility is antimatter, but the manufacturing of antimatter is hugely energy-inefficient, and antimatter is difficult to contain. The Crane and Westmoreland paper states:
Criteria
According to the authors, a black hole to be used in space travel needs to meet five criteria:
has a long enough lifespan to be useful,
is powerful enough to accelerate itself up to a reasonable fraction of the speed of light in a reasonable amount of time,
is small enough that we can access the energy to make it,
is large enough that we can focus the energy to make it,
has mass comparable to a starship.
Black holes seem to have a sweet spot in terms of size, power and lifespan which is almost ideal. A black hole weighing 606,000 metric tons (6.06 × 10⁸ kg) would have a Schwarzschild radius of 0.9 attometers (0.9 × 10⁻¹⁸ m, or 9 × 10⁻¹⁹ m), a power output of 160 petawatts (160 × 10¹⁵ W, or 1.6 × 10¹⁷ W), and a 3.5-year lifespan. With such a power output, the black hole could accelerate to 10% the speed of light in 20 days, assuming 100% conversion of energy into kinetic energy. Assuming only 10% conversion into kinetic energy, it would take 10 times longer.
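A minimal sketch checking two of these figures (rounded physical constants; the 100% conversion of radiated energy into kinetic energy is the assumption stated above):

```python
import math

c = 2.998e8    # speed of light, m/s
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 6.06e8     # black-hole mass from the text, kg
P = 1.6e17     # quoted Hawking power, W

r_s = 2 * G * M / c**2              # Schwarzschild radius
gamma = 1 / math.sqrt(1 - 0.1**2)   # Lorentz factor at 0.1 c
E_kin = (gamma - 1) * M * c**2      # relativistic kinetic energy at 0.1 c
t_days = E_kin / P / 86400          # days of radiating at full power

print(f"{r_s:.1e} m")        # ~9e-19 m, i.e. about 0.9 attometer
print(f"{t_days:.0f} days")  # ~20 days to reach 10% of light speed
```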
Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust, if such a reflector can be built. A slightly easier, but less efficient method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and let the rest shoot out the back. This would, however, generate an enormous amount of heat as radiation is absorbed by the dish.
Criticism
It is not clear that a starship powered by Hawking radiation can be made feasible within the laws of known physics. In the standard black hole thermodynamic model, the average energy of emitted quanta increases as size decreases, and extremely small black holes emit the majority of their energy in particles other than photons. In the Journal of the British Interplanetary Society, Jeffrey S. Lee of Icarus Interstellar states that a typical quantum of radiation from a one-attometer black hole would be too energetic to be reflected. Lee further argues that absorption (for example, by pair production from emitted gamma rays) may also be infeasible: a titanium "Dyson cap", optimized at 1 cm thickness and a radius around 33 km (to avoid melting), would absorb almost half the incident energy, but the maximum spaceship velocity over the black hole lifetime would be less than 0.0001c (about 30 km/s), according to Lee's calculations.
Govind Menon of Troy University suggests exploring the use of a rotating (Kerr–Newman) black hole instead: "With non-rotating black holes, this is a very difficult thing...we typically look for energy almost exclusively from rotating black holes. Schwarzschild black holes do not radiate in an astrophysical, gamma ray burst point of view. It is not clear if Hawking radiation alone can power starships."
In fiction
Arthur C. Clarke, Imperial Earth (1976)
Charles Sheffield, "Killing Vector" (1978)
Peter Watts, "The Freeze-Frame Revolution" (2018)
John Varley, The Ophiuchi Hotline (1977), in which a black hole engine is used to move Jupiter's moon Poseidon to Alpha Centauri.
In the 2014 Hannu Rajaniemi science fiction novel The Causal Angel, Jean le Flambeur's ship Leblanc contains a black hole whose Hawking radiation is used for propulsion.
In the Star Trek universe, the Romulan D'deridex-class warbird uses an artificial quantum singularity as a power source for its warp propulsion drive.
In the 1997 Paul W. S. Anderson science fiction horror film Event Horizon, the eponymous starship uses an artificial black hole drive to achieve faster-than-light travel.
In the MMO Eve Online, starships designed by the Triglavian faction utilize naked singularities contained on the external hull as their vessel's primary power source.
In Foundation (TV series), jump ships appear to use black holes to power their jumpdrives, enabling faster-than-light (FTL) travel over interstellar distances. This differs from the Foundation series on which the TV series is based, where FTL travel is facilitated through hyperspace travel.
In the TV series Doctor Who, the TARDIS is powered by a black hole, which is not only how it can be bigger on the inside but also how it travels through time.
In the book How High We Go in the Dark by Sequoia Nagamatsu, an interstellar starship is used to take passengers in cryo-sleep 582 light years from Earth at 10% of the speed of light. The starship uses an engine powered by Hawking radiation.
See also
Abraham–Lorentz force
Beyond black holes
Black hole electron
Hawking radiation
Kugelblitz (astrophysics)
List of quantum gravity researchers
Micro black hole
References
Hypothetical spacecraft
Interstellar travel | Black hole starship | Astronomy,Technology | 1,363 |
50,431,127 | https://en.wikipedia.org/wiki/TRAPPIST-1d | TRAPPIST-1d is a small exoplanet (about 40 percent the mass of the Earth) which orbits on the inner edge of the habitable zone of the ultracool dwarf star TRAPPIST-1, located about 40 light-years away from Earth in the constellation of Aquarius. The exoplanet was found using the transit method. The first signs of the planet were announced in 2016, but it was not until the following years that more information concerning its probable nature was obtained. TRAPPIST-1d is the second-least massive planet of the system and is likely to have a compact, hydrogen-poor atmosphere similar to those of Venus, Earth, or Mars. It receives just 4.3% more sunlight than Earth, placing it on the inner edge of the habitable zone. Less than about 5% of its mass is thought to be a volatile layer, which could consist of atmosphere, oceans, and/or ice layers. A 2018 study by the University of Washington concluded that TRAPPIST-1d might be a Venus-like exoplanet with an uninhabitable atmosphere. The planet is an eyeball planet candidate.
Physical characteristics
Radius, mass, and temperature
TRAPPIST-1d was detected with the transit method, allowing scientists to accurately determine its radius. The planet's radius is about , with a small error margin of about 70 km. Transit timing variations and complex computer simulations helped accurately determine the mass of the planet, which let scientists calculate its density, surface gravity, and composition. TRAPPIST-1d is a mere , making it one of the least massive exoplanets yet found. Initial estimates suggested that it has 61.6% the density of Earth (3.39 g/cm³) and just under half the gravity. Compared to Mars, it has nearly three times that planet's mass but was thought to be significantly less dense, which would indicate the presence of a significant atmosphere; models of the low density of TRAPPIST-1d indicated a mainly rocky composition, but with up to about 5% of its mass in the form of a volatile layer. The volatile layer of TRAPPIST-1d may consist of atmosphere, ocean, and/or ice layers. However, refined estimates show that the planet is denser, closer to 79.2% of Earth's bulk density. TRAPPIST-1d has an equilibrium temperature of , assuming an albedo of 0. For an Earth-like albedo of 0.3, the planet's equilibrium temperature is around , very similar to Earth's.
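The equilibrium temperatures follow from the standard instellation formula; a minimal sketch, assuming the 4.3% higher-than-Earth instellation given above and a solar constant of 1361 W/m²:

```python
# Equilibrium temperature from instellation: T_eq = [S(1-A) / (4*sigma)]^(1/4)
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S_earth = 1361.0          # solar constant at Earth, W/m^2
S = 1.043 * S_earth       # instellation at TRAPPIST-1d (figure from the text)

for A in (0.0, 0.3):      # albedo: bare rock vs. Earth-like
    T_eq = (S * (1 - A) / (4 * sigma)) ** 0.25
    print(f"albedo {A}: T_eq = {T_eq:.0f} K")
# ~281 K for A = 0 and ~257 K for A = 0.3; for comparison, Earth's
# equilibrium temperature with A = 0.3 is ~255 K.
```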
Orbit
TRAPPIST-1d is a closely orbiting planet, with one full orbit taking just 4.05 days (about 97 hours) to complete. It orbits at a distance of just 0.02228 AU from the host star, or about 2.2% of the distance between Earth and the Sun. For comparison, Mercury, the Solar System's innermost planet, takes 88 days to orbit at a distance of about 0.38 AU. The size of TRAPPIST-1 and the close orbit of TRAPPIST-1d around it mean that the star as seen from the planet appears 5.5 times as large as the Sun from the Earth. While a planet at TRAPPIST-1d's distance from the Sun would be a scorched world, the low luminosity of TRAPPIST-1 means that the planet gets only 1.043 times the sunlight that Earth receives, placing it within the inner part of the conservative habitable zone.
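The period and the star's apparent size are consistent with these orbital parameters; a quick cross-check via Kepler's third law, assuming the stellar mass and radius quoted in the next section:

```python
import math

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[solar masses]
a = 0.02228            # semi-major axis, AU (from the text)
M_star = 0.089         # stellar mass, solar masses (from the text)
P_days = math.sqrt(a**3 / M_star) * 365.25
print(f"Orbital period: {P_days:.2f} days")        # ~4.07 days (text: 4.05)

# Apparent angular size of the star relative to the Sun seen from Earth
# scales as (R_star / R_sun) / (a / 1 AU).
R_star = 0.121         # stellar radius, solar radii (from the text)
print(f"Star appears {R_star / a:.1f}x larger than the Sun from Earth")  # ~5.4x
```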
Host star
The planet orbits an (M-type) ultracool dwarf star named TRAPPIST-1. The star has a mass of 0.089 solar masses (close to the boundary between brown dwarfs and hydrogen-fusing stars) and a radius of 0.121 solar radii. It has a temperature of , and is between 3 and 8 billion years old. In comparison, the Sun is 4.6 billion years old and has a temperature of 5778 K (5504.85 °C, 9940.73 °F). The star is metal-rich, with a metallicity ([Fe/H]) of 0.04, or 109% the solar amount. This is particularly odd, as such low-mass stars near the boundary between brown dwarfs and hydrogen-fusing stars would be expected to have considerably lower metallicity than the Sun. Its luminosity is 0.05% of that of the Sun.
Stars like TRAPPIST-1 can live for up to 4–5 trillion years, 400–500 times longer than the Sun will live (the Sun has only about 8 billion years of lifespan left, slightly more than half of its lifetime). Because it can live for such a long period, TRAPPIST-1 will likely be among the last remaining stars when the Universe is much older than it is now, when the gas needed to form new stars has been exhausted and the remaining stars begin to die off.
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 18.8. Therefore, it is too dim to be seen with the naked eye (the limit for that is 6.5).
The star is not only very small and far away, it also emits comparatively little visible light, shining mainly in the invisible infrared. Even from the close proximity of TRAPPIST-1d, about 50 times closer to its star than Earth is to the Sun, the planet receives less than 1% of the visible light Earth gets from the Sun. Days on TRAPPIST-1d would therefore probably never be brighter than twilight on Earth. However, that still means that TRAPPIST-1 could easily shine at least 3000 times brighter in the sky of TRAPPIST-1d than the full moon does in Earth's night sky.
Habitability
Models and scientists are divided on whether the data for TRAPPIST-1d indicate Earth-like habitability or a severe greenhouse effect.
In some respects, this exoplanet is one of the most Earth-like found. It does not have a hydrogen or helium-based atmosphere, which makes larger planets uninhabitable (the planet is not massive enough to retain light gases).
The planet is located at the inner edge of the expected habitable zone of its parent star (where liquid water can reasonably be expected to exist on its surface, if present). The planet may also have liquid and atmospheric water, up to many times more than Earth. However, some three-dimensional modeling solutions have a little water surviving beyond the early hot phase in the planet's history. Most models by the University of Washington for TRAPPIST-1d strongly converge on a Venus-like planet (runaway greenhouse effect) with an uninhabitable atmosphere.
Because TRAPPIST-1d is only ~30% of the Earth's mass, it, like Venus and Mars, may have no magnetic field, which would allow the parent star's stellar wind to strip away the more volatile components of its atmosphere (including water), leaving it hydrogen-poor like those planets. However, due to its close orbit, TRAPPIST-1d is likely tidally locked, and it may be very geologically active due to tidal squeezing, as happens to Jupiter's moon Io; the resulting volcanic gases could replenish the atmosphere lost to the stellar wind. TRAPPIST-1d may resist this tidal heating, especially if it has an Earth-like albedo of ≥0.3, according to other analyses. The same researchers point out that such proximity to the host star tends to increase geothermal activity and tidally heat the bottom of any seas. If the planet has suffered a runaway greenhouse, its atmosphere should be thinner and cooler than Venus's, due to its smaller mass and the fact that it receives only about as much radiation as the Earth (while Venus receives about twice as much).
The lack of a magnetic field would also result in the surface receiving more charged particles than the Earth does. If the planet is tidally locked, a dense atmosphere could be enough to transfer heat from the illuminated side to the much colder dark side.
Discovery
A team of astronomers headed by Michaël Gillon of the Institut d’Astrophysique et Géophysique at the University of Liège in Belgium used the TRAPPIST (Transiting Planets and Planetesimals Small Telescope) telescope at the La Silla Observatory in the Atacama Desert, Chile, to observe TRAPPIST-1 and search for orbiting planets. By utilising transit photometry, they discovered three Earth-sized planets orbiting the dwarf star; the innermost two are tidally locked to their host star while the outermost appears to lie either within the system's habitable zone or just outside of it. The team made their observations from September to December 2015 and published its findings in the May 2016 issue of the journal Nature.
The original claim and presumed size of the planet were revised when the full seven-planet system was revealed in 2017:
"We already knew that TRAPPIST-1, a small, faint star some 40 light years away, was special. In May 2016, a team led by Michaël Gillon at Belgium’s University of Liege announced it was closely orbited by three planets that are probably rocky: TRAPPIST-1b, c and d...
"As the team kept watching shadow after shadow cross the star, three planets no longer seemed like enough to explain the pattern. “At some point we could not make sense of all these transits,” Gillon says.
"Now, after using the space-based Spitzer telescope to stare at the system for almost three weeks straight, Gillon and his team have solved the problem: TRAPPIST-1 has four more planets.
"The planets closest to the star, TRAPPIST-1b and c, are unchanged. But there’s a new third planet, which has taken the d moniker, and what had looked like d before turned out to be glimpses of e, f and g. There’s a planet h, too, drifting farthest away and only spotted once."
See also
List of potentially habitable exoplanets
References
Exoplanets discovered in 2016
Near-Earth-sized exoplanets in the habitable zone
Transiting exoplanets
TRAPPIST-1
Aquarius (constellation)
Sub-Earth exoplanets
J23062928-0502285 d | TRAPPIST-1d | Astronomy | 2,173 |
1,937,658 | https://en.wikipedia.org/wiki/Time%20reversibility | A mathematical or physical process is time-reversible if the dynamics of the process remain well-defined when the sequence of time-states is reversed.
A deterministic process is time-reversible if the time-reversed process satisfies the same dynamic equations as the original process; in other words, the equations are invariant or symmetrical under a change in the sign of time. A stochastic process is reversible if the statistical properties of the process are the same as the statistical properties for time-reversed data from the same process.
Mathematics
In mathematics, a dynamical system is time-reversible if the forward evolution is one-to-one, so that for every state there exists a transformation (an involution) π which gives a one-to-one mapping between the time-reversed evolution of any one state and the forward-time evolution of another corresponding state, given by the operator equation $U_{-t} = \pi \, U_t \, \pi$.
Any time-independent structures (e.g. critical points or attractors) which the dynamics give rise to must therefore either be self-symmetrical or have symmetrical images under the involution π.
Physics
In physics, the laws of motion of classical mechanics exhibit time reversibility, as long as the operator π reverses the conjugate momenta of all the particles of the system, i.e. $\mathbf{p} \to -\mathbf{p}$ (T-symmetry).
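This reversibility can be illustrated numerically: evolve a Hamiltonian system forward with a time-symmetric integrator, apply π by flipping the momenta, and evolve forward again for the same duration; the trajectory retraces itself back to the initial state. A minimal sketch for a pendulum (an illustration, with arbitrary parameters):

```python
import math

def leapfrog(q, p, dt, steps):
    """Leapfrog integration of a pendulum with H = p^2/2 - cos(q)."""
    for _ in range(steps):
        p -= 0.5 * dt * math.sin(q)   # half kick (force = -sin q)
        q += dt * p                   # drift
        p -= 0.5 * dt * math.sin(q)   # half kick
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, dt=0.01, steps=1000)    # evolve forward
q2, p2 = leapfrog(q1, -p1, dt=0.01, steps=1000)   # apply pi (p -> -p), evolve forward
print(q2, -p2)   # recovers the initial state (1.0, 0.0) up to rounding error
```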
In quantum mechanical systems, however, the weak nuclear force is not invariant under T-symmetry alone; if weak interactions are present, reversible dynamics are still possible, but only if the operator π also reverses the signs of all the charges and the parity of the spatial co-ordinates (C-symmetry and P-symmetry). This reversibility of several linked properties is known as CPT symmetry.
Thermodynamic processes can be reversible or irreversible, depending on the change in entropy during the process. Note, however, that the fundamental laws underlying thermodynamic processes are all time-reversible (the classical laws of motion and the laws of electrodynamics), which means that on the microscopic level, if one were to keep track of all the particles and all the degrees of freedom, many-body processes would all be reversible. However, such an analysis is beyond the capability of any human being (or artificial intelligence), and the macroscopic properties (like entropy and temperature) of a many-body system are only defined from the statistics of ensembles. When we talk about such macroscopic properties in thermodynamics, in certain cases we can see irreversibility in their time evolution on a statistical level. Indeed, the second law of thermodynamics states that the entropy of the entire universe must not decrease, not because the probability of a decrease is zero, but because it is so unlikely that it is a statistical impossibility for all practical considerations (see the Crooks fluctuation theorem).
Stochastic processes
A stochastic process is time-reversible if the joint probabilities of the forward and reverse state sequences are the same for all sets of time increments { τs }, for s = 1, ..., k for any k:
$p(x_t, x_{t+\tau_1}, x_{t+\tau_2}, \ldots, x_{t+\tau_k}) = p(x_t, x_{t-\tau_1}, x_{t-\tau_2}, \ldots, x_{t-\tau_k}).$
A univariate stationary Gaussian process is time-reversible. Markov processes can only be reversible if their stationary distributions have the property of detailed balance:
$\pi_i \, p_{ij} = \pi_j \, p_{ji} \quad \text{for all states } i, j.$
Kolmogorov's criterion defines the condition for a Markov chain or continuous-time Markov chain to be time-reversible.
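A minimal numerical sketch of the detailed-balance test for a discrete-time Markov chain (the three-state transition matrix below is an arbitrary example of a birth-death chain, a class that is always reversible):

```python
import numpy as np

# Transition matrix of a three-state birth-death chain.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()                       # normalize (also fixes the sign)

# Detailed balance: pi_i * P_ij == pi_j * P_ji for all i, j,
# i.e. the matrix of probability flows is symmetric.
flows = pi[:, None] * P
print(np.allclose(flows, flows.T))   # True -> the chain is time-reversible
```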
Time reversal of numerous classes of stochastic processes has been studied, including Lévy processes, stochastic networks (Kelly's lemma), birth and death processes, Markov chains, and piecewise deterministic Markov processes.
Waves and optics
The time reversal method works based on the linear reciprocity of the wave equation: since standard wave equations contain only even-order derivatives of the unknown variables, the wave equation is symmetric under time reversal, and the time-reversed version of any valid solution is also a solution. This means that a wave's path through space is valid when travelled in either direction.
Time reversal signal processing is a process in which this property is used to reverse a received signal; this signal is then re-emitted and a temporal compression occurs, resulting in a reversal of the initial excitation waveform being played at the initial source.
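A numerical illustration of this temporal compression under the stated linearity assumption: a short pulse dispersed by a multipath channel, then time-reversed and re-emitted through the same channel, recompresses into a sharp peak (a minimal sketch with an arbitrary synthetic channel):

```python
import numpy as np

rng = np.random.default_rng(0)
# A dispersive multipath channel: random impulse response, decaying envelope.
h = rng.standard_normal(200) * np.exp(-np.arange(200) / 40.0)

pulse = np.zeros(64)
pulse[0] = 1.0                                # short excitation at the source
received = np.convolve(pulse, h)              # dispersed signal at the receiver

# Re-emit the time-reversed recording through the same (reciprocal) channel.
refocused = np.convolve(received[::-1], h)

# The spread-out energy recompresses into a sharp peak (the autocorrelation
# of h), illustrating the temporal compression described above.
print(np.argmax(np.abs(refocused)), refocused.size)
```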
See also
T-symmetry
Memorylessness
Markov property
Reversible computing
Notes
References
Isham, V. (1991). "Modelling stochastic phenomena". In: Hinkley, D.V., Reid, N., Snell, E.J. (eds.), Stochastic Theory and Modelling. Chapman and Hall.
Tong, H. (1990). Non-linear Time Series: A Dynamical System Approach. Oxford University Press.
Dynamical systems
Time series
Symmetry | Time reversibility | Physics,Mathematics | 1,032 |
21,496,171 | https://en.wikipedia.org/wiki/Site%20plan | A site plan or a plot plan is a type of drawing used by architects, landscape architects, urban planners, and engineers which shows existing and proposed conditions for a given area, typically a parcel of land which is to be modified. Site plans typically show buildings, roads, sidewalks and paths/trails, parking, drainage facilities, sanitary sewer lines, water lines, lighting, and landscaping and garden elements.
Such a plan of a site is a "graphic representation of the arrangement of buildings, parking, drives, landscaping and any other structure that is part of a development project".
A site plan is a "set of construction drawings that a builder or contractor uses to make improvements to a property. Counties can use the site plan to verify that development codes are being met and as a historical resource. Site plans are often prepared by a design consultant who must be either a licensed engineer, architect, landscape architect or land surveyor".
Site plans include site analysis, building elements, and planning of various types including transportation and urban. An example of a site plan is the plan for Indianapolis by Alexander Ralston in 1821.
The specific objects and relations shown are dependent on the purpose for creating the plot plan, but typically contain: retained and proposed buildings, landscape elements, above-ground features and obstructions, major infrastructure routes, and critical legal considerations such as property boundaries, setbacks, and rights of way.
Site plan topics
Site analysis
Site analysis is an inventory completed as a preparatory step to site planning, a form of urban planning which involves research, analysis, and synthesis. It primarily deals with basic data as it relates to a specific site. The topic itself branches into the boundaries of architecture, landscape architecture, engineering, economics, and urban planning. Site analysis is an element in site planning and design. Kevin A. Lynch, an urban planner, developed an eight-step site design cycle, in which the second step is site analysis, the focus of this section.
When analyzing a potential site for development, the status quo of the site should be analyzed and mapped. This includes but is not limited to:
The location of the plot
Topography, including information about slope, soils, hydrology, vegetation, orientation
Existing buildings
Roads and traffic
Public facilities and utilities, including water, sewer, and power lines
Related laws, regulation, codes, and policies
By determining areas that are poor for development (such as floodplains or steep slopes) and better for development, the planner or architect can determine the optimal location for different functions or structures and create a design that works within the space.
Site plan building blocks
A site plan is a top-down, bird's-eye view of a property that is drawn to scale. A site plan can show:
property lines
outline of existing and proposed buildings and structures
distance between buildings
distance between buildings and property lines (setbacks)
parking lots, indicating parking spaces
driveways
surrounding streets
landscaped areas
easements
ground sign location
utilities
Site planning
Site planning in landscape architecture and architecture refers to the organizational stage of the landscape design process. It involves the organization of land use zoning, access, circulation, privacy, security, shelter, land drainage, and other factors. Site planning includes the arrangement of buildings, roadways, utilities, landscape elements, topography, water features, and vegetation to achieve the desired site.
In urban planning, site planning is done by city planners to develop a clear plan or design of what they want for a community. For example, in a participatory planning process, community members would identify renovations and improvements needed in their community. The community developers would then devise a way to meet the community members' demands by creating a site plan. With a limited budget, planners have to be smart and creative about their designs. Planners must take into consideration not only the heights of buildings, traffic flows, open spaces, and parking for cars and bikes, but also the project's potential impact on the stakeholders involved. All of these activities are referred to as site planning.
Transportation planning
Transportation planning is the field involved with the siting of transportation facilities (generally streets, highways, sidewalks, bike lanes and public transport lines). Transportation planning historically has followed the rational planning model of defining goals and objectives, identifying problems, generating alternatives, evaluating alternatives, and developing the plan. Other models for planning include rational actor, satisficing, incremental planning, organizational process, and political bargaining. However, planners are increasingly expected to adopt a multi-disciplinary approach, especially due to the rising importance of environmentalism. For example, the use of behavioral psychology to persuade drivers to abandon their automobiles and use public transport instead. The role of the transport planner is shifting from technical analysis to promoting sustainability through integrated transport policies.
Urban planning
Urban, city, and town planning explores a very wide range of aspects of the built and social environments of places. Regional planning deals with a still larger environment, at a less detailed level. Based upon the origins of urban planning from the Roman (pre-Dark Ages) era, the current discipline revisits the synergy of the disciplines of urban planning, architecture and landscape architecture.
Examples
See also
Plan (drawing)
Archaeological plan
Floor plan
Technical drawing
Architectural drawing
Engineering drawing
Landscape design
Site Waste Management Plans Regulations 2008
References
External links
SCHWARZPLAN.eu - Download archive for site plans based on data by OpenStreetMap.org
Architectural communication
Landscape architecture
Technical drawing | Site plan | Engineering | 1,124 |
12,706,857 | https://en.wikipedia.org/wiki/Aircraft%20specific%20energy | Aircraft-specific energy is a form of specific energy applied to aircraft and missile trajectory analysis. It represents the combined kinetic and potential energy of the vehicle at any given time. It is the total energy of the vehicle (relative to the Earth's surface) per unit weight of the vehicle. Being independent of the mass of the vehicle, it provides a powerful tool for the design of optimal trajectories. Aircraft-specific energy is very similar to specific orbital energy except that it is expressed as a positive quantity. A zero value of aircraft-specific energy represents an aircraft at rest on the Earth's surface, and the value increases as speed and altitude increase. As with other forms of specific energy, aircraft-specific energy is an intensive property and is represented in units of length since it is independent of the mass of the vehicle. That is, while the specific energy may be expressed in joules per kilogram (J/kg), the specific energy height may be expressed in meters by the formula v^2/(2g) + h, where v is the airspeed of the aircraft, g is the acceleration due to gravity, and h is the altitude of the aircraft.
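A minimal sketch of the energy-height formula (the aircraft speed and altitude below are hypothetical example values):

```python
def specific_energy_height(v, h, g=9.81):
    """Aircraft-specific energy (energy height) in meters: v^2/(2g) + h."""
    return v**2 / (2 * g) + h

# Example: an aircraft at 250 m/s true airspeed and 10,000 m altitude.
E_s = specific_energy_height(v=250.0, h=10_000.0)
print(f"{E_s:.0f} m")   # ~13,186 m: the altitude reachable if all kinetic
                        # energy were converted to height without losses
```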
Applications
The field of trajectory optimization has made use of the concept since the 1950s in the form of energy analysis. In this approach, the specific energy is defined as one of the dynamic states of the problem and is the slowest-varying state. All other states, such as altitude and flight path angle, are approximated as varying infinitely fast compared to the specific energy dynamics. This assumption allows optimal trajectories to be solved in a relatively simple form.
The specific energy is computed as the total energy (as defined above, relative to the Earth's surface) divided by the weight of the vehicle. It is a key element in the performance of aircraft and rockets. For a rocket flying vertically (in a vacuum), it is the apogee that the rocket would attain.
Aircraft-specific energy is used extensively in the energy–maneuverability theory governing modern aircraft dogfighting tactics. The primary goal of air combat manoeuvring is to maintain an optimal aircraft-specific energy. Speed gives an aircraft the ability to potentially outmaneuver adversaries, and altitude can be converted into speed, while also providing extended range for guided munitions (due to lower air density and therefore lower drag at any given velocity). Aircraft such as the F-16 Fighting Falcon were designed in accordance with the energy-maneuverability theory, allowing the aircraft to gain aircraft-specific energy as quickly as possible.
References
Aerodynamics | Aircraft specific energy | Chemistry,Engineering | 527 |
449,677 | https://en.wikipedia.org/wiki/Project%20Stormfury | Project Stormfury was an attempt to weaken tropical cyclones by flying aircraft into them and seeding them with silver iodide. The project was run by the United States Government from 1962 to 1983. The hypothesis was that the silver iodide would cause supercooled water in the storm to freeze, disrupting the inner structure of the hurricane, and this led to seeding several Atlantic hurricanes. However, it was later shown that this hypothesis was incorrect. It was determined that most hurricanes do not contain enough supercooled water for cloud seeding to be effective. Additionally, researchers found that unseeded hurricanes often undergo the same structural changes that were expected from seeded hurricanes. This finding called Stormfury's successes into question, as the changes reported now had a natural explanation.
The last experimental flight was flown in 1971, due to a lack of candidate storms and a changeover in NOAA's fleet. Project Stormfury was officially canceled more than a decade after the last modification experiment. Although the project failed to achieve its goal of reducing the destructiveness of hurricanes, its observational data and storm lifecycle research helped improve meteorologists' ability to forecast the movement and intensity of hurricanes.
Hypothesis
Cloud seeding was first attempted by Vincent Schaefer and Irving Langmuir. After witnessing the artificial creation of ice crystals, Langmuir became an enthusiastic proponent of weather modification. Schaefer found that when he dumped crushed dry ice into a cloud, precipitation in the form of snow resulted.
With regard to hurricanes, it was hypothesized that by seeding the area around the eyewall with silver iodide, latent heat would be released. This would promote the formation of a new eyewall. As this new eyewall was larger than the old eyewall, the winds of the tropical cyclone would be weaker due to a reduced pressure gradient. Even a small reduction in the speed of a hurricane's winds would be beneficial: since the damage potential of a hurricane increases as the square of the wind speed, a slight lowering of wind speed would yield a large reduction in destructiveness. For example, a 10 percent reduction in wind speed would cut the damage potential to 0.9² = 81 percent, a reduction of nearly a fifth.
Due to Langmuir's efforts, and the research of Schaefer at General Electric, the concept of using cloud seeding to weaken hurricanes gathered momentum. Indeed, Schaefer had caused a major snowstorm on December 20, 1946 by seeding a cloud. This caused GE to drop out for legal reasons. Schaefer and Langmuir assisted the U.S. military as advisors for Project Cirrus, the first large study of cloud physics and weather modification. Its most important goal was to try to weaken hurricanes.
Project Cirrus
Project Cirrus was the first attempt to modify a hurricane. It was a collaboration of the General Electric Corporation, the US Army Signal Corps, the Office of Naval Research, and the US Air Force. After several preparations, and initial skepticism by government scientists, the first attempt to modify a hurricane began on October 13, 1947, on a hurricane (later known as the Cape Sable hurricane) that was heading from west to east and out to sea.
The project's two B-17s and a B-29 of the 53rd Weather Reconnaissance group were dispatched from MacDill Field, Florida, to intercept the hurricane. The seeding B-17 flew along the rainbands of the hurricane, and dropped nearly 180 pounds (82 kilograms) of crushed dry ice into the clouds. The crew reported "Pronounced modification of the cloud deck seeded". It is not known if that was due to the seeding. Next, the hurricane changed direction and made landfall near Savannah, Georgia. The public blamed the seeding, and Irving Langmuir claimed that the reversal had been caused by human intervention. Cirrus was canceled, and lawsuits were threatened. Only the fact that a system in 1906 had taken a similar path, as well as evidence showing that the storm had already begun to turn when seeding began, ended the litigation. This disaster set back the cause of seeding hurricanes for eleven years.
At first the seeding was officially denied and it took years before the government admitted it. According to the September 12, 1965 edition of the Fort Lauderdale News and Sun-Sentinel, in 1947 a hurricane "went whacky" and "Twelve years later it was admitted the storm had in fact been seeded."
Between the projects
The United States Weather Bureau's National Hurricane Research Project, founded in 1955, had as one of its objectives to investigate the scientific validity of hurricane modification methods. To this end, silver iodide dispensers were tested in Hurricane Daisy in August 1958. The flares were deployed outside of the hurricane eyewall, so this was an equipment test rather than a modification experiment. The equipment malfunctioned in all but one of the flights, and no conclusive data was acquired.
The first seeding experiment since the Cirrus disaster was attempted on September 16, 1961, into Hurricane Esther by NHRP and the United States Navy aircraft. Eight cylinders of silver iodide were dropped into Esther's eyewall, and winds were recorded as weakening by 10 percent. The next day, more seeding flights were made. This time, the silver iodide did not fall into the eyewall, and no reduction in windspeed was observed. These two results were interpreted as making the experiment a "success".
The seedings into Hurricane Esther led to the establishment of Project Stormfury in 1962. Project Stormfury was a joint venture of the United States Department of Commerce and the United States Navy.
Project BATON
The objective of Project BATON was the analysis of the life history of thunderstorms. A Department of Defense research activity supported by the Advanced Research Projects Agency, Project BATON sought to expand understanding of storm physics as an aid to weather forecasting, fire prevention, and, possibly, artificial control of the weather. Dr. Helmut Weickmann, as an employee of the U.S. Army Signal Research and Development Laboratory, and Dr. Paul MacCready of Meteorology Research, Inc., were joint leaders of the Project BATON team.
During the 1962 July–August storm season in Flagstaff, Arizona, the scientists selected "guinea pig" storms and seeded them with chemicals. Effects were thoroughly analyzed from the ground and from the air with time-lapse motion picture cameras, stereo still cameras, storm radar, lightning detectors, and airborne heat sensors. Among the agents inserted into selected clouds were "condensation nuclei", which temporarily increased the number of water droplets in the cloud, and pulverized dry ice, which turns a portion of the cloud into fine snow crystals that remain aloft. The use of these agents facilitated study of a storm's characteristics.
Project STORMFURY begins
Robert Simpson became its first director, serving in this capacity until 1965. There were several guidelines used in selecting which storms to seed. The hurricane had to have a less than 10 percent chance of approaching inhabited land within a day; it had to be within range of the seeding aircraft; and it had to be a fairly intense storm with a well-formed eye. The primary effect of these criteria was to make possible seeding targets extremely rare.
No suitable storms formed in the 1962 season. The next year, Stormfury began by conducting experiments on cumulus clouds. From August 17 to 20 of that year, experiments were conducted in 11 clouds, of which six were seeded and five were controls. In five of the six seeded clouds, changes consistent with the working hypothesis were observed.
On August 23, 1963, Hurricane Beulah was the site of the next seeding attempt. It had an indistinct eyewall. In addition, mistakes were made, as the seedings of silver iodide were dropped in the wrong places. As a consequence, nothing happened. The next day, another attempt was made, and the seeders hit their targets. The eyewall was observed to fall apart and be replaced by another eyewall with a larger radius. The sustained winds also fell by twenty percent. All in all, the results of the experiments on Beulah were "encouraging but inconclusive."
In the six years after Beulah, no seedings were conducted for several different reasons. In 1964, measurement and observation equipment was not ready to be used. The year after that, all flights were used for additional experimentation in non-hurricane clouds.
Joanne Simpson became its director beginning in 1965.
While out to sea in August of the 1965 Atlantic hurricane season, Stormfury meteorologists decided that Hurricane Betsy was a good candidate for seeding. However, the storm immediately swung towards land, and on September 1, the planned flights were canceled. For some reason, the press was not notified that there were no seedings, and several newspapers reported that it had begun. As Betsy passed close to the Bahamas and smashed into southern Florida, the public and Congress thought that seeding was underway and blamed Stormfury. It took two months for Stormfury officials to convince Congress that Betsy was not seeded, and the project was allowed to continue. A second candidate, Hurricane Elena, stayed too far out to sea.
After Betsy, two other hurricanes came close to being seeded. Hurricane Faith was considered a likely candidate, but it stayed out of range of the seeding planes. That same year, recon flights were conducted into Hurricane Inez, but there were no seedings. Both the 1967 and 1968 seasons were inactive. Because of that, there were no suitable seeding targets in either of those two seasons.
Dr. R. Cecil Gentry became the director of Stormfury in 1968. There were no more near-seedings until 1969. In the interim, equipment was improved. What once was the primitive method of hand-dumping dry ice was replaced with rocket canisters loaded with silver iodide, and then gun-like devices mounted on the wings of the airplanes that fired silver iodide into the clouds. Observation equipment was improved. Additional reconnaissance data was utilized to modify the working hypothesis. The new theory took cumulus towers outside the eyewall into account. According to the revised theory, by seeding the towers, latent heat would be released. This would trigger the start of new convection, which would then cause a new eyewall. Since the new eyewall was outside the original one, the first eyewall would be choked of energy and fall apart. In addition, since the new eyewall was broader than the old one, the winds would be lower due to a less sharp pressure difference.
Hurricane Debbie in 1969 provided the best opportunity to test the underpinnings of Project Stormfury. In many ways it was the perfect storm for seeding: it did not threaten any land; it passed within range of seeding aircraft; and was intense with a distinct eye. On August 18 and again on August 20, thirteen planes flew out to the storm to monitor and seed it. On the first day, windspeeds fell by 31%. On the second day, windspeeds fell by 18%. Both changes were consistent with Stormfury's working hypothesis. Indeed, the results were so encouraging that "a greatly expanded research program was planned." Among other conclusions was the need for frequent seeding at close to hourly intervals.
The 1970 and 1971 seasons provided no suitable seeding candidates. Despite this, flights were conducted into Hurricane Ginger. Ginger was not a suitable storm for seeding, due to its diffuse, indistinct nature. The seeding had no effect. Ginger was the last seeding done by Project Stormfury.
After the seedings
Atlantic hurricanes meeting all of the criteria were extremely rare, which made duplication of the "success" reached with Hurricane Debbie extremely difficult. Meanwhile, developments outside of meteorology hindered the cause of hurricane modification.
In the early 1970s, the Navy withdrew from the project. Stormfury began to refocus its efforts on understanding, rather than modifying, tropical cyclones. At the same time, the Project's B-17s were nearing the end of their operational lifetimes. At a cost of $30 million (year unknown), two Lockheed P-3s were acquired. Due to the rarity of Atlantic hurricanes meeting the safety requirements, plans were made to move Stormfury to the Pacific and experiment on the large number of typhoons there. This action required many of the same safety requirements as in the Atlantic, but had the advantage of a much higher number of potential subjects.
The plan was to begin again in 1976, and seed typhoons by flying out of Guam. However, political issues blocked the plan. The People's Republic of China announced that it would not be happy if a seeded typhoon changed course and made landfall on its shores, while Japan declared itself willing to put up with difficulties caused by typhoons because that country got more than half of its rainfall from tropical cyclones.
Similar plans to operate Stormfury in the eastern north Pacific or in the Australian region also collapsed.
Failure of the working hypothesis
Multiple eyewalls had been detected in very strong hurricanes before, including Typhoon Sarah and Hurricane Donna. Double eyewalls were usually only seen in very intense systems. They had also been observed post-seeding in some of the seeded storms. At the time, the only observations of rapid changes in eyewall diameter, other than during presumably successful seedings, occurred during rapid changes in storm intensity. It remained unclear whether the seedings caused the secondary eyewalls or whether it was just part of a natural cycle (because correlation does not imply causation). It was initially thought that eyewall changes similar to those observed in seeded but not unseeded systems provided the evidence that Project Stormfury was a success. But if it was later observed that such eyewall changes were common in unseeded systems as well, such observations would throw doubt on the hypothesis and assumptions driving Project Stormfury.
Data and observations did in fact begin to accumulate that debunked Stormfury's working hypothesis. Beginning with Hurricanes Anita and David, flights by hurricane hunting aircraft encountered events similar to what happened in "successfully" seeded storms. Anita itself had a weak example of a concentric eyewall cycle, and David a more dramatic one. In August 1980, Hurricane Allen passed through the Atlantic, Caribbean, and Gulf of Mexico. It also underwent changes in the diameter of its eye and developed multiple eyewalls. All this was consistent with the behavior that would have been expected of Allen had it been seeded. Thus, what Stormfury thought to have accomplished by seeding was also happening on its own.
Other observations in Hurricanes Anita, David, Frederic, and Allen also discovered that tropical cyclones have very little supercooled water and a great deal of ice crystals. The reason that tropical cyclones have little supercooled water is that the updrafts within such a system are too weak to prevent water from either falling as rain or freezing. As cloud seeding needed supercooled water to function, the lack of supercooled water meant that seeding would have no effect.
Those observations called the basis for Project Stormfury into question. In the middle of 1983, Stormfury was finally canceled after the hypothesis guiding its efforts was invalidated.
Legacy
In the sense of weakening hurricanes to reduce their destructiveness, Project Stormfury was a complete failure because it did not distinguish between natural phenomena in tropical cyclones and the impact of human intervention. Millions of dollars had been spent. In the end, "[Project] STORMFURY had two fatal flaws: it was neither microphysically nor statistically feasible."
In addition, Stormfury had been a primary generator of funding for the Hurricane Research Division. While the project was operational, the HRD's budget had been around $4 million (1975 USD; $16 million 2008 USD), with a staff of approximately 100 people. In 2000, the HRD employed 30 people and had a budget of roughly $2.6 million each year.
However, Project Stormfury had positive results as well. Knowledge gained during flights proved invaluable in debunking its hypotheses. Other science resulted in a greater understanding of tropical cyclones. In addition, the Lockheed P-3s were perfectly suitable for gathering data on tropical cyclones, allowing improved forecasting of these monstrous storms. Those planes were still used by the NOAA as of 2005.
Former Cuban president Fidel Castro alleged that Project Stormfury was an attempt to weaponize hurricanes.
See also
Operation Popeye
Weather modification in North America
Alberta Hail Project
Wind shearing
Sea surface temperature
Notes
References
External links
History of Project Stormfury
History of hurricane seeding and modification efforts
1962 establishments in the United States
1983 disestablishments in the United States
Stormfury
Stormfury
Tropical cyclone meteorology
Weather modification in North America | Project Stormfury | Engineering | 3,434 |
26,011,932 | https://en.wikipedia.org/wiki/Barsanti%E2%80%93Matteucci%20engine | The Barsanti–Matteucci engine was the first internal combustion engine to use the free-piston principle, in the form of an atmospheric two-cycle engine. In late 1851 or early 1852, Eugenio Barsanti, a professor of mathematics, and Felice Matteucci, an engineer and expert in mechanics and hydraulics, joined forces on a project to exploit the explosion and expansion of a gaseous mix of hydrogen and atmospheric air in order to transform part of the energy of such explosions into mechanical energy.
Origin
The idea originated almost ten years earlier with Barsanti when, as a young man, he was teaching at St. Michael's College in Volterra, Italy. An engineer from Milan, Italy, Luigi de Cristoforis, had described, in a paper published in the acts of the Lombard Royal Institute of Science, Literature and Art, a pneumatic machine (later built and shown to work) that ran on a naphtha and air mixture, and which constituted the first liquid-fuel engine.
Prototypes
During the twelve years of collaboration between Barsanti and Matteucci, several prototypes of internal combustion engines were realized. It was the first real internal combustion engine, consisting in its simplest realization of a vertical cylinder in which the explosion of a mixture of air and hydrogen (or illuminating gas) shot a piston upwards, thereby creating a vacuum in the space underneath. When the piston returned to its original position, due to the action of atmospheric pressure, it turned a toothed rod connected to a sprocket wheel and transmitted movement to the driving shaft.
Patents
Numerous patents were obtained by the two inventors: the 1857 English and Piedmont patents; the 1861 Piedmont patent of Barsanti, Matteucci and Babacci, which was then used as a basis for the engine built by the Escher Wyss company of Zurich and put on exhibit at the first National Expo of Florence in 1861; and the 1861 English patent.
References
External links
History of the Barsanti-Matteucci engine
Italian inventions
History of technology
Internal combustion engine | Barsanti–Matteucci engine | Technology,Engineering | 406 |
41,547,719 | https://en.wikipedia.org/wiki/Trustworthy%20Software%20Foundation | The Trustworthy Software Foundation (TSFdn) is a UK not-for-profit organisation with the stated aim of improving software.
History
TSFdn evolved from a number of previous activities:
A study by the Cabinet Office, Central Sponsor for Information Assurance (CSIA) in 2004-5 which identified a pervasive lack of secure software development practices as a matter for concern
A Department of Trade and Industry (DTI – predecessor of BIS) Global Watch Report in 2006 which noted a relative lack of secure software development practices in the UK
The Technology Strategy Board (TSB) Cyber Security Knowledge Transfer Network (CSKTN) Special Interest Group (SIG) on Secure Software Development (SSD, 2007–8)
The TSB / Foreign and Commonwealth Office (FCO) Science and Innovation Network (SIN) Multinational Workshop “Challenges to building in … information security, privacy and assurance”, held in Paris in March 2009
The Secure Software Development Partnership (SSDP) Study Period, funded jointly by the UK government's TSB and the Centre for the Protection of National Infrastructure (CPNI) organisations, which ran in 2009-2010
The Trustworthy Software Initiative (TSI—originally Software Security, Dependability and Resilience Initiative—SSDRI), a UK public good activity sponsored by CPNI between 2011 and 2016
Objectives
TSFdn primarily aims to provide a living backbone for signposting to diverse but often obscure sources of Good Practice, with a secondary objective to address other aspects of the 2009 Trustworthy Software Roadmap.
Trustworthiness
TSI considers that there are five facets of trustworthiness:
Safety - The ability of the system to operate without harmful states
Reliability - The ability of the system to deliver services as specified
Availability - The ability of the system to deliver services when requested
Resilience - The ability of the system to transform, renew, and recover in timely response to events
Security - The ability of the system to remain protected against accidental or deliberate attacks
This definition of trustworthiness is an extension of a widely used definition of dependability, adding Resilience as a fifth facet based on the UK Government approach.
Governance and Operation
TSFdn operates as a not-for-profit Company Limited by Guarantee, jointly owned by the subscriber organisations – UK professional bodies.
It is based at the Cyber Security Centre of the University of Warwick, and is formally linked to a cross section of stakeholders through the Advisory Committee on Trustworthy Software (ACTS).
The Technical Lead remains Ian Bryant, the Technical Director of the predecessor TSI, and the Chair of the ACTS is Sir Edmund Burton KBE, who was the President of the predecessor TSI.
Activities
Updating its Trustworthy Software Framework (TSFr), originally published as British Standards (BS) Publicly Available Specification (PAS) 754, into a British Standard (through BSI Project Committee ICT/00-/09, Chaired by Ian Bryant)
Continuing to engage with partners for promulgation of Software Trustworthiness across Education, in particular through the IAP, BCS and the IET
References
Computer security in the United Kingdom
Information technology management
Information technology organisations based in the United Kingdom
Organisations based in the London Borough of Ealing
Software engineering organizations | Trustworthy Software Foundation | Technology,Engineering | 659 |
37,264,560 | https://en.wikipedia.org/wiki/COBie | Construction Operations Building Information Exchange (COBie) is a United States-originated specification relating to managed asset information including space and equipment. It is closely associated with building information modeling (BIM) approaches to design, construction, and management of built assets.
Purpose
COBie helps organisations to electronically capture and record important project data at the point of origin, including equipment lists, product data sheets, warranties, spare parts lists, and preventive maintenance schedules. This information is essential to support operations, maintenance and asset management once the built asset is in service, replacing reliance on uncoordinated, often paper-based, handover information typically created by people who did not participate in the project and delivered many months after the client has taken occupancy of the building.
COBie has been incorporated into software for planning, design, construction, commissioning, operations, maintenance, and asset management. COBie may take several approved formats, including spreadsheet, STEP-Part 21 (also called IFC file format), and ifcXML. The current COBie test data of record was developed by an international team of designers and builders in the US and UK. This information is available under a Creative Commons Licence.
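As an illustration of consuming the spreadsheet format, a delivery-checking script might verify that expected worksheets are present; a minimal sketch using the third-party openpyxl library, with a partial, illustrative list of COBie tab names and a hypothetical file name (consult the published specification for the authoritative worksheet set):

```python
# Minimal sketch: check a COBie spreadsheet deliverable for expected tabs.
# The sheet list below is partial and illustrative -- the NBIMS-US chapter
# defines the authoritative worksheet set.
from openpyxl import load_workbook

EXPECTED_SHEETS = {"Contact", "Facility", "Floor", "Space", "Type", "Component"}

def check_cobie_workbook(path: str) -> set[str]:
    """Return the expected worksheets missing from the workbook at `path`."""
    wb = load_workbook(path, read_only=True)
    return EXPECTED_SHEETS - set(wb.sheetnames)

missing = check_cobie_workbook("handover.xlsx")   # hypothetical file name
print("missing sheets:", missing or "none")
```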
History
Initial concept (2006-2007 )
COBie was developed by Bill East, of the US Army Corps of Engineers, while at the Construction Engineering Research Laboratory in 2007. The project was funded with an initial grant from the US National Aeronautics and Space Administration (NASA) and the White House Office of Science and Technology Policy (through the National Institute of Standards and Technology). Following this introduction, East has led COBie development through buildingSMART International (bSI; formerly the International Alliance for Interoperability) processes.
Concept to adoption (2008-2015)
From 2008 to 2015, the Construction Engineering Research Lab conducted a series of public events to demonstrate the ability of commercial software to produce and/or consume COBie data (in the COBie version current at the time of each event). In these events, software companies were often arranged at the front of a large conference room in order: planning, design, construction, maintenance management, and asset management. The COBie data flow, which covers building equipment only, was then demonstrated. Over 90% of those participating delivered information in the COBie spreadsheet format. The other software exported spreadsheet-format data from Coordination MVD STEP files so that others could use this data.
In 2009, COBie version 2.26 was published as the buildingSMART International Basic FM Handover Model View Definition, using the Industry Foundation Class model 2x3.
In December 2011, COBie 2.26 was approved and included by the US Chapter of buildingSMART International as part of its National Building Information Model (NBIMS-US) standard, version 2. Around this same time, the US buildingSMART alliance was de-listed as an authorized chapter of the buildingSMART International.
In early 2013, buildingSMART was working on a lightweight XML format for COBie, COBieLite, which became available for review in April 2013.
In September 2014, a code of practice regarding COBie was issued as a British Standard: "BS 1192-4:2014 Collaborative production of information Part 4: Fulfilling employer's information exchange requirements using COBie – Code of practice". This requirement is a one-line reference to the National Building Information Modeling Standard - United States (NBIMS-US), Chapter 4.2, the document that eventually published COBie version 2.4.
In March 2015, the buildingSMART USA published COBie version 2.4 in NBIMS-US, Chapter 4.2. This COBie MVD, produced under contract to the Construction Engineering Research Lab, was created by the buildingSMART international Model View Definition support group, and was based on IFC 4. The main standard contains the project's Information Delivery Manual and Model View Definition as well as business case and implementation resources. Annex A defines the mapping from the EXPRESS-based data model to the COBie spreadsheet format. Annex B defines a National Information Exchange Model (NIEM) based XML schema suitable for use to capture transactional COBie data that does not require a full set of building information exchange.
In 2017, the US General Services Administration required COBie as a deliverable in their capital programs in their P-100 document.
Certification
In 2019, buildingSMART International formed the COBie Certification Subcommittee, composed of an international team of COBie experts (from the US, UK, Ireland, China and Japan), to offer the COBie Certified Professional(TM) examination. This group published the COBie Educational Curriculum and began offering the COBie Certified Professional exam in 2020. It is an in-depth, two-hour, 160-question examination. To support those interested in sitting for the exam, bSI also introduced a program to evaluate and register educational programs whose courses address the content found in the COBie Educational Curriculum.
In 2020, buildingSMART International's COBie Certification Subcommittee prepared an introductory "Foundation" level exam. This exam covers the basic facts about the US COBie specification and is available for any bSI chapter to implement. Unlike the COBie Certified Professional(TM) exam, bSI requires completion of an authorized training program and the use of a common COBie "book of knowledge". The bSI-authorized COBie book of knowledge was published in English in 2021. Translations into German, Portuguese, and Portuguese (Brazilian) are now underway.
buildingSMART International's COBie Certification Subcommittee considers these certification activities to be transitional, allowing bSI to support the increasingly widespread use of the US specification while a more widely acceptable, improved ISO-based replacement is produced.
ISO replacement
In 2020, buildingSMART International began a project to replace the US specification with an international standard. The project began by documenting the many lessons learned from the previous 15 years of use of the US specification and updated the original bSI Basic Facility Management Handover Model View Definition. By July 2020, this project had reached the approved activity proposal stage of the bSI standards process. A video about the purpose and content was also published. In 2021, after a delay of over a year due to a dispute regarding the naming rights of the future ISO, it was determined that bSI would no longer include the acronym COBie in its project.
With bSI no longer reliant on the previous US-specific name, its project has improved clarity, approach and scope. Given wide interest in delivering ISO-standards supporting many types of projects, not just buildings, bSI's strategic approach is to develop a set of ISO 16739 based specifications that will entirely replace the US specification (the project is called Facility Management Handover - Equipment Maintenance). The project goal is to directly support the handover of building equipment maintenance information while addressing references to objects outside of the building domain. For example, a potential future project, FM Handover - Tunnel Maintenance, would simply replace the IFC objects for buildings with those that describe maintainable items in the "Tunnel" domain. This strategy will also provide the ISO core for information uses needed not only for FM Handover but also for FM activities themselves. bSI planned to roll out this strategy in a series of position papers during 2022, and to enroll members in the project.
References
Data modeling
Computer-aided design
Construction
Building engineering
Building information modeling | COBie | Engineering | 1,497 |
37,253,906 | https://en.wikipedia.org/wiki/Bruce%20Reed%20%28mathematician%29 | Bruce Alan Reed FRSC is a Canadian mathematician and computer scientist, a former Canada Research Chair in Graph Theory at McGill University. His research is primarily in graph theory. He is a distinguished research fellow of the Institute of Mathematics in the Academia Sinica, Taiwan, and an adjunct professor at the University of Victoria in Canada.
Academic career
Reed earned his Ph.D. in 1986 from McGill, under the supervision of Vašek Chvátal. Before returning to McGill as a Canada Research Chair, Reed held positions at the University of Waterloo, Carnegie Mellon University, and the French National Centre for Scientific Research.
Reed was elected as a fellow of the Royal Society of Canada in 2009, and is the recipient of the 2013 CRM-Fields-PIMS Prize.
In 2021 he left McGill, and subsequently became a researcher at the Academia Sinica and an adjunct professor at the University of Victoria.
Research
Reed's thesis research concerned perfect graphs.
With Michael Molloy, he is the author of a book on graph coloring and the probabilistic method. Reed has also published highly cited papers on the giant component in random graphs with a given degree sequence, random satisfiability problems, acyclic coloring, tree decomposition, and constructive versions of the Lovász local lemma.
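The giant-component result with Molloy is often summarised by what is now called the Molloy–Reed criterion. In the form usually quoted in the random-graph literature (paraphrased here, with the technical regularity conditions of their proof omitted), a random graph whose degree distribution assigns a fraction p_k of vertices to degree k almost surely contains a giant component when

```latex
\sum_{k} k(k-2)\,p_k \;>\; 0
\qquad\Longleftrightarrow\qquad
\frac{\langle k^{2}\rangle}{\langle k\rangle} \;>\; 2,
```

where \langle k\rangle and \langle k^{2}\rangle denote the first and second moments of the degree distribution.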
He was an invited speaker at the International Congress of Mathematicians in 2002. His talk there concerned a proof by Reed and Benny Sudakov, using the probabilistic method, of a conjecture by Kyoji Ohba that graphs whose number of vertices and chromatic number are (asymptotically) within a factor of two of each other have equal chromatic number and list chromatic number.
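In symbols, with ch denoting the list chromatic (choice) number, the conjecture and the asymptotic form proved by Reed and Sudakov are usually stated as

```latex
\text{Ohba: } |V(G)| \le 2\chi(G)+1 \;\Rightarrow\; \operatorname{ch}(G)=\chi(G),
\qquad
\text{Reed–Sudakov: } |V(G)| \le \bigl(2-o(1)\bigr)\chi(G) \;\Rightarrow\; \operatorname{ch}(G)=\chi(G).
```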
Selected publications
Articles
Books
References
External links
Home page
Year of birth missing (living people)
Living people
Canadian computer scientists
Canadian mathematicians
Graph theorists
McGill University Faculty of Science alumni
Academic staff of the University of Waterloo
Carnegie Mellon University faculty
Academic staff of McGill University | Bruce Reed (mathematician) | Mathematics | 391 |
32,501,935 | https://en.wikipedia.org/wiki/Megaprojects%20and%20Risk | Megaprojects and Risk: An Anatomy of Ambition is a 2003 book by Bent Flyvbjerg, Nils Bruzelius, and Werner Rothengatter, published by Cambridge University Press.
Overview
"Megaprojects" is the term applied to multibillion-dollar infrastructure developments such as massive dams and bridges, and to elaborate railways and highways. The book identifies a "megaprojects paradox," pointing out that more of these projects are being implemented, but such projects typically perform very poorly, often with substantial cost overruns and market shortfalls.
Chapters two to four examine the Channel Tunnel, which opened in 1994 at a cost of £4.7 billion following cost overruns of 80 percent that nearly bankrupted several companies. Denver International Airport, which cost $5 billion, opened in 1995 following a 200 percent cost overrun, and passenger traffic in 1995 was only half that expected. There were also problems with Hong Kong's Chek Lap Kok Airport, opened in 1998, which initially had low revenues and negatively affected Hong Kong's economy.
According to the authors, the reason for such poor performances is that many of the participants in the process have incentives to underestimate costs, overestimate revenues, undervalue environmental impact, and overvalue economic development effects. The authors argue that central problems are lack of accountability and inappropriate risk sharing, which can be improved by reforming the institutional arrangements of decision making and by instituting accountability at the project development and evaluation stages.
Reception
According to Porter K. Wheeler, chief economist and director of transportation policy at Infrastructure Management Group, Inc., "this book makes an important contribution to understanding the infrastructure development process worldwide, with focus on megaprojects." The New Scientist wrote upon publication, "Love them or loathe them, megaprojects capture the imagination. [This] damning analysis concentrates on a series of financial nightmares that should bring even the most casual reader out in a sweat."
The Times of London gave Megaprojects and Risk a cover story and wrote, "Life is too short to read every tome penned by Scandinavian and German social scientists. But Megaprojects and Risk, written by Bent Flyvbjerg, Nils Bruzelius and Werner Rothengatter, is a cracker. In lurid and startling detail it examines dozens of vast construction schemes around the world."
The International Journal of Urban and Regional Research wrote, "a timely intervention ... Flyvbjerg et al. have presented us with something close to a manifesto that should really be in the hands of every planning minister, regional and city planner, journalist and activist involved in mega-project development ... by the end I was wishing that more social scientists had such a lightness of touch and precise use of illustration. ... highly insightful."
The Geographical Journal wrote, "this is the first [book] of its kind. It is concise and clear ... The subject matter is extremely interesting, timely and relevant. It sits within the tradition of ‘Victims of Groupthink’ and ‘Great Planning Disasters’ and attempts to find practical solutions. It is also highly suggestive of wider application beyond its immediate concerns."
The Financial Express of India found the book, "a perfect complement to the richly textured arguments closer to home of Arundhati Roy in her damning indictment of the Narmada Dam and the Sardar Sarovar Project in The Greater Common Good ... The authors [of Megaprojects and Risk] provide a peep-show into social psychology [with] a wealth of empirical evidence ... Do read this book."
See also
List of cancelled nuclear plants in the United States
When Technology Fails
Normal Accidents: Living with High-Risk Technologies
Northeast blackout of 2003
Brittle Power: Energy Strategy for National Security
Small Is Beautiful
Small Is Profitable
References
External links
2003 non-fiction books
2003 in the environment
Oil megaprojects
Risk management
Books about economic policy
Infrastructure | Megaprojects and Risk | Engineering | 817 |
20,002,862 | https://en.wikipedia.org/wiki/X-parameters | X-parameters are a generalization of S-parameters and are used for characterizing the amplitudes and relative phase of harmonics generated by nonlinear components under large input power levels. X-parameters are also referred to as the parameters of the Poly-Harmonic Distortion (PHD) nonlinear behavioral model.
Description
X-parameters represent a new category of nonlinear network parameters for high-frequency design, measured using nonlinear vector network analyzers (sometimes called large-signal network analyzers).
X-parameters are applicable to both large-signal and small-signal conditions, for linear and nonlinear components. They are an extension of S-parameters meaning that, in the limit of a small signal, X-parameters reduce to S-parameters.
They help overcome a key challenge in RF engineering, namely that nonlinear impedance differences, harmonic mixing, and nonlinear reflection effects occur when components are cascaded under large-signal operating conditions. This means that there is a nonlinear, and therefore non-trivial, relationship between the properties of the individual cascaded components and the composite properties of the resulting cascade. This situation is unlike that at DC, where one can simply add the values of resistors connected in series. X-parameters help solve this cascading problem: if the X-parameters of a set of components are measured individually, the X-parameters (and hence the nonlinear transfer function) of any cascade made from them can be calculated. Calculations based on X-parameters are usually performed within a harmonic balance simulator environment.
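In the polyharmonic distortion literature (following Verspecht and Root's formulation), the scattered wave B_{pm} at port p and harmonic m is obtained by spectrally linearising the device response around a single large-signal drive A_{11}; a representative form of the relation is

```latex
B_{pm} \;=\; X^{(F)}_{pm}\!\bigl(|A_{11}|\bigr)\,P^{\,m}
\;+\; \sum_{q,n} X^{(S)}_{pm,qn}\!\bigl(|A_{11}|\bigr)\,P^{\,m-n}\,A_{qn}
\;+\; \sum_{q,n} X^{(T)}_{pm,qn}\!\bigl(|A_{11}|\bigr)\,P^{\,m+n}\,A^{*}_{qn},
\qquad P = e^{\,j\varphi(A_{11})},
```

where the A_{qn} are the small incident waves at port q and harmonic n, and the functions X^{(F)}, X^{(S)} and X^{(T)} are the X-parameters. In the small-signal limit the X^{(T)} terms vanish and the X^{(S)} terms reduce to the ordinary S-parameter matrix, consistent with the reduction to S-parameters noted above.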
Development
X-parameters were introduced in 2008 as functionality included in the N5242A Nonlinear Vector Network Analyzer and the W2200 Advanced Design System, products developed by Agilent Technologies, whose electronic measurement business later became Keysight Technologies.
X-parameters are the parameters of the polyharmonic distortion modeling work of Dr. Jan Verspecht and Dr. David E. Root.
See also
Two-port network
Notes
External links
"Fundamentally Changing Nonlinear Microwave Design", by David Vye, Editor, Microwave Journal, Vol. 53, No. 3, March 2010, p. 22 (former location)
Electrical parameters
Two-port networks
Transfer functions | X-parameters | Engineering | 429 |
25,387,283 | https://en.wikipedia.org/wiki/Testing%20Maturity%20Model | The Testing Maturity Model (TMM) was based on the Capability Maturity Model, and first produced by the Illinois Institute of Technology.
Its aim is to be used in a similar way to CMM: to provide a framework for assessing the maturity of the test processes in an organisation, and so provide targets for improving maturity.
Each level from 2 upwards has a defined set of processes and goals, which lead to practices and sub-practices.
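The level/goal/practice hierarchy can be pictured with a minimal data-structure sketch in Python. The level names follow common summaries of Burnstein's model and the goal strings are abbreviated paraphrases; treat both as assumptions rather than quotations from the model itself.

```python
# Illustrative sketch of the TMM hierarchy: each maturity level from 2
# upwards owns goals, which in turn decompose into practices/sub-practices.
# Level names and goals are paraphrased assumptions, not normative text.
TMM_LEVELS = {
    1: {"name": "Initial", "goals": []},   # level 1 defines no goals
    2: {"name": "Phase Definition",
        "goals": ["Develop testing and debugging goals",
                  "Initiate a test planning process"]},
    3: {"name": "Integration",
        "goals": ["Establish a software test organisation",
                  "Integrate testing into the software life cycle"]},
    4: {"name": "Management and Measurement",
        "goals": ["Establish an organisation-wide test measurement program"]},
    5: {"name": "Optimization",
        "goals": ["Defect prevention", "Test process optimisation"]},
}

def assessment_targets(current_level: int) -> list[str]:
    """Return the goals an organisation must satisfy to reach the next level."""
    nxt = TMM_LEVELS.get(current_level + 1)
    return nxt["goals"] if nxt else []

print(assessment_targets(2))  # goals on the path from level 2 to level 3
```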
The TMM has since been superseded by the Test Maturity Model integration (TMMi), which is managed by the TMMi Foundation.
See also
Enterprise Architecture Assessment Framework
References
"Developing a Testing Maturity Model: Parts I and II", Ilene Burnstein, Taratip Suwannasart, and C. R. Carlson, Illinois Institute of Technology; first published in Crosstalk, August and September 1996 (the article is no longer in Crosstalk's online archives).
Software testing
Maturity models | Testing Maturity Model | Engineering | 190 |
464,877 | https://en.wikipedia.org/wiki/Information%20management | Information management (IM) is the appropriate and optimized capture, storage, retrieval, and use of information. It may be personal information management or organizational. Information management for organizations concerns a cycle of organizational activity: the acquisition of information from one or more sources, the custodianship and the distribution of that information to those who need it, and its ultimate disposal through archiving or deletion and extraction.
This cycle of information organisation involves a variety of stakeholders, including those who are responsible for assuring the quality, accessibility and utility of acquired information; those who are responsible for its safe storage and disposal; and those who need it for decision making. Stakeholders might have rights to originate, change, distribute or delete information according to organisational information management policies.
Information management embraces all the generic concepts of management, including the planning, organizing, structuring, processing, controlling, evaluating and reporting of information activities, all of which are needed in order to meet the needs of those with organisational roles or functions that depend on information. These generic concepts allow information to be presented to the correct audience; only once individuals are able to put that information to use does it gain its full value.
Information management is closely related to, and overlaps with, the management of data, systems, technology, processes and – where the availability of information is critical to organisational success – strategy. This broad view of the realm of information management contrasts with the earlier, more traditional view, that the life cycle of managing information is an operational matter that requires specific procedures, organisational capabilities and standards that deal with information as a product or a service.
History
Emergent ideas out of data management
In the 1970s, the management of information largely concerned matters closer to what would now be called data management: punched cards, magnetic tapes and other record-keeping media, involving a life cycle of such formats requiring origination, distribution, backup, maintenance and disposal. At this time the huge potential of information technology began to be recognised: for example a single chip storing a whole book, or electronic mail moving messages instantly around the world, remarkable ideas at the time. With the proliferation of information technology and the extending reach of information systems in the 1980s and 1990s, information management took on a new form. Progressive businesses such as BP transformed the vocabulary of what was then "IT management", so that "systems analysts" became "business analysts", "monopoly supply" became a mixture of "insourcing" and "outsourcing", and the large IT function was transformed into "lean teams" that began to allow some agility in the processes that harness information for business benefit. The scope of senior management interest in information at BP extended from the creation of value through improved business processes, based upon the effective management of information, permitting the implementation of appropriate information systems (or "applications") that were operated on IT infrastructure that was outsourced. In this way, information management was no longer a simple job that could be performed by anyone who had nothing else to do; it became highly strategic and a matter for senior management attention. An understanding of the technologies involved, an ability to manage information systems projects and business change well, and a willingness to align technology and business strategies all became necessary.
Positioning information management in the bigger picture
In the transitional period leading up to the strategic view of information management, Venkatraman, a strong advocate of this transition and transformation, proffered a simple arrangement of ideas that succinctly brought together the management of data, information, and knowledge, and argued that:
Data that is maintained in IT infrastructure has to be interpreted in order to render information.
The information in our information systems has to be understood in order to emerge as knowledge.
Knowledge allows managers to take effective decisions.
Effective decisions have to lead to appropriate actions.
Appropriate actions are expected to deliver meaningful results.
This is often referred to as the DIKAR model: Data, Information, Knowledge, Action and Result. It gives a strong clue as to the layers involved in aligning technology and organisational strategies, and it can be seen as a pivotal moment in changing attitudes to information management (a toy rendering of the chain follows below). The recognition that information management is an investment that must deliver meaningful results is important to all modern organisations that depend on information and good decision-making for their success.
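A toy rendering of that chain, written purely for illustration (none of the function names, data or thresholds comes from the literature), makes the layering concrete:

```python
# DIKAR as a chain of transformations: each step adds interpretation.
def interpret(data: list[float]) -> dict:        # Data -> Information
    return {"mean_daily_sales": sum(data) / len(data)}

def understand(info: dict) -> str:               # Information -> Knowledge
    return "demand is rising" if info["mean_daily_sales"] > 100 else "demand is flat"

def decide(knowledge: str) -> str:               # Knowledge -> Action
    return "increase stock" if knowledge == "demand is rising" else "hold stock"

def execute(action: str) -> str:                 # Action -> Result
    return f"executed: {action}"

raw_data = [90.0, 110.0, 130.0]                  # hypothetical daily sales
print(execute(decide(understand(interpret(raw_data)))))
```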
Theoretical background
Behavioural and organisational theories
It is commonly believed that good information management is crucial to the smooth working of organisations, and although there is no commonly accepted theory of information management per se, behavioural and organisational theories help. Following the behavioural science theory of management, mainly developed at Carnegie Mellon University and prominently supported by March and Simon, most of what goes on in modern organizations is actually information handling and decision making. One crucial factor in information handling and decision making is an individual's ability to process information and to make decisions under limitations that might derive from the context: a person's age, the situational complexity, or a lack of requisite quality in the information that is at hand – all of which is exacerbated by the rapid advance of technology and the new kinds of system that it enables, especially as the social web emerges as a phenomenon that business cannot ignore. And yet, well before there was any general recognition of the importance of information management in organisations, March and Simon argued that organizations have to be considered as cooperative systems, with a high level of information processing and a vast need for decision making at various levels. Instead of using the model of the "economic man", as advocated in classical theory, they proposed "administrative man" as an alternative, based on their argumentation about the cognitive limits of rationality. Additionally they proposed the notion of satisficing, which entails searching through the available alternatives until an acceptability threshold is met, another idea that still has currency.
Economic theory
In addition to the organisational factors mentioned by March and Simon, there are other issues that stem from economic and environmental dynamics. There is the cost of collecting and evaluating the information needed to take a decision, including the time and effort required. The transaction cost associated with information processes can be high. In particular, established organizational rules and procedures can prevent the taking of the most appropriate decision, leading to sub-optimum outcomes. This is an issue that has been presented as a major problem with bureaucratic organizations that lose the economies of strategic change because of entrenched attitudes.
Strategic information management
Background
According to the Carnegie Mellon School, an organization's ability to process information is at the core of organizational and managerial competency, and an organization's strategies must be designed to improve information processing capability; as information systems that provide that capability became formalised and automated, competencies were severely tested at many levels. It was recognised that organisations needed to be able to learn and adapt in ways that were never so evident before, and academics began to organise and publish definitive works concerning the strategic management of information and information systems. Concurrently, the ideas of business process management and knowledge management emerged, although much of the optimistic early thinking about business process redesign has since been discredited in the information management literature. In the strategic studies field, understanding the information environment is considered of the highest priority; it is conceived as the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions which continuously interact with individuals, organizations, and systems: the physical, the informational, and the cognitive.
Aligning technology and business strategy with information management
Venkatraman has provided a simple view of the requisite capabilities of an organisation that wants to manage information well – the DIKAR model (see above). He also worked with others to understand how technology and business strategies could be appropriately aligned in order to identify specific capabilities that are needed. This work was paralleled by other writers in the world of consulting, practice, and academia.
A contemporary portfolio model for information
Bytheway has collected and organised basic tools and techniques for information management in a single volume. At the heart of his view of information management is a portfolio model that takes account of the surging interest in external sources of information and the need to organise unstructured external information so as to make it useful.
Such an information portfolio as this shows how information can be gathered and usefully organised, in four stages:
Stage 1: Taking advantage of public information: recognise and adopt well-structured external schemes of reference data, such as post codes, weather data, GPS positioning data and travel timetables, exemplified in the personal computing press.
Stage 2: Tagging the noise on the World Wide Web: use existing schemes such as post codes and GPS data, or, more typically, add "tags", or construct a formal ontology that provides structure. Shirky provides an overview of these two approaches.
Stage 3: Sifting and analysing: in the wider world the generalised ontologies that are under development extend to hundreds of entities and hundreds of relations between them and provide the means to elicit meaning from large volumes of data. Structured data in databases works best when that structure reflects a higher-level information model – an ontology, or an entity-relationship model.
Stage 4: Structuring and archiving: with the large volume of data available from sources such as the social web and from the miniature telemetry systems used in personal health management, new ways to archive and then trawl data for meaningful information are needed. Map-reduce methods, originating from functional programming, are a more recent way of eliciting information from large archival datasets that is becoming interesting to regular businesses that have very large data resources to work with, but it requires advanced multi-processor resources (a toy illustration follows this list).
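As a toy illustration of the map-reduce pattern mentioned in Stage 4, the following self-contained Python fragment counts word frequencies with an explicit map step (independent per-document partial counts) and reduce step (merging the partials); real deployments distribute both steps across many processors:

```python
from functools import reduce
from collections import Counter

documents = [
    "information management needs structure",
    "structure turns data into information",
]

# Map: each document independently yields its own partial word counts.
mapped = [Counter(doc.split()) for doc in documents]

# Reduce: merge the partial counts pairwise into a single global count.
totals = reduce(lambda a, b: a + b, mapped, Counter())

print(totals["information"])  # -> 2
```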
Competencies to manage information well
In 2004, the management system "Information Management Body of Knowledge" was first published on the World Wide Web and set out to show that the required management competencies to derive real benefits from an investment in information are complex and multi-layered. The framework model that is the basis for understanding competencies comprises six "knowledge" areas and four "process" areas:
The information management knowledge areas
The IMBOK is based on the argument that there are six areas of required management competency, two of which ("business process management" and "business information management") are very closely related.
Information technology: The pace of change of technology and the pressure to constantly acquire the newest technological products can undermine the stability of the infrastructure that supports systems, and thereby optimises business processes and delivers benefits. It is necessary to manage the "supply side" and recognise that technology is, increasingly, becoming a commodity.
Information system: While historically information systems were developed in-house, over the years it has become possible to acquire most of the software systems that an organisation needs from the software package industry. However, there is still the potential for competitive advantage from the implementation of new systems ideas that deliver to the strategic intentions of organisations.
Business processes and Business information: Information systems are applied to business processes in order to improve them, and they bring data to the business that becomes useful as business information. Business process management is still seen as a relatively new idea because it is not universally adopted, and it has been difficult in many cases; business information management is even more of a challenge.
Business benefit: What are the benefits that we are seeking? It is necessary not only to be brutally honest about what can be achieved, but also to ensure the active management and assessment of benefit delivery. Since the emergence and popularisation of the Balanced scorecard there has been huge interest in business performance management but not much serious effort has been made to relate business performance management to the benefits of information technology investments and the introduction of new information systems until the turn of the millennium.
Business strategy: Although a long way from the workaday issues of managing information in organisations, strategy in most organisations simply has to be informed by information technology and information systems opportunities, whether to address poor performance or to improve differentiation and competitiveness. Strategic analysis tools such as the value chain and critical success factor analysis are directly dependent on proper attention to the information that is (or could be) managed.
The information management processes
Even with full capability and competency within the six knowledge areas, it is argued that things can still go wrong. The problem lies in the migration of ideas and information management value from one area of competency to another. Summarising what Bytheway explains in some detail (and supported by selected secondary references):
Projects: Information technology is without value until it is engineered into information systems that meet the needs of the business by means of good project management.
Business change: The best information systems succeed in delivering benefits through the achievement of change within the business systems, but people do not appreciate change that makes new demands upon their skills in the ways that new information systems often do. Contrary to common expectations, there is some evidence that the public sector has succeeded with information technology induced business change.
Business operations: With new systems in place, with business processes and business information improved, and with staff finally ready and able to work with new processes, then the business can get to work, even when new systems extend far beyond the boundaries of a single business.
Performance management: Investments are no longer solely about financial results, financial success must be balanced with internal efficiency, customer satisfaction, and with organisational learning and development.
Summary
There are always many ways to see a business, and the information management viewpoint is only one way. Other areas of business activity will also contribute to strategy – it is not only good information management that moves a business forwards. Corporate governance, human resource management, product development and marketing will all have an important role to play in strategic ways, and we must not see one domain of activity alone as the sole source of strategic success. On the other hand, corporate governance, human resource management, product development and marketing are all dependent on effective information management, and so in the final analysis our competency to manage information well, on the broad basis that is offered here, can be said to be predominant.
Operationalising information management
Managing requisite change
Organizations are often confronted with many information management challenges and issues at the operational level, especially when organisational change is engendered. The novelty of new systems architectures and a lack of experience with new styles of information management require a level of organisational change management that is notoriously difficult to deliver. As a result of a general organisational reluctance to change, to enable new forms of information management, there might be (for example): a shortfall in the requisite resources, a failure to acknowledge new classes of information and the new procedures that use them, a lack of support from senior management leading to a loss of strategic vision, and even political manoeuvring that undermines the operation of the whole organisation. However, the implementation of new forms of information management should normally lead to operational benefits.
Galbraith's early work
In early work, taking an information processing view of organisation design, Jay Galbraith has identified five tactical areas to increase information processing capacity and reduce the need for information processing.
Developing, implementing, and monitoring all aspects of the "environment" of an organization.
Creation of slack resources so as to decrease the load on the overall hierarchy of resources and to reduce information processing relating to overload.
Creation of self-contained tasks with defined boundaries and that can achieve proper closure, and with all the resources at hand required to perform the task.
Recognition of lateral relations that cut across functional units, so as to move decision power to the process instead of fragmenting it within the hierarchy.
Investment in vertical information systems that route information flows for a specific task (or set of tasks) in accordance to the applied business logic.
Matrix organisation
The lateral relations concept leads to an organizational form that is different from the simple hierarchy, the "matrix organization". This brings together the vertical (hierarchical) view of an organisation and the horizontal (product or project) view of the work that it does, which is visible to the outside world. The creation of a matrix organization is one management response to a persistent fluidity of external demand, avoiding multifarious and spurious responses to episodic demands that tend to be dealt with individually.
See also
Balanced scorecard
Business process
Content management
Data management
Information excellence
Information Management Body of Knowledge
Information Resources Management Journal
Information system
Information technology
Journal of Global Information Management
Knowledge management
Master of Information Management
Project management
Records management
Strategic management
References
External links
Information
Information systems
Works about information | Information management | Technology | 3,346 |
3,596,573 | https://en.wikipedia.org/wiki/Python%20Software%20Foundation%20License | The Python Software Foundation License (PSFL) is a BSD-style, permissive software license which is compatible with the GNU General Public License (GPL). Its primary use is for distribution of the Python project software and its documentation. Since the license is permissive, it allows proprietization of the derivations. The PSFL is listed as approved on both FSF's approved licenses list, and OSI's approved licenses list.
This license is also known as "Python License 2.0.1".
In 2000, Python (specifically version 2.1) was briefly available under the Python License, which is incompatible with the GPL. The reason given for this incompatibility by Free Software Foundation was that "this Python license is governed by the laws of the 'State of Virginia', in the USA", which the GPL does not permit.
Guido van Rossum, Python's creator, was awarded the 2001 Free Software Foundation Award for the Advancement of Free Software for changing the license to fix this incompatibility.
See also
Python Software Foundation
Software using the Python Software Foundation license (category)
References
External links
The Python Software Foundation License
Free and open-source software licenses
Python (programming language) | Python Software Foundation License | Engineering | 254 |
33,987,434 | https://en.wikipedia.org/wiki/Ubiquiti | Ubiquiti Inc. (formerly Ubiquiti Networks, Inc.) is an American technology company founded in San Jose, California, in 2003. Now based in New York City, Ubiquiti manufactures and sells wireless data communication and wired products for enterprises and homes under multiple brand names. On October 13, 2011, Ubiquiti had its initial public offering (IPO) at 7.04 million shares, at $15 per share, raising $30.5 million.
Products
Ubiquiti's first product line was its "Super Range" mini-PCI radio card series, which was followed by other wireless products.
The company's Xtreme Range (XR) cards operated on non-standard IEEE 802.11 bands, which reduced the impact of congestion in the 2.4 GHz and 5.8 GHz bands. In August 2007 a group of Italian amateur radio operators set a distance world record for point-to-point links in the 5.8 GHz spectrum. Using two XR5 cards and a pair of 35 dBi dish antennas, the Italian team was able to establish a 304 km (about 188 mi) link at data rates between 4 and 5 Mbit/s.
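A rough free-space link budget shows why such a record link is physically plausible. Using the standard free-space path-loss formula (distance d in km, frequency f in MHz), and assuming for illustration a transmit power of about 28 dBm (the actual radio settings of the record attempt are not given here):

```latex
\mathrm{FSPL} \;=\; 20\log_{10} d + 20\log_{10} f + 32.45
\;=\; 20\log_{10} 304 + 20\log_{10} 5800 + 32.45 \;\approx\; 157\ \mathrm{dB}.
```

With 35 dBi of antenna gain at each end, the received level is on the order of 28 + 35 + 35 - 157 ≈ -59 dBm, far above the receive sensitivity (typically quoted around -90 dBm) of the lowest-rate OFDM modulations, which is consistent with the modest 4–5 Mbit/s throughput achieved.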
The company (under its "Ubiquiti Labs" brand) also manufactures a home-oriented wireless mesh network router and access point combination device, as a consumer-level product called AmpliFi.
Brands
Ubiquiti product lines include UniFi, AmpliFi, EdgeMax, UISP, airMAX, airFiber, GigaBeam, and UFiber.
Their best-known product line is UniFi, which focuses on home, prosumer, and business wired and wireless networking, in addition to IP cameras, physical access control systems, and VoIP phones. The EdgeMax product line is dedicated to wired networking, containing only routers and switches. UISP, announced in 2020, is a range of products for internet service providers.
airMAX is a product line dedicated to creating point-to-point (PtP) and point-to-multi-point (PtMP) links between networks. airFiber and UFiber are used by wireless and fiber Internet service providers (ISP) respectively.
Software products
Ubiquiti develops a variety of software controllers for their various products including access points, routers, switches, cameras, and locks. These controllers manage all connected devices and provide a single point for configuration and administration. The software is included as part of UniFi OS, an operating system that runs on devices called UniFi OS Consoles (UniFi Dream Machine, Dream Wall, Dream Router, Cloud Key).
The UniFi Network controller can alternatively be installed on Linux, FreeBSD, macOS, or Windows, while the other applications included with UniFi OS such as UniFi Protect and UniFi Access must be installed on a UniFi OS Console device.
WiFiman is an internet speed test and network analyzer tool that is integrated into most Ubiquiti products. It has mobile apps and a web version.
Security issues
U-Boot configuration extraction
In 2013, a security issue was discovered in the version of the U-Boot boot loader shipped on Ubiquiti's devices. It was possible to extract the plaintext configuration from the device without leaving a trace using Trivial File Transfer Protocol (TFTP) and an Ethernet cable, revealing information such as passwords.
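The underlying protocol weakness is easy to see: TFTP (RFC 1350) has no authentication at all, so reading a file is a single small UDP datagram. The sketch below shows the shape of such a read request against a placeholder host and a hypothetical filename; it illustrates the protocol generally and is not a reproduction of the Ubiquiti-specific issue.

```python
import socket

# RFC 1350 read request (RRQ): 2-byte opcode 1, then NUL-terminated
# filename and transfer mode. Host and filename here are placeholders;
# the point is that no credential of any kind is required.
HOST, PORT = "192.0.2.1", 69  # TEST-NET documentation address
rrq = b"\x00\x01" + b"config.txt" + b"\x00" + b"octet" + b"\x00"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(rrq, (HOST, PORT))
try:
    data, _ = sock.recvfrom(516)  # first DATA packet: opcode 3 + block number
    print("opcode:", data[1], "block:", int.from_bytes(data[2:4], "big"))
except socket.timeout:
    print("no TFTP server answered (expected for the placeholder host)")
```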
While the issue is fixed in current versions of Ubiquiti hardware, Ubiquiti long refused to provide the source code for the GNU General Public License (GPL)-licensed U-Boot, despite many requests and its acknowledgment that it was using a GPL-protected application. This made it impractical for Ubiquiti's customers to fix the issue themselves. The GPL-licensed code was eventually released.
Upatre Trojan
It was reported by online reporter Brian Krebs, on June 15, 2015, that "Recently, researchers at the Fujitsu Security Operations Center in Warrington, UK began tracking [the] Upatre [trojan software] being served from hundreds of compromised home routers – particularly routers powered by MikroTik and Ubiquiti's airOS".
Bryan Campbell of the Fujitsu Security Operations Center in Warrington, UK was reported as saying: "We have seen literally hundreds of wireless access points and routers connected in relation to this botnet, usually AirOS ... The consistency in which the botnet is communicating with compromised routers in relation to both distribution and communication leads us to believe known vulnerabilities are being exploited in the firmware which allows this to occur."
2021 alleged data breach and lawsuit
In January 2021, a potential data breach of cloud accounts was reported, with customer credentials having potentially been exposed to an unauthorized third party.
In March 2021 security blogger Brian Krebs reported that a whistleblower disclosed that Ubiquiti's January statement downplayed the extent of the data breach in an effort to protect the company's stock price. Furthermore, the whistleblower claimed that the company's response to the breach put the security of its customers at risk. Ubiquiti responded to Krebs's reporting in a blog post, stating that the attacker "never claimed to have accessed any customer information" and "unsuccessfully attempted to extort the company by threatening to release stolen source code and specific IT credentials." Ubiquiti further wrote that they "believe that customer data was not the target of, or otherwise accessed in connection with, the incident."
On December 1, 2021, the United States Attorney for the Southern District of New York charged a former high-level employee of Ubiquiti for data theft and wire fraud, alleging that the "data breach" was in fact an inside job aimed at extorting the company for millions of dollars. The indictment also claimed that the employee caused further damage "by causing the publication of misleading news articles about the company’s handling of the breach that he perpetrated, which were followed by a significant drop in the company’s share price associated with the loss of billions of dollars in its market capitalization." The Verge reported that the indictment shed new light on the supposed breach and appeared to back up Ubiquiti's statement that no customer data was compromised.
In March 2022, Ubiquiti filed a lawsuit against Brian Krebs, alleging defamation for his reporting on their security issues. Both parties resolved their dispute outside the court in September 2022.
Legal difficulties
United States sanctions against Iran
In March 2014, Ubiquiti agreed to pay $504,225 to the Office of Foreign Assets Control after it allegedly violated U.S. sanctions against Iran.
Open-source licensing compliance
In 2015, Ubiquiti was accused of violating the terms of the GPL license for open-source code used in their products. The original source of the complaint updated their website on May 24, 2017, when the issue was resolved. In 2019, Ubiquiti was reported as again being in violation of the GPL.
Other
In 2015, Ubiquiti revealed that it lost $46.7 million when its finance department was tricked into sending money to someone posing as an employee.
References
External links
Companies based in New York City
Electronics companies established in 2005
Companies listed on the Nasdaq
Networking companies of the United States
Networking hardware companies
Computer companies of the United States
Computer hardware companies
2005 establishments in California
American companies established in 2005 | Ubiquiti | Technology | 1,552 |
48,400,446 | https://en.wikipedia.org/wiki/Droid%20Turbo%202 | The Moto X Force is a high-end Android smartphone made by Motorola Mobility. Inside the United States, it is branded as the Droid Turbo 2, available exclusively in the United States for the Verizon Droid brand. It was released on October 27, 2015. The phone is marketed as having "the world's first shatterproof screen."
The Droid Turbo 2 was one of the last smartphones to carry the Droid branding with Verizon.
Hardware
The Moto X Force features an octa-core Snapdragon 810 processor (with clusters clocked at 2 GHz and 1.5 GHz), 3 gigabytes of RAM, and 32 or 64 gigabytes of internal storage, expandable up to 2 terabytes with a MicroSD card. It has a 5.4-inch AMOLED display; a 3,760 mAh battery with support for Motorola's TurboPower and Qualcomm Quick Charge 2.0 as well as the PMA and Qi wireless charging standards; a 21-megapixel main camera; and a 5 MP front camera with a flash.
This was the first Droid phone to be customizable through Motorola's Motomaker service, which allows customers to select various materials such as soft touch plastic, Ballistic Nylon from the original Droid Turbo, pebbled Horween leather and Saffiano leather, as well as aluminum frame, screen bezel and accent colors. The service also allows users to add a custom greeting which shows up during the Moto X Force's initial startup screen, and customers who choose the pebbled leather backs can also get a custom engraving. The 64 GB version comes with a free "Design Refresh" which allows owners to trade in their phone for a newly designed one within a year of the original purchase date. This is also the first Droid device since the Motorola Droid X2 that does not feature a DuPont Kevlar backing, a Motorola Droid trademark that started with the original Droid Razr.
Shatterproof screen
This is the first Motorola smartphone that features Motorola's "ShatterShield" technology, which consists of two touchscreen elements, reinforced by an internal aluminum frame to make it resistant to bending and cracking, although this does not protect against scratches or other superficial screen damage. The top layer of the display is designed to be user-replaceable. The screen and case also have a water repellent nanocoating to protect the device from liquids that could damage internal components.
Variants
There are three main models of the Droid Turbo 2 or Moto X Force.
References
External links
Mobile phones introduced in 2015
Motorola smartphones
Android (operating system) devices
Mobile phones with infrared transmitter
Discontinued flagship smartphones | Droid Turbo 2 | Technology | 559 |
66,345,698 | https://en.wikipedia.org/wiki/EyeHarp | The EyeHarp is an electronic musical instrument controlled by the player's eye or head movements. It combines eye tracking hardware and specially designed software, which has one component for defining chords and arpeggios, and another to change those definitions and play melodies. People with severely impaired motor function can use this instrument to play music or as an aid to learning or composition.
History
The idea for the EyeHarp was born in 2010 when a friend of musician and computer scientist Zacharias Vamvakousis was involved in a serious motorcycle accident which left him quadriplegic. Vamvakousis noticed a distinct lack of accessible musical instruments for people with disabilities, so he began designing the EyeHarp to create opportunities for people with physical disabilities to make music. The development of the EyeHarp started in 2011 in Barcelona under the auspices of Pompeu Fabra University.
In 2019, Vamvakousis founded the EyeHarp association, a non-profit organisation which works to give people with disabilities access to cheap musical education and assistive technology.
See also
Disability in the arts
References
External links
The EyeHarp as covered by ERT, Greece's public broadcaster
The EyeHarp Organisation
The EyeHarp at Pompeu Fabra University
Electronic musical instruments
Electronic
Audio engineering | EyeHarp | Engineering | 264 |
61,234,353 | https://en.wikipedia.org/wiki/Wyze%20Labs | Wyze Labs, Inc. (formerly Wyzecam), also known as Wyze, is an American technology company based in Seattle, Washington, that specializes in smart home products and wireless cameras. It is a start-up of former Amazon employees.
History
Wyze was incorporated on July 19, 2017, by four co-founders: CEO Yun Zhang, CPO Dongsheng Song, CMO Dave Crosby, and Elana Fishman. Wyze released its first product, the WyzeCam v1, on October 24, 2017. Shortly after, the Wyze Cam V2 was announced on February 18, 2018, and (due to a hardware defect) started shipping in early April of the same year. By October 24, 2018, Wyze had sold 1 million units of the Wyze Cam. On January 31, 2019, Wyze announced a $20 million investment from Norwest Venture Partners. In March 2020, when the COVID-19 pandemic hit, Wyze was on the verge of shutting down; the company survived by reducing the amount it originally wanted to raise from $50 million to $10 million and by adding the Cam Plus subscription plan, among other measures. At the end of 2022, Wyze announced 2023 as the "year of the camera" and had released three new cameras as of March 2, 2023.
Disputes and security concerns
In 2019 Sensormatic, a wholly-owned subsidiary of Johnson Controls, sued Wyze, alleging seven patent violations. Wyze had prevailed in the lawsuit as of September 2020.
In December 2019, the company acknowledged that a server leak had exposed the details of roughly 2.4 million customers. The company's response included logging all users out of their accounts, requiring all users to reauthenticate.
In 2021, Xiaomi submitted a report to Amazon alleging that Wyze had infringed upon its 2019 "Autonomous Cleaning Device and Wind Path Structure of Same" robot vacuum patent. On July 15, 2021, Wyze filed a lawsuit against Xiaomi in the U.S. District Court for the Western District of Washington, arguing that prior art exists and asking the court for a declaratory judgment that Xiaomi's 2019 robot vacuum patent is invalid.
In 2022, security firm Bitdefender announced that Wyze had discontinued WyzeCam v1 because of a security vulnerability that Bitdefender had reported to Wyze three years before, which is an unusually long time for a vulnerability to go unreported to the public. Wyze did not make any public announcement about the vulnerability.
On September 8, 2023, some users reported on Reddit that the web viewer was showing camera feeds that were not their own. A Wyze spokesperson said this was due to a web caching issue.
In February 2024, about 13,000 Wyze home security customers were shown someone else’s home.
References
External links
Smart home hubs
Companies based in Seattle
2017 establishments in Washington (state)
Video surveillance companies
Home automation companies
Heating, ventilation, and air conditioning companies
Headphones manufacturers
Lighting brands
Internet of things companies
IOS software
Android (operating system) software
43,028,463 | https://en.wikipedia.org/wiki/List%20of%20botanists%20by%20author%20abbreviation%20%28I%E2%80%93J%29 |
I
I.A.Abbott – Isabella Aiona Abbott (1919–2010)
I.A.Pilát – Ignatz Anton Pilát (1820–1870)
I.Baker – Irene Baker (1918–1989)
I.Barua – Iswar Chandra Barua (born 1960)
I.Bjørnstadt (also Nordal) – Inger Nordal (born 1944)
Ickert-Bond – Stefanie M. Ickert-Bond (fl. 2001–2013)
I.C.Martind. – Isaac Comly Martindale (1842–1893)
I.C.Nielsen – Ivan Christian Nielsen (1946–2007)
Ida – Reijiro Ida (fl. 1969)
I.Deg. – Isa Degener (1924–2018)
I.D.Illar. – Irina D. Illarionova (fl. 2008)
Ietsw. – Jaan H. Ietswaart (born 1940)
I.F.Lewis – Ivey Foreman Lewis (1882–1964)
I.G.Stone – Ilma Grace Stone (1913–2001)
I.Gut. – Rosa Ivonne Gutiérrez-Sánchez (fl. 2018)
I.Hagen – Ingebrigt Severin Hagen (1852–1917)
I.H.Boas – Isaac Herbert Boas (1878–1955)
I.Hedberg – Inga Hedberg (1927–2024)
I.I.Abramov – (1912–1990)
Iinuma – Yokusai Iinuma (1782–1865)
I.Keller – Ida Augusta Keller (1866–1932)
Ikonn. – (1931–2005)
Ik.Takah. – Ikuro Takahashi (1892–1981)
Iliff – James Iliff (1923–2014)
Iljin – (1889–1967)
Iljinsk. – Irina Alekseevna Iljinskaja (1921–2011)
I.Löw – Immanuel Löw (1854–1944)
Iltis – Hugh Hellmut Iltis (1925–2016)
Imbach – Emil J. Imbach (1897–1970)
I.M.Haring – Inez M. Haring (1875–1968)
I.M.Johnst. – Ivan Murray Johnston (1898–1960)
I.M.Liddle – Iris M. Liddle (fl. 1995)
Immelman – Kathleen Leonore Immelman (born 1955)
I.M.Oliv. – Inge Magdalene Oliver (1947–2003)
I.M.Turner – Ian Mark Turner (born 1963)
Incarv. – Pierre Nicolas le Chéron (d')Incarville (1706–1757)
Ingold – Cecil Terence Ingold (1905–2010)
Ingram – Collingwood Ingram (1880–1981)
Ingw. – Walter Edward Theodore Ingwersen (1885–1960)
Inocencio – (born 1969)
Inoue – Inoue Hiroshi (1932–1989)
I.O.Cook – Ian O. Cook (fl. 1991)
I.Oliv. – Ian Oliver (born 1954)
I.Piña – Ignacio Piña Luján (fl. 1980)
I.Pop – Ioan Pop (1922–2018)
I.Rácz – István Rácz (born 1952)
Irmsch. – (1887–1968)
Irwin – James Bruce Irwin (1921–2012)
Isaac – Frances Margaret Leighton (later Isaac) (1909–2006) (F.M.Leight. is also used)
I.Sastre – Ines Sastre (born 1955)
Isawumi – Moses A. Isawumi (fl. 1995)
I.S.Chen – Ih Sheng Chen (fl. 1972)
Isely – Duane Isely (1918–2000)
I.Sinclair – Isabella Sinclair (1842–1900)
Ising – (1884–1973)
I.S.Nelson – Ira Schreiber Nelson (1911–1965)
I.Sprague – Isaac Sprague (1811–1895)
I.Telford – Ian R.H. Telford (born 1941)
I.Thomps. – Ian R. Thompson (fl. 1996)
Ito – Keisuke Ito (1803–1901)
I.Verd. – Inez Clare Verdoorn (1896–1989)
Iversen – Johannes Iversen (1904–1972)
Ives – Joseph Christmas Ives (1828–1868)
I.W.Bailey – Irving Widmer Bailey (1884–1967)
I.W.Hutchison – Isobel Wylie Hutchison (1889–1982)
I.Williams – (1912–2001)
J
J.A.Armstr. – James Andrew Armstrong (born 1950)
Jaaska – Vello Jaaska (born 1936)
Jabbour – Florian Jabbour (fl. 2011)
Jack – William Jack (1795–1822)
Jackes – Betsy Rivers Jackes (born 1935)
Jacks. – George Jackson (1780–1811)
J.A.Clark – Josephine Adelaide Clark (1856–1929)
Jac.M.Burke – Jacinta Marie Burke (fl. 2006)
Jacobi – (1805–1874)
Jacobs – Maxwell Ralph Jacobs (1905–1979)
Jacobsen – Hans Jacobsen (1815–1891)
Jacobsson – Stig Jacobsson (born 1938)
Jacq. – Nikolaus Joseph von Jacquin (1727–1817)
Jacq.-Fél. – (born 1907)
Jacquem. – Venceslas Victor Jacquemont (1801–1832)
Jacques – Henri Antoine Jacques (1782–1866)
Jacquinot – Honoré Jacquinot (1814–1887)
Jacz. – Arthur Arthurovič Jaczewski (1863–1932)
Jaderh. – Axel Elof Jaderholm (1868–1927)
Jafri – Saiyad Masudal Hasan Jafri (1927–1986)
J.Agardh – Jacob Georg Agardh (1813–1901)
J.A.Guim. – (1862–1922)
Jahand. – Émile Jahandiez (1876–1938)
Jakubz. – Moisej Markovič Jakubziner (1898–1979)
Jalal – Jeewan Singh Jalal (born 1979)
Jalas – Arvo Jaakko Juhani Jalas (1920–1999)
Jalink – Leonardo Martinus Jalink (born 1956)
Jallad – Walid Jallad (fl. 1975)
J.Allam. – Jean-Nicolas-Sébastien Allamand (1713 or 1716–1787 or 1793)
J.A.Martind. – Joseph Anthony Martindale (1837–1914)
J.A.McDonald – (fl. 1982)
James – Thomas Potts James (1803–1882)
Jameson – William Jameson (1796–1873)
J.A.Muir – John A. Muir (fl. 1973)
Janch. – Erwin Emil Alfred Janchen (1882–1970)
Jančovič. – Soňa Jančovičová (fl. 2011)
Jancz. – Edward Janczewski (1846–1918)
J.Anderson – James Anderson (fl. 1868)
J.Andrews – J. Andrews (fl. 1952)
Janes – Jasmine K. Janes (fl. 2010)
Janisch. – (1875–1944)
Janka – Victor von Janka (1837–1900)
Janošík – Lukáš Janošík
Janse – Johannes Albertus Janse (1911–1977)
J.Anthony – John Anthony (1891–1972)
J.A.Nyberg – Jane A. Nyberg
J.A.Palmer – Julius Auboineau Palmer (1840–1899)
Jarman – S. Jean Jarman
J.A.Schmidt – Johann Anton Schmidt (1823–1905)
Játiva – (fl. 1963)
Jaub. – Hippolyte François Jaubert (1798–1874)
Jáv. – Sándor (Alexander) Jávorka (1883–1961)
J.Bauhin – Johann Bauhin (1541–1613)
J.B.Beck – James B. Beck (fl. 2010)
J.B.Comber – James Boughtwood Comber (1929–2005)
J.B.Fisch. – Johann Baptist Fischer (1803–1832)
J.B.Kirkp. – (born 1946)
J.B.Mackay – John Bain Mackay (1795–1888)
J.B.Nelson – John B. Nelson (born 1951)
J.B.Petersen – Johannes Boye Petersen (1887–1961)
J.B.Phipps – (born 1934)
J.Bradbury – John Bradbury (1768–1823)
J.Bradshaw – John Bradshaw (1863–1939)
J.Breitenb. – Josef Breitenbach (1927–1998)
J.B.Rohr – (1686–1742)
J.B.Sinclair – James Burton Sinclair (born 1927)
J.Buchholz – John Theodore Buchholz (1888–1951)
J.B.Williams – John Beaumont Williams (1932–2005)
J.Carey – John Carey (1797–1880)
J.C.Clausen – Jens Christen (Christian) Clausen (1891–1969)
J.C.Costa – José Carlos Augusta da Costa (born 1955)
J.C.F.Hopkins – John Collier Frederick Hopkins (1898–1981)
J.C.Fisch. – Johann Carl Fischer (1804–1885)
J.C.Gomes – José Corrêa Gomes, Jr. (1919–1965)
J.C.Hickman – James Craig Hickman (1941–1993)
J.Clayton – John Clayton (1694–1773)
J.C.Manning – John Charles Manning (born 1962)
J.C.Martínez – J. Carlos Martínez Macchiavello (born 1931)
J.C.Mikan – Johann Christian Mikan (1769–1844)
J.C.Nelson – James Carlton Nelson (1867–1944)
J.Commelijn – Jan Commelin (also known by Jan Commelijn or Johannes Commelinus) (1629–1692)
J.Compton – James A. Compton (born 1953)
J.C.Parks – James C. Parks (1942–2002)
J.C.Prag. – Jan C. Prager (born 1934)
J.C.Ross – James Clark Ross (1800–1862)
J.C.Siqueira – Josafá Carlos de Siqueira (born 1953)
J.C.Sowerby – James de Carle Sowerby (1787–1871)
J.C.Vogel – Johannes Vogel (born 1963)
J.C.Wendl. – Johann Christoph Wendland (1755–1828)
J.D.Arm. – James D. Armitage (born 1976)
J.D.Bacon – John Dudley Bacon (born 1943)
J.D.Briggs – John D. Briggs (fl. 1994)
J.D.Mitch. – John D. Mitchell (fl. 1993)
J.Dransf. – John Dransfield (born 1945)
J.D.Ray – James Davis Ray Jr. (1918–1990)
J.Drumm. – James Drummond (1784–1863)
J.D.Sauer – Jonathan Deininger Sauer (1918–2008)
J.D.Schultze – Johannes Dominik Schultze (1752–1790)
J.Duan – Jun Duan (fl. 2010)
J.E.Alexander – James Edward Alexander (1803–1885)
Jeanes – Jeffrey A. Jeanes (fl. 2000)
Jean White – Jean White-Haney (1877–1953)
Jebb – Matthew H. P. Jebb (born 1958)
J.E.Br. – John Ednie Brown (1848–1899)
Jefferies – R.L. Jefferies (fl. 1987)
Jeffrey – (1866–1943)
J.E.Gray – John Edward Gray (1800–1875)
Jekyll – Gertrude Jekyll (1843–1932)
J.E.Lange – Jakob Emanuel Lange (1864–1941) (father of Knud Morten Lange)
J.Ellis – John Ellis (1710–1776)
J.E.Palmér – Johan Ernst Palmér (1863–1946)
J.E.Pohl – Johann Ehrenfried Pohl (1746–1800)
Jeps. – Willis Linn Jepson (1867–1946)
Jérémie – Joël Jérémie (born 1944)
J.E.Reid – Jordan E. Reid (fl. 2009)
J.E.Sowerby – John Edward Sowerby (1825–1870)
Jess. – Karl Friedrich Wilhelm Jessen (1821–1889)
Jessop – (born 1939)
Jessup – Laurence W. Jessup (born 1947)
Jeswiet – Jacob Jeswiet (1879–1966)
Jeuken – M. Jeuken (fl. 1952)
J.Everett – Joy Everett (born 1953)
J.E.Vidal – (1914–2020)
J.E.Wright – Jorge Eduardo Wright (1922–2005)
J.E.Zetterst. – (1828–1880)
J.Fabr. – Johan Christian Fabricius (1745–1808)
J.F.Bailey – John Frederick Bailey (1866–1938)
J.F.Clark – Judson Freeman Clark (1870–1942)
J.F.Cowell – John Francis Cowell (1852–1915)
J.F.Gmel. – Johann Friedrich Gmelin (1748–1804)
J.Fisch. – Jacob Benjamin Fischer (1730–1793)
J.-F.Leroy – (1915–1999)
J.Florence – Jacques Florence (born 1951)
J.F.Macbr. – James Francis Macbride (1892–1976)
J.F.Matthews – James F. Matthews (born 1935)
J.F.Maxwell – James F. Maxwell (1949–2015)
J.Forbes – James Forbes (1773–1861)
J.Frost – John Frost (1803–1840)
J.Fryer – Jeanette Fryer (fl. 1993)
J.Gay – Jacques Etienne Gay (1786–1864)
J.G.Cooper – James Graham Cooper (1830–1902)
J.Gerard – John Gerard (1545–1612)
J.Gerlach – (born 1970)
J.G.Gmel. – Johann Georg Gmelin (1709–1755)
J.G.Jack – John George Jack (1861–1949)
J.G.Kühn – Julius Gotthelf Kühn (1825–1910)
J.G.Nelson – John Gudgeon Nelson (1818–1882)
J.Gröntved – Julius Gröntved (or Grøntved) (1899–1967)
J.G.Sm. – (1866–1957)
J.Guan Wang – Jia-Guan Wang (fl. 2022)
J.G.West – Judith Gay West (born 1949)
J.H.Adam – Jumaat Haji Adam (born 1956)
J.H.Chau – John H. Chau (fl. 2017)
J.H.Christ – John Henry ("Heinie") Christ (1896–1972)
J.Harriman – John Harriman (1760–1831)
J.H.Kirkbr. – Joseph Harold Kirkbride (born 1943)
J.Hogg – John Hogg (1800–1869)
J.Houz. – Jean Houzeau de Lehaie (1867–1959)
J.H.Ross – James Henderson Ross (born 1941)
J.H.Schaffn. – John Henry Schaffner (1866–1939)
J.H.So – Ji Hyeon So (fl. 2017)
J.H.Thomas – John Hunter Thomas (1928–1999)
J.H.Wallace – John Hume Wallace (born 1918)
J.H.Willis – James Hamlyn Willis (1910–1995)
Jian W.Li – Jian Wu Li (fl. 2014)
Jílek – Bohumil Jílek (1905–1972)
J.Jacobsen – Jens Peter Jacobsen (1847–1885)
J.J.Amann – Jean Jules Amann (1859–1939)
J.J.Atwood – John Jacob Atwood (born 1981)
J.J.Bruhl – Jeremy James Bruhl (born 1956)
J.J.Engel – John Jay Engel (born 1941)
J.Jiménez Alm. – (1905–1982)
J.J.Kickx – Jean Jacques Kickx (1842–1887)
J.J.Rodr – Juan Joaquín Rodríguez y Femenías (1839–1905)
J.J.Scheuchzer – Johann Jacob Scheuchzer (1672–1733)
J.J.Sm. – Johannes Jacobus Smith (1867–1947)
J.Jundz. – (1794–1877)
J.Juss. – Joseph de Jussieu (1704–1779)
J.J.Verm. – Jaap J. Vermeulen (born 1955)
J.J.Wood – Jeffrey James Wood (1952–2019)
J.K.Bartlett – John Kenneth Bartlett (1945–1986)
J.Kern – Johannes Hendrikus Kern (1903–1974)
J.Kern. – Johann Simon von Kerner (1755–1830)
J.Kickx – Jean Kickx, Sr. (1775–1831)
J.Kickx f. – Jean Kickx, Jr. (1803–1864)
J.K.Morton – John Kenneth Morton (1928–2011)
J.Koenig – Johann Gerhard Koenig (1728–1785)
J.Kost. – Joséphine Thérèse Koster (1902–1986)
J.K.Towns. – John Kirk Townsend (1809–1851)
J.Lachm. – (1832–1860)
J.L.Clark – John Littner Clark (born 1969)
J.Lee – James Lee (1715–1795)
J.Léonard – Jean Joseph Gustave Léonard (1920–2013)
J.L.Garrido – José Luis Garrido (fl. 2012)
J.L.Gentry – Johnnie Lee Gentry (born 1939)
J.Lowe – Josiah Lincoln Lowe (1905–1997)
J.L.Palmer – Johann Ludwig Palmer (1784–1836)
J.L.Parm. – Jean Louis Jacques Henri Parmentier (1777–1865)
J.L.Porter – John L. Porter (born 1964)
J.L.Schultz – Joanna L. Schultz (born 1963)
J.Lundb. – Johannes Lundberg (fl. 2001)
J.MacGill. – John MacGillivray (1822–1867)
J.Mackay – James Townsend Mackay (1775–1862)
J.Macrae – James Macrae (died 1830)
J.Martyn – John Martyn (1699–1768)
J.Mathew – Jose Mathew (born 1985)
J.M.Bigelow – John Milton Bigelow (1804–1878)
J.M.Black – John McConnell Black (1855–1951)
J.M.Clarke – John Mason Clarke (1857–1925)
J.M.Coult. – John Merle Coulter (1851–1928)
J.M.C.Rich. – Jean Michel Claude Richard (1787–1868)
J.M.Gillett – John Montague Gillett (1918–2014)
J.M.Hart – J.M. Hart (fl. 1998)
J.M.H.Shaw – Julian Mark Hugh Shaw (born 1955)
J.Milne – Josephine Milne (born 1957)
J.M.Kain – Joanna M. Kain (1930–2017)
J.M.MacDougal – John Mochrie MacDougal (born 1954)
J.M.Macoun – James Melville Macoun (1862–1920)
J.M.Monts. – (born 1955)
J.M.Muñoz – Jesús M. Muñoz (born 1955)
J.Moll – Jan Willem Moll (1851–1933)
J.M.Porter – James Mark Porter (born 1956)
J.M.Powell – Jocelyn Marie Powell (born 1939)
J.M.Schopf – James Morton Schopf (1911–1978)
J.M.Taylor – Joan M. Taylor (born 1929)
J.M.Tucker – John Maurice Tucker (1916–2008)
J.Muir – John Muir (1838–1914)
J.Muñoz – Jesús Muñoz (born 1964)
J.Murata – (born 1952)
J.Murray – John Murray (1841–1914)
J.M.Waller – James Martin Waller (born 1938)
J.M.Ward – Josephine M. Ward (fl. 1997)
J.M.Watson – John Michael Watson (1936–2024)
J.M.Webber – John Milton Webber (1897–1984)
J.M.Wood – John Medley Wood (1827–1915)
Jn.Dalton – John Dalton (1766–1844)
J.Nelson – John Nelson (1826–1867)
J.N.Haage – (1826–1878)
J.N.Zhang – Jin Ning Zhang (fl. 1998)
Jobson – Peter Craig Jobson (born 1965)
Johan-Olsen (also Sopp) – Olav Johan Sopp (1860–1931)
Johans. – Frits Johansen (1882–1957)
Johanss. – (1856–1928)
John Muir – John Muir (1874–1947)
John Parkinson – John Parkinson (1567–1650)
Johnst. – George Johnston (1797–1855)
Johow – Federico Johow (also as Friedrich Richard Adelbert (or Adelbart) Johow) (1859–1933)
Jones – William Jones (1746–1794)
Jongkind – (born 1954)
Jonst. – John Jonston (also as Johannes Jonston or Joannes Jonstonus) (1603–1675)
Jord. – Claude Thomas Alexis Jordan (1814–1897)
Jordanov – Daki Jordanov (1893–1978)
Josekutty – Elayanithottathil Joseph Josekutty (fl. 2016)
Joshi – Amar Chaud Joshi (1908–1971)
Jos.Kern. – Josef Kerner (1829–1906)
Jos.Martin – Joseph Martin (fl. 1788–1826)
J.Ott – Jonathan Ott (born 1949)
Jovet – (1896–1991)
Jovet-Ast – Suzanne Jovet-Ast (1914–2006)
Joy Thomps. – Joy Thompson (1923–2018)
J.Palmer – Joanne Palmer (born 1960)
J.P.Anderson – Jacob Peter Anderson (1874–1953)
J.Parn. – John Adrian Naicker Parnell (born 1954)
J.Pfeiff. – Johan Philip Pfeiffer (1888–1947)
J.-P.Frahm – Jan-Peter Frahm (1945–2014)
J.P.Nelson – Jane P. Nelson (fl. 1980)
J.Poiss. – (1833–1919)
J.P.Pigott – Julian Patrick Pigott (fl. 2001)
J.Prado – Jefferson Prado (born 1964)
J.Prak.Rao – Jonnakuti Prakasa Rao
J.Presl – Jan Svatopluk Presl (1791–1849)
J.P.Yue – (fl. 2004)
J.Raynal – (1933–1979)
J.R.Clarkson – John Richard Clarkson (born 1950)
J.R.Drumm. – James Ramsay Drummond (1851–1921)
J.Rémy – Ezechiel Jules Rémy (1826–1893)
J.R.Edm. – John Richard Edmondson (born 1948)
J.R.Forst. – Johann Reinhold Forster (1729–1798)
J.R.I.Wood – John Richard Ironside Wood (born 1944)
J.R.Lee – John Ramsay Lee (1868–1959)
J.R.Rohrer – Joseph Raphael Rohrer (born 1954)
J.Roth – Johannes Rudolph Roth (1815–1858)
J.Rousseau – (1905–1970)
J.Roux – Jean Roux (1876–1939)
J.R.Perkins – John Russell Perkins (born 1868)
J.R.Rolfe – Jeremy Richard Rolfe (fl. 2013)
J.R.Wheeler – Judith Roderick Wheeler (born 1944)
J.Scheff. – Jozef Scheffer (1903–1949)
J.Scheuchzer – Johannes Gaspar Scheuchzer (1684–1738)
J.Schiller – Josef Schiller (1877–1960)
J.Schneid. – Josef Schneider (died 1885)
J.Schröt. – Joseph Schröter (1837–1894)
J.Schultze-Motel – Jürgen Schultze-Motel (born 1930)
J.Schust. – (1886–1949)
J.S.Ma – Jin Shuang Ma (born 1955)
J.Scott – John Scott (1838–1880)
J.Scriba – Julius Karl Scriba (1848–1905)
J.Steiner – Julius Steiner (1844–1918)
J.Sinclair – James Sinclair (1913–1968)
J.Sm. – John Smith (1798–1888)
J.S.Martin – James Stillman Martin (1914–2000)
J.S.Mill. – (born 1953)
J.S.Muell. – John Miller (also as Johann Sebastian Mueller) (1715–c.1792)
J.Soulié – Jean André Soulié (1858–1905)
J.S.Pringle – James Scott Pringle (1937–2024)
J.Stewart – (1936–2011)
J.St.-Hil. – Jean Henri Jaume Saint-Hilaire (1772–1845)
J.Stirl. – James Stirling (1852–1909)
J.Swamy – Jetti Swamy (fl. 2016)
J.T.Baldwin – John Thomas Baldwin (1910–1974)
J.T.Curtis – John Thomas Curtis (1913–1961)
J.T.Johanss. – (fl. 1988)
J.T.Howell – John Thomas Howell (1903–1994)
J.T.Hunter – John T. Hunter (fl. 1967)
J.T.Palmer – James Terence Palmer (born 1923)
J.T.Pan – (born 1935)
J.T.Pereira – Joan T. Pereira (fl. 1994)
J.T.Quekett – John Thomas Quekett (1815–1861) (brother of Edwin John Quekett)
J.Traill – James Traill (died 1853)
J.T.Wall – J. T. Wall (fl. 1934)
J.T.Waterh. – (1924–1983)
J.T.Williams – John Trevor Williams (1938–2015)
Judd – Walter Stephen Judd (born 1951)
Jum. – Henri Lucien Jumelle (1866–1935)
Jungh. – (Friedrich) Franz Wilhelm Junghuhn (1809–1864)
Junius – Hadrianus Junius (1511–1575)
Juss. – Antoine Laurent de Jussieu (1748–1836)
Juswara – Lina Susanti Juswara (born 1971)
Juz. – (1893–1959)
J.V.Lamour. – Jean Vincent Félix Lamouroux (1779–1825)
J.V.Schneid. – Julio Valentin Schneider (born 1967)
J.V.Stone – Judi V. Stone (born 1946)
J.V.Thomps. – John Vaughan Thompson (1779–1847)
J.Wallis – John Wallis (1714–1793)
J.W.Baker – Jason W. Baker (born 1981)
J.W.Benn. – John Whitchurch Bennett (fl. 1842)
J.W.Cribb – Joan Winifred Cribb (1930–2023)
J.W.Dawson – John Wyndham Dawson (1928–2019)
J.Wen – Jun Wen (born 1963)
J.West – James West (1875–1939)
J.W.Green – John William Green (born 1930)
J.W.Grimes – James Walter Grimes (born 1953)
J.W.Horn – James W. Horn (fl. 2009)
J.White R.N. – John White (1757–1832)
J.W.Ingram – (born 1924)
J.W.Mast. – John William Masters (1792–1873)
J.W.Moore – (1901–1990) (not to be confused with the American politician of the same name)
J.Woods – Joseph Woods Jr. (1776–1864)
J.W.Powell – John Wesley Powell (1834–1902)
J.W.Robbins – James Watson Robbins (1801–1879)
J.W.Sturm – Johann Wilhelm Sturm (1808–1865)
J.W.Thomson – John Walter Thomson (1913–2009)
J.W.Weinm. – Johann Wilhelm Weinmann (1683–1741)
J.W.White – James Walter White (1846–1932)
J.W.Zetterst. – Johan Wilhelm Zetterstedt (1785–1874)
J.W.Zhai – Jun Wen Zhai (born 1985)
J.Zahlbr. – (1782–1851)
J.Z.Weber – Joseph Zvonko Weber (1930–1996)
K–Z
To find entries for K–Z, use the table of contents above.
| List of botanists by author abbreviation (I–J) | Biology | 6,160 |
61,021,184 | https://en.wikipedia.org/wiki/Progesterone/hydroxyprogesterone%20heptanoate/%CE%B1-tocopherol%20palmitate | Progesterone/hydroxyprogesterone heptanoate/α-tocopherol palmitate (P4/OHPH/VE), sold under the brand name Tocogestan, is a combination medication of progesterone (P4), a short-acting progestogen, hydroxyprogesterone heptanoate (OHPH), a long-acting progestogen, and α-tocopherol palmitate, a prodrug of α-tocopherol and form of vitamin E, which was previously used in France to support pregnancy in women but is no longer available. It contained 50 mg P4, 200 mg OHPH, and 250 mg α-tocopherol palmitate in 2 mL oil solution, was provided in the form of 2 mL ampoules, and was administered by intramuscular injection.
See also
List of combined sex-hormonal preparations § Progestogens
References
Abandoned drugs
Combination drugs
Progestogens
Vitamin E | Progesterone/hydroxyprogesterone heptanoate/α-tocopherol palmitate | Chemistry | 206 |
13,859,391 | https://en.wikipedia.org/wiki/Pangamic%20acid | Pangamic acid, also called pangamate, is the name given to a chemical compound discovered by Ernst T. Krebs Sr. His son, Ernst T. Krebs Jr., promoted it as a medicinal compound for use in treatment of a wide range of diseases. They also termed this chemical "vitamin B15", though it is not a true vitamin, has no nutritional value, has no known use in the treatment of any disease, and has been called a "quack remedy". Although a number of compounds labelled "pangamic acid" have been studied or sold (including the 1951 d-gluconodimethylamino acetic acid), no chemical compound, including those claimed by the Krebses to be pangamic acid, has been scientifically verified to have the characteristics that defined the original description of the compound.
The Krebses derived the term "pangamic" to describe this compound which they asserted to be ubiquitous and highly concentrated in seeds (pan meaning "universal" and gamic meaning "seed").
Chemistry
Pangamic acid is the name given to the chemical compound with the empirical formula C10H19NO8 and a molecular weight of 281, which appeared to be an ester derived from d-gluconic acid and dimethylglycine. In 1943, the Krebses applied for a patent for a process for extracting this chemical compound, which they reported had been previously isolated from apricot seeds, and received the patent in 1949. A 1951 paper by the Krebses reported the first isolation of this compound using this patented process, but did not include enough information to confirm that this compound was actually isolated. In 1955, the Krebses received a patent for another synthesizing process for "N-substituted glycine esters of gluconic acid", but the patent contained no supporting data to confirm the process was able to synthesize compounds described by the patent, including pangamic acid.
Subsequent attempts at synthesizing this ester by other researchers found Krebs' purported methods of producing pangamic acid were not reproducible, and research into pangamic acid have focused on compounds of various chemical compositions. A review noted that of all the chemicals described in research about pangamic acid, "[n]ot a single product labeled 'pangamate' or 'B15' has been established in a scientifically verifiable manner to conform to the empiric formula" described by the Krebses. Analysis of a sample of a compound called "pangamic acid" which was provided by a co-worker of the Krebses in the 1950s showed only lactose upon further evaluation by nuclear magnetic resonance spectroscopy. Thus, "pangamic acid" is more a label used to describe one of any number of chemical compounds rather than a particular substance.
Chemical compounds sold as "pangamic acid" for medicinal purposes have also had various chemical compositions, and suppliers of "pangamic acid" have regularly changed the identity of the chemical compounds sold under this label. One anecdote noted that the Food and Drug Administration (FDA) has seized lots of "calcium pangamate" sold by General Nutrition Center (GNC), which agreed to stop selling the compound in those bottles after the FDA filed suit to stop sales. Afterwards, it was noted that GNC was still selling something in the same bottles with the same labels, likely a different compound. Due to ambiguity in situations like this, the FDA considers it "not an identifiable substance".
To summarize, substances that have been claimed to be pangamic acid include:
d-gluconodimethylamino acetic acid (Krebs 1951), never synthesized. An alternative Soviet synthesis of the calcium salt also failed to be reproduced.
Variety of mixtures containing dimethylamine. Result of attempts to synthesize the 1951 compound. Possibly mutagenic.
Diisopropylamine dichloroacetate (Krebs 1955 patent "analogue"), synthesized. It readily hydrolyzes to compounds known to be toxic.
Pharmacologically inert materials, ranging from "synthesis attempts" containing calcium gluconate to pure lactose.
Clinical claims and research
The Krebses' original patent claimed pangamic acid could be used for detoxification as well as treatment of asthma, skin conditions, joint pain, and nerve pain, with none of these claims supported by evidence in the patent application. Early promotion for pangamic acid included use by race horses as well as humans. Although given the name "Vitamin B15" by the Krebses, there is no evidence that it meets the definition of a vitamin as there is no evidence it is a nutrient needed by the body.
Much of the clinical research on pangamic acid took place in the former Soviet Union, though that research often did not describe which of the many compounds called "pangamic acid" was used in the study. This research was also of limited quality due to being overwhelmingly anecdotal in nature (as opposed to controlled experimentation) and ignoring short and long term safety in human use.
Although more recent claims include treatment of a wide variety of conditions including cancer, heart disease, schizophrenia as well as providing improvement in oxygen utilization, there is no significant evidence for any of these claims or that it is safe for human use. One review noted that it meets "the criteria that define a quack remedy".
Safety
Positive results from mutagenicity analysis via the Ames test of compounds commonly found in preparations labelled "pangamic acid" including diisopropylamine dichloroacetate, diisopropylamine, dichloroacetate, as well as dimethylglycine mixed with sodium nitrite suggests there may be concern for the development of cancer with the use of these substances.
Legal status
The FDA has recommended seizing any chemicals advertised as pangamic acid and restraining the importation and interstate shipment of pangamic acid on the grounds that pangamic acid and pangamic acid products are unsafe for use and have no known nutritional properties. Pangamic acid's distribution in Canada has been prohibited by the then-named Canadian Food and Drug Directorate.
See also
List of unproven and disproven cancer treatments
References
Alternative cancer treatments
Alternative medicine
Amines
Health fraud
Pseudoscience
Sugar acids | Pangamic acid | Chemistry | 1,319 |
75,272,374 | https://en.wikipedia.org/wiki/NGC%20871 | NGC 871 is a barred spiral galaxy in the Aries constellation. Its discovery and first description was realized by William Herschel on October 14, 1784 and the findings made public through his Catalogue of Nebulae and Clusters of Stars in 1786.
Using the galaxies' radial velocities and distances as grouping factors, astronomers assign this galaxy to LGG (Lyon Groups of Galaxies) 53, along with eight other members, including UGC 1693, UGC 1761, NGC 876, NGC 877, UGC 1817, IC 1791 and UGC 1773.
Galaxy group NGC 871/6/7
At the current epoch, most galaxies can be found in medium-density group environments, where tidal interactions play an important role in galactic evolution. Several nearby, gas-rich groups exhibit clear signs of these interactions, giving scientists the opportunity to study how galaxies form and interact with each other.
In 2012 astronomers conducted an extensive survey to measure the H I emission from NGC 871 and other galaxies in LGG 53, using the Giant Metrewave Radio Telescope and the Canada–France–Hawaii Telescope. This galaxy group first attracted attention due to its gas-rich interactions, as well as for harboring AGC (Arecibo General Catalog) 749170, a galaxy with a notably large gas mass. The group resides in a common H I distribution with a total H I mass of about 6 × 10 M☉. Such a massive structure is very rare in the local Universe (galaxies this H I-massive represent less than 2 per cent of cases), and each large spiral in NGC 871/6/7 seems to exceed this value.
The study suggests seven of the eight gas-rich detections (three spirals and four dwarfs) contain stellar components and appear to be standard dark-matter-dominated galaxies that were built during the epoch of galaxy assembly. AGC 749170, however, is probably the result of major mergers and very active tidal interaction, resulting in the massive structure we can observe today.
See also
New General Catalogue
List of NGC objects
References
External links
NASA/IPAC Extragalactic Database - Extensive database of NGC objects.
Aries (constellation)
Barred spiral galaxies
0871
J02171073+1432521
8722
Astronomical objects discovered in 1784
Discoveries by William Herschel | NGC 871 | Astronomy | 468 |
30,682,536 | https://en.wikipedia.org/wiki/Zeeman%20slower | In atomic physics, a Zeeman slower is a scientific instrument that is commonly used in atomic physics to slow and cool a beam of hot atoms to speeds of several meters per second and temperatures below a kelvin. The gas-phase atoms used in atomic physics are often generated in an oven by heating a solid or liquid atomic sample to temperatures where the vapor pressure is high enough that a substantial number of atoms are in the gas phase. These atoms effuse out of a hole in the oven with average speeds on the order of hundreds of m/s and large velocity distributions (due to their high temperature). The Zeeman slower is attached close to where the hot atoms exit the oven and are used to slow them to less than 10 m/s (slowing) with a very small velocity spread (cooling).
A Zeeman slower consists of a cylinder, through which an atomic beam travels, a pump laser that counterpropagates with respect to the beam's direction, and a magnetic field (commonly produced by a solenoid-like coil) that points along the cylinder's axis with a spatially varying magnitude. The pump laser, which is required to be near-resonant with an atomic transition, Doppler-slows a certain velocity class within the velocity distribution of the beam. The spatially varying magnetic field is designed to Zeeman-shift the resonant frequency to match the decreasing Doppler shift as the atoms are slowed to lower velocities while they propagate through the Zeeman slower, allowing the pump laser to be continuously resonant and provide a slowing force.
History
The Zeeman slower was first developed by Harold J. Metcalf and William D. Phillips (who was awarded 1/3 of the 1997 Nobel Prize in Physics in part for his work on the Zeeman slower). The achievement of these low temperatures led the way for the experimental realization of Bose–Einstein condensation, and a Zeeman slower can be part of such an apparatus.
Principle
According to the principles of Doppler cooling, an atom modelled as a two-level atom can be cooled using a laser. If it moves in a specific direction and encounters a counter-propagating laser beam resonant with its transition, it is very likely to absorb a photon. The absorption of this photon gives the atom a "kick" in the direction that is consistent with momentum conservation and brings the atom to its excited state. However, this state is unstable, and some time later the atom decays back to its ground state via spontaneous emission (after a time on the order of nanoseconds; for example, in rubidium-87, the excited state of the D2 transition has a lifetime of 26.2 ns). The photon will be reemitted (and the atom will again increase its speed), but its direction will be random. When averaging over a large number of these processes applied to one atom, one sees that the absorption process decreases the speed always in the same direction (as the absorbed photon comes from a monodirectional source), whereas the emission process does not lead to any change in the speed of the atom because the emission direction is random. Thus the atom is being effectively slowed down by the laser beam.
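The size of the effect per photon is tiny, which is why this averaging over many absorption–emission cycles matters. A minimal Python sketch of the numbers involved, assuming ⁸⁷Rb D2-line values and an illustrative 350 m/s initial beam speed (neither figure is specified in this section):

# Per-photon momentum kick and number of scattering events needed to stop
# an atom; 87Rb D2-line values are assumed for illustration only.
h = 6.626e-34        # J*s, Planck constant
m = 1.443e-25        # kg, mass of 87Rb
wavelength = 780e-9  # m, D2-line wavelength

v_recoil = h / (m * wavelength)  # recoil velocity per absorbed photon, m/s
v_beam = 350.0                   # m/s, assumed initial beam velocity
n_scatter = v_beam / v_recoil    # photons needed to bring the atom to rest

print(f"recoil velocity: {1e3 * v_recoil:.1f} mm/s")  # ~5.9 mm/s
print(f"scattering events to stop: {n_scatter:.0f}")  # ~59000

With these assumptions, stopping a single atom takes several tens of thousands of scattering events, so the random recoil directions average out almost completely along the beam axis.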
There is nevertheless a problem in this basic scheme because of the Doppler effect. The resonance of the atom is rather narrow (on the order of a few megahertz), and after having decreased its momentum by a few recoil momenta, it is no longer in resonance with the pump beam because in its frame, the frequency of the laser has shifted. The Zeeman slower uses the fact that a magnetic field can change the resonance frequency of an atom using the Zeeman effect to tackle this problem.
The average acceleration (due to many photon absorption events over time) of an atom with mass $m$, a cycling transition with frequency $\omega_0$ and linewidth $\Gamma$, that is in the presence of a laser beam that has wavenumber $k$ and intensity $I = s I_s$ (where $I_s$ is the saturation intensity of the laser) is

$$\vec{a} = \frac{\hbar \vec{k}}{m}\,\frac{\Gamma}{2}\,\frac{s}{1 + s + \left(\frac{2\delta}{\Gamma}\right)^{2}}.$$
In the rest frame of the atoms with velocity $v$ in the atomic beam, the frequency of the laser beam is shifted by $kv$. In the presence of a magnetic field $B$, the atomic transition is Zeeman-shifted by an amount $\mu' B/\hbar$ (where $\mu'$ is the magnetic moment of the transition). Thus, the effective detuning of the laser from the zero-field resonant frequency of the atoms is

$$\delta = \delta_0 + k v - \frac{\mu' B}{\hbar},$$

where $\delta_0$ is the laser's detuning at zero field and zero velocity.

The atoms for which $\delta = 0$ will experience the largest acceleration, namely

$$a = \eta\, a_{\max},$$

where $\eta = s/(1+s)$, and $a_{\max} = \hbar k \Gamma/(2m)$.
The most common approach is to require that we have a magnetic field profile that varies in the $z$ direction such that the atoms experience a constant acceleration as they fly along the axis of the slower. It has been recently shown, however, that a different approach yields better results.

In the constant-deceleration approach we get

$$v(z) = v_{\max}\sqrt{1 - \frac{z}{z_0}}, \qquad z_0 = \frac{m\, v_{\max}^{2}}{\eta\, \hbar k \Gamma},$$

where $v_{\max}$ is the maximal velocity class that will be slowed and $z_0$ is the length of the slower; all the atoms in the velocity distribution that have velocities $v \le v_{\max}$ will be slowed, and those with velocities $v > v_{\max}$ will not be slowed at all. The parameter $\eta$ (which determines the required laser intensity) is normally chosen to be around 0.5. If a Zeeman slower were to be operated with $s \gg 1$ (so that $\eta \to 1$), then after absorbing a photon and moving to the excited state, the atom would preferentially re-emit a photon in the direction of the laser beam (due to stimulated emission), which would counteract the slowing process.
Realization
The required form of the spatially inhomogeneous magnetic field, as we showed above, has the form

$$B(z) = \frac{\hbar}{\mu'}\left(\delta_0 + k\, v_{\max} \sqrt{1 - \frac{z}{z_0}}\right) = B_{\text{bias}} + B_0 \sqrt{1 - \frac{z}{z_0}},$$

with $B_0 = \hbar k v_{\max}/\mu'$ and $B_{\text{bias}} = \hbar \delta_0/\mu'$.
This field can be realized a few different ways. The most popular design requires wrapping a current-carrying wire with many layered windings where the field is strongest (around 20–50 windings) and few windings where the field is weak. Alternative designs include a single-layer coil that varies in the pitch of the winding and an array of permanent magnets in various configurations.
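For a sense of scale, the following Python sketch evaluates the ideal profile above for one concrete set of numbers; the ⁸⁷Rb parameters, the choice η = 0.5, v_max = 350 m/s, and the detuning δ₀ are illustrative assumptions, not a recommended design:

import math

# Ideal Zeeman-slower field B(z) from the formula above. All values below
# (87Rb D2 line, eta, v_max, delta0) are illustrative assumptions.
hbar = 1.0546e-34              # J*s
m = 1.443e-25                  # kg, mass of 87Rb
k = 2 * math.pi / 780.24e-9    # 1/m, laser wavenumber
Gamma = 2 * math.pi * 6.07e6   # rad/s, natural linewidth
mu = 9.274e-24                 # J/T, transition moment (~1 Bohr magneton)

eta = 0.5                      # deceleration fraction
v_max = 350.0                  # m/s, fastest velocity class to be slowed
delta0 = -2 * math.pi * 200e6  # rad/s, assumed laser detuning

a = eta * hbar * k * Gamma / (2 * m)  # constant deceleration, m/s^2
z0 = v_max**2 / (2 * a)               # slower length, m

def B(z):
    """Field that keeps a constantly decelerating atom resonant at z."""
    v = v_max * math.sqrt(1 - z / z0)
    return (hbar / mu) * (delta0 + k * v)

print(f"z0 = {z0:.2f} m")
for z in (0.0, 0.5 * z0, 0.99 * z0):
    print(f"B({z:.2f} m) = {1e4 * B(z):+.0f} G")

With these numbers the slower comes out roughly a metre long, with fields of a few hundred gauss that cross through zero near the exit, which is the right order of magnitude for real devices.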
Outgoing atoms
The Zeeman slower is usually used as a preliminary step to cool the atoms in order to trap them in a magneto-optical trap. Thus it aims at a final velocity of about 10 m/s (depending on the atom used), starting with a beam of atoms with a velocity of a few hundred meters per second. The final speed to be reached is a compromise between the technical difficulty of having a long Zeeman slower and the maximal speed allowed for an efficient loading into the trap.
A limitation of this setup can be the transverse heating of the beam. It is linked to the fluctuations of the speed along the three axes around their mean values, since the final speed was said to be an average over a large number of processes. These fluctuations are linked to the atom undergoing a Brownian-like motion due to the random reemission of the absorbed photons. They may cause difficulties when loading the atoms into the next trap.
References
Atomic physics
Cooling technology
Scientific instruments | Zeeman slower | Physics,Chemistry,Technology,Engineering | 1,394 |
44,844,408 | https://en.wikipedia.org/wiki/Deal%20%28unit%29 | Deal is an obsolete unit of measurement formerly used in the UK and US to measure wood. In the late 18th and early 19th centuries, a deal originally referred to a wooden board between 12 and 14 feet long that was traded as a maritime commodity.
Definition
Deal (UK) is equal to 7 ft × 6 ft × 2½ in.
Deal (US) is equal to 12 ft × 11 in × 1½ in.
Whole deal is equal to 12 ft × 11 in × ⅝ in.
Split deal is equal to 12 ft × 8 ft × 16 in.
Conversion
1 Deal (UK) ≡ 8.75 cubic feet ≡ 105 board feet ≡ 0.24777240768 m3
1 Deal (US) ≡ 1.375 cubic feet ≡ 16.5 board feet ≡ 0.0389356640640 m3
1 Whole deal ≡ 0.573 (or 55/96) cubic foot ≡ 6.875 (or 55/8) board feet ≡ 0.01622319336 m3
1 Split deal ≡ 128 cubic feet ≡ 1536 board feet ≡ 3.624556363776 m3
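These conversions can be cross-checked mechanically. A minimal Python sketch, using the dimensions given above (the fractional thicknesses are reconstructed from the stated volumes; 1 board foot = 1/12 cubic foot):

# Recompute each deal's volume in cubic feet, board feet, and cubic metres.
CUBIC_M_PER_CUBIC_FT = 0.3048 ** 3   # exact, from the definition of the foot

deals = {                            # (length ft, width ft, thickness ft)
    "Deal (UK)":  (7.0, 6.0, 2.5 / 12),
    "Deal (US)":  (12.0, 11 / 12, 1.5 / 12),
    "Whole deal": (12.0, 11 / 12, (5 / 8) / 12),
    "Split deal": (12.0, 8.0, 16 / 12),
}

for name, (length, width, thickness) in deals.items():
    ft3 = length * width * thickness
    print(f"{name}: {ft3:g} ft^3 = {12 * ft3:g} board ft "
          f"= {ft3 * CUBIC_M_PER_CUBIC_FT:.6f} m^3")

Running this reproduces the table above (8.75 ft³ and 105 board feet for the UK deal, 1.375 ft³ and 16.5 board feet for the US deal, and so on).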
See also
List of obsolete units of measurement
References
Units of volume
Customary units of measurement | Deal (unit) | Mathematics | 244 |
20,363,345 | https://en.wikipedia.org/wiki/Log%20jam | A log jam is a naturally occurring phenomenon characterized by a dense accumulation of tree trunks and pieces of large wood across a vast section of a river, stream, or lake. ("Large wood" is commonly defined to be pieces of wood more than in diameter and more than long.) Log jams in rivers and streams often span the entirety of the water's surface from bank to bank. Log jams form when trees floating in the water become entangled with other trees floating in the water or become snagged on rocks, large woody debris, or other objects anchored underwater. They can build up slowly over months or years, or they can happen instantaneously when large numbers of trees are swept into the water after natural disasters. A notable example caused by a natural disaster is the log jam that occurred in Spirit Lake following a landslide triggered by the eruption of Mount St. Helens. Unless they are dismantled by natural causes or humans, log jams can grow quickly, as more wood arriving from upstream becomes entangled in the mass. Log jams can persist for many decades, as is the case with the log jam in Spirit Lake.
Historically in North America, large natural "log rafts" were common across the continent prior to European settlement. The most famous natural wood raft is the Great Raft on the Red River in Louisiana, which prior to its removal in the 1830s affected an extensive reach of the main channel. It has been suggested that such extensive log rafts may have been common in Europe in prehistory. Currently, the largest known log jam, weighing over 3 million tonnes, is in the Mackenzie River in Canada's Northwest Territories. It contains more than 400,000 caches of wood and stores 3.4 million tons of carbon, equivalent to a year's emissions from 2.5 million cars.
Log jams are not to be confused with man-made timber rafts created by loggers or the intentional release of large masses of trees into the water during a log drive to a sawmill.
Effects on river geomorphology
Log jams alter flow hydraulics by diverting flow towards the bed or banks, increasing flow resistance and creating upstream pools, diverting flow onto the floodplain and damming the channel, causing water to spill over the structure. These altered channel hydraulics change local patterns of erosion and deposition, which can create greater variety in local geomorphology and thus create provision and variety of habitat for instream living organisms. The formation of a log jam against one bank typically concentrates flow in the wood-free portion of the channel, increasing velocity through this section and promoting scour of the riverbed. The formation of channel-spanning log jams can lead to the formation of an upstream pool, water spilling over the structure generating a "plunge pool" immediately downstream.
The hydraulic and geomorphological effects of log jams are highly dependent on the slope of the river (and thus the potential power of the stream); in steep channels, log jams tend to form channel-spanning steplike structures with an associated downstream scour pool, whereas in large lowland rivers with low slopes, log jams tend to be partial structures primarily acting to deflect flow with minimal geomorphological change.
Effects on ecology
Log jams provide important fish habitat. The pools created and sediment deposited by formation of log jams create prime spawning grounds for many species of salmon. These pools also provide refuge for fish during low water levels when other parts of a stream may be nearly dry. Log jams can provide refuge, as velocity shelters, during high-flow periods.
It has been suggested that log jams are an aspect of trees acting as ecosystem engineers to alter river habitats to promote tree growth. In dynamic braided rivers, such as the Tagliamento River in Italy, where the dominant tree species is the black poplar, fallen trees form log jams when they are deposited on bars; fine sediment is deposited around these log jams, and sprouting seedlings are able to stabilise braid bars and promote the formation of stable islands in the river. These stable islands are then prime areas for establishment of seedlings and further vegetation growth, which in turn can eventually provide more fallen trees to the river and thus form more log jams.
In large rivers in the Pacific Northwest of the United States, it has been shown there is a lifecycle of tree growth and river migration, with large trees falling into the channel as banks erode, then staying in place and acting as focal points for log jam formation. These log jams act as hard points, resisting further erosion and channel migration. The areas of floodplain behind these log jams then become stable enough for more large trees to grow, which can in turn become potential log jam anchor points in the future.
Metaphorical usage
"Logjam" or "log jam" can be used metaphorically to mean "deadlock" or "impasse." It can be used either more literally, to mean a physical impasse, or more metaphorically, to mean an impasse in a process due to differing opinions, legal or technical issues, etc. Here are two example sentences:
"The presence of an ambulance on the side of the highway created a logjam of rubberneckers who just had to have a look." (more literal).
"He was called in to try to break the logjam in the negotiations." (more metaphorical).
See also
Beaver dam, a wooden dam created by beavers
Great Raft
River morphology
Stream restoration
1886 St. Croix River log jam
References
Aquatic ecology
Geomorphology
Rivers | Log jam | Biology | 1,142 |
22,778,065 | https://en.wikipedia.org/wiki/Inductive%20type | In type theory, a system has inductive types if it has facilities for creating a new type from constants and functions that create terms of that type. The feature serves a role similar to data structures in a programming language and allows a type theory to add concepts like numbers, relations, and trees. As the name suggests, inductive types can be self-referential, but usually only in a way that permits structural recursion.
The standard example is encoding the natural numbers using Peano's encoding. It can be defined in Coq as follows:
Inductive nat : Type :=
| O : nat
| S : nat -> nat.
Here, a natural number is created either from the constant "O" or by applying the function "S" to another natural number. ("O" is used rather than the digit "0" because Coq constructor names must be identifiers.) "S" is the successor function, which represents adding 1 to a number. Thus, "O" is zero, "S O" is one, "S (S O)" is two, "S (S (S O))" is three, and so on.
Since their introduction, inductive types have been extended to encode more and more structures, while still being predicative and supporting structural recursion.
Elimination
Inductive types usually come with a function to prove properties about them. Thus, "nat" may come with (in Coq syntax):
nat_elim : (forall P : nat -> Prop, (P O) -> (forall n, P n -> P (S n)) -> (forall n, P n)).
In words: for any predicate "P" over natural numbers, given a proof of "P O" and a proof of "P n -> P (S n)", we get back a proof of "forall n, P n". This is the familiar induction principle for natural numbers.
Implementations
W- and M-types
W-types are well-founded types in intuitionistic type theory (ITT). They generalize natural numbers, lists, binary trees, and other "tree-shaped" data types. Let $\mathcal{U}$ be a universe of types. Given a type $A : \mathcal{U}$ and a dependent family $B : A \to \mathcal{U}$, one can form a W-type $\mathsf{W}_{a:A}\,B(a)$. The type $A$ may be thought of as "labels" for the (potentially infinitely many) constructors of the inductive type being defined, whereas $B$ indicates the (potentially infinite) arity of each constructor. W-types (resp. M-types) may also be understood as well-founded (resp. non-well-founded) trees with nodes labeled by elements $a : A$ and where the node labeled by $a$ has $B(a)$-many subtrees. Each W-type is isomorphic to the initial algebra of a so-called polynomial functor.
Let $\mathbf{0}$, $\mathbf{1}$, $\mathbf{2}$, etc. be finite types with inhabitants $1_1 : \mathbf{1}$, and $1_2, 2_2 : \mathbf{2}$, etc. One may define the natural numbers as the W-type

$$\mathbb{N} :\equiv \mathsf{W}_{x:\mathbf{2}}\,f(x)$$

with $f : \mathbf{2} \to \mathcal{U}$ defined by $f(1_2) = \mathbf{0}$ (representing the constructor for zero, which takes no arguments), and $f(2_2) = \mathbf{1}$ (representing the successor function, which takes one argument).
One may define lists over a type $A : \mathcal{U}$ as $\mathrm{List}(A) :\equiv \mathsf{W}_{x:\mathbf{1}+A}\,g(x)$, where

$$g(\mathrm{inl}(1_1)) = \mathbf{0}, \qquad g(\mathrm{inr}(a)) = \mathbf{1},$$

and $1_1$ is the sole inhabitant of $\mathbf{1}$. The value of $\mathrm{inl}(1_1)$ corresponds to the constructor for the empty list, whereas the value of $\mathrm{inr}(a)$ corresponds to the constructor that appends $a$ to the beginning of another list.
The constructor for elements of a generic W-type has type

$$\mathsf{sup} : \prod_{a:A} \left(B(a) \to \mathsf{W}_{x:A}B(x)\right) \to \mathsf{W}_{x:A}B(x).$$

We can also write this rule in the style of a natural deduction proof,

$$\frac{a : A \qquad f : B(a) \to \mathsf{W}_{x:A}B(x)}{\mathsf{sup}(a, f) : \mathsf{W}_{x:A}B(x)}.$$
The elimination rule for W-types works similarly to structural induction on trees. If, whenever a property (under the propositions-as-types interpretation) holds for all subtrees of a given tree it also holds for that tree, then it holds for all trees.
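Written out in the notation of this section, the induction principle is the following eliminator (the name $\mathsf{wrec}$ and the precise layout are one common convention, assumed here rather than fixed by the text above):

$$\mathsf{wrec} : \prod_{C : \mathsf{W}_{x:A}B(x) \to \mathcal{U}} \left(\prod_{a:A}\ \prod_{f : B(a) \to \mathsf{W}_{x:A}B(x)} \left(\prod_{b:B(a)} C(f(b))\right) \to C(\mathsf{sup}(a,f))\right) \to \prod_{w : \mathsf{W}_{x:A}B(x)} C(w)$$

Instantiating $A = \mathbf{2}$ and $B = f$ as in the natural-numbers example above recovers the usual induction principle on $\mathbb{N}$.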
In extensional type theories, W-types (resp. M-types) can be defined up to isomorphism as initial algebras (resp. final coalgebras) for polynomial functors. In this case, the property of initiality (res. finality) corresponds directly to the appropriate induction principle. In intensional type theories with the univalence axiom, this correspondence holds up to homotopy (propositional equality).
M-types are dual to W-types, they represent coinductive (potentially infinite) data such as streams. M-types can be derived from W-types.
Mutually inductive definitions
This technique allows some definitions of multiple types that depend on each other. For example, defining two parity predicates on natural numbers using two mutually inductive types in Coq:
Inductive even : nat -> Prop :=
| zero_is_even : even O
| S_of_odd_is_even : (forall n:nat, odd n -> even (S n))
with odd : nat -> Prop :=
| S_of_even_is_odd : (forall n:nat, even n -> odd (S n)).
Induction-recursion
Induction-recursion started as a study into the limits of ITT. Once found, the limits were turned into rules that allowed defining new inductive types. These types could depend upon a function and the function on the type, as long as both were defined simultaneously.
Universe types can be defined using induction-recursion.
Induction-induction
Induction-induction allows definition of a type and a family of types at the same time. So, one simultaneously defines a type $A$ and a family of types $B : A \to \mathcal{U}$, where $B$ may depend on $A$.
Higher inductive types
This is a current research area in Homotopy Type Theory (HoTT). HoTT differs from ITT by its identity type (equality). Higher inductive types not only define a new type with constants and functions that create elements of the type, but also new instances of the identity type that relate them.
A simple example is the circle type $S^1$, which is defined with two constructors, a basepoint

$$\mathsf{base} : S^1$$

and a loop

$$\mathsf{loop} : \mathsf{base} =_{S^1} \mathsf{base}.$$

The existence of a new constructor for the identity type makes $S^1$ a higher inductive type.
See also
Coinduction permits (effectively) infinite structures in type theory.
References
External links
Induction-Recursion Slides
Induction-Induction Slides
Higher Inductive Types: a tour of the menagerie
Type theory | Inductive type | Mathematics | 1,315 |
38,249,017 | https://en.wikipedia.org/wiki/Rider%27s%20British%20Merlin | Rider's British Merlin was one of the earliest almanacs to be published, issued from 1656 until at least 1830.
Content
The almanac contained the calendar, weather, and astronomical and astrological information that a typical almanac of the period would contain. The pages for each month of the year were accompanied by advice on what, and what not to eat and drink, and otherwise how to keep in good health. There were horticultural notes with abundant attention paid to herbs, fruit and vegetables.
The lengthiest sections of this little book listed annual fairs in England and Wales of fixed and moveable date. The first would generally be associated with a saint's day, while the second would be of the type "second Monday in October". This list of town names and dates represented important information in the days before Agricultural Advisers, Trade Fairs and Job Offices, when the fairs played an important role not only in buying and selling, but also in exhibiting innovations in husbandry, in information exchange and in the hiring of labour.
Who Was "Rider"?
It is generally held that Cardanus Rider is a pseudonym and near-anagram: its letters rearrange to "Ric_ard Saunder_", that is, "Richard Saunders" missing an h and a final s. Richard Saunders was an English physician and astrologer, born in 1613, who died (sources differ) either in 1675, 1687, or 1692.
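The near-anagram is easy to verify mechanically. A minimal Python sketch (purely illustrative; it compares letter multisets, ignoring case and spaces):

from collections import Counter

pen_name = Counter("cardanus rider".replace(" ", ""))
real_name = Counter("richard saunders".replace(" ", ""))

print(pen_name - real_name)  # Counter() -- every letter of the pseudonym is used
print(real_name - pen_name)  # Counter({'h': 1, 's': 1}) -- the two dropped letters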
The National Archives in London hold a book by Saunders on palmistry, with horoscopes; also attributed to him is The Astrological Judgment and Practice of Physick, published in 1677, although the fact that it includes charts from as early as 1616 to 1618 has led to doubts about the actual authorship. Be that as it may, its subject matter was dear to the heart of "Cardanus Rider"; it stands as one of the earliest astro-medical treatises in the English language. Using the terminology of his day, the writer speaks of humours and winds, of conditions hot, cold or dry, of the cholerick and melancholy, of illnesses produced by the planets in the various signs of the zodiac, when to administer medicines based on planetary hours, and much more.
References
External links
Rider's British Merlin - Special collections - University of Glasgow
18th-century books
Almanacs
Agriculture books
Astronomy books
Astrological texts | Rider's British Merlin | Astronomy | 472 |
18,298,594 | https://en.wikipedia.org/wiki/Crossing%20number%20%28graph%20theory%29 | In graph theory, the crossing number of a graph is the lowest number of edge crossings of a plane drawing of the graph . For instance, a graph is planar if and only if its crossing number is zero. Determining the crossing number continues to be of great importance in graph drawing, as user studies have shown that drawing graphs with few crossings makes it easier for people to understand the drawing.
The study of crossing numbers originated in Turán's brick factory problem, in which Pál Turán asked for a factory plan that minimized the number of crossings between tracks connecting brick kilns to storage sites. Mathematically, this problem can be formalized as asking for the crossing number of a complete bipartite graph. The same problem arose independently in sociology at approximately the same time, in connection with the construction of sociograms. Turán's conjectured formula for the crossing numbers of complete bipartite graphs remains unproven, as does an analogous formula for the complete graphs.
The crossing number inequality states that, for graphs where the number $e$ of edges is sufficiently larger than the number $n$ of vertices, the crossing number is at least proportional to $e^3/n^2$. It has applications in VLSI design and incidence geometry.
Without further qualification, the crossing number allows drawings in which the edges may be represented by arbitrary curves. A variation of this concept, the rectilinear crossing number, requires all edges to be straight line segments, and may differ from the crossing number. In particular, the rectilinear crossing number of a complete graph is essentially the same as the minimum number of convex quadrilaterals determined by a set of points in general position. The problem of determining this number is closely related to the happy ending problem.
Definitions
For the purposes of defining the crossing number, a drawing of an undirected graph is a mapping from the vertices of the graph to disjoint points in the plane, and from the edges of the graph to curves connecting their two endpoints. No vertex should be mapped onto an edge that it is not an endpoint of, and whenever two edges have curves that intersect (other than at a shared endpoint) their intersections should form a finite set of proper crossings, where the two curves are transverse. A crossing is counted separately for each of these crossing points, for each pair of edges that cross. The crossing number of a graph is then the minimum, over all such drawings, of the number of crossings in a drawing.
Some authors add more constraints to the definition of a drawing, for instance that each pair of edges have at most one intersection point (a shared endpoint or crossing). For the crossing number as defined above, these constraints make no difference, because a crossing-minimal drawing cannot have edges with multiple intersection points. If two edges with a shared endpoint cross, the drawing can be changed locally at the crossing point, leaving the rest of the drawing unchanged, to produce a different drawing with one fewer crossing. And similarly, if two edges cross two or more times, the drawing can be changed locally at two crossing points to make a different drawing with two fewer crossings. However, these constraints are relevant for variant definitions of the crossing number that, for instance, count only the numbers of pairs of edges that cross rather than the number of crossings.
Special cases
As of April 2015, crossing numbers are known for very few graph families. In particular, except for a few initial cases, the crossing number of complete graphs, bipartite complete graphs, and products of cycles all remain unknown, although there has been some progress on lower bounds.
Complete bipartite graphs
During World War II, Hungarian mathematician Pál Turán was forced to work in a brick factory, pushing wagon loads of bricks from kilns to storage sites. The factory had tracks from each kiln to each storage site, and the wagons were harder to push at the points where tracks crossed each other, from which Turán was led to ask his brick factory problem: how can the kilns, storage sites, and tracks be arranged to minimize the total number of crossings? Mathematically, the kilns and storage sites can be formalized as the vertices of a complete bipartite graph, with the tracks as its edges. A factory layout can be represented as a drawing of this graph, so the problem becomes:
what is the minimum possible number of crossings in a drawing of a complete bipartite graph?
Kazimierz Zarankiewicz attempted to solve Turán's brick factory problem; his proof contained an error, but he established a valid upper bound of

$$\operatorname{cr}(K_{m,n}) \le \left\lfloor\frac{m}{2}\right\rfloor \left\lfloor\frac{m-1}{2}\right\rfloor \left\lfloor\frac{n}{2}\right\rfloor \left\lfloor\frac{n-1}{2}\right\rfloor$$

for the crossing number of the complete bipartite graph $K_{m,n}$. This bound has been conjectured to be the optimal number of crossings for all complete bipartite graphs.
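The bound is straightforward to evaluate; a small Python sketch (the function name is illustrative), checked against the $K_{3,3}$ brick-factory case:

# Zarankiewicz's bound for the crossing number of K_{m,n}.
def zarankiewicz(m: int, n: int) -> int:
    return (m // 2) * ((m - 1) // 2) * (n // 2) * ((n - 1) // 2)

print(zarankiewicz(3, 3))  # 1: a single crossing suffices for K_{3,3}
print(zarankiewicz(4, 5))  # 8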
Complete graphs and graph coloring
The problem of determining the crossing number of the complete graph was first posed by Anthony Hill, and appeared in print in 1960. Hill and his collaborator John Ernest were two constructionist artists fascinated by mathematics. They not only formulated this problem but also originated a conjectural formula for this crossing number, which Richard K. Guy published in 1960. Namely, it is known that there always exists a drawing with

$$\frac{1}{4} \left\lfloor\frac{n}{2}\right\rfloor \left\lfloor\frac{n-1}{2}\right\rfloor \left\lfloor\frac{n-2}{2}\right\rfloor \left\lfloor\frac{n-3}{2}\right\rfloor$$

crossings. This formula gives values of 1, 3, 9, 18, 36, 60, 100, 150, 225, 315 for $n = 5, \dots, 14$; see sequence A000241 in the On-line Encyclopedia of Integer Sequences.
The conjecture is that there can be no better drawing, so that this formula gives the optimal number of crossings for the complete graphs. An independent formulation of the same conjecture was made by Thomas L. Saaty in 1964.
Saaty further verified that this formula gives the optimal number of crossings for $n \le 10$, and Pan and Richter showed that it also is optimal for $n = 11, 12$.
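The conjectured formula is likewise a one-liner; in the Python sketch below the final integer division is exact, since the product of the four floors is always divisible by 4 (the function name is illustrative):

# Guy's conjectured crossing number of the complete graph K_n.
def guy(n: int) -> int:
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

print([guy(n) for n in range(5, 15)])
# [1, 3, 9, 18, 36, 60, 100, 150, 225, 315]  (OEIS A000241)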
The Albertson conjecture, formulated by Michael O. Albertson in 2007, states that, among all graphs with chromatic number $n$, the complete graph $K_n$ has the minimum number of crossings. That is, if the conjectured formula for the crossing number of the complete graph is correct, then every $n$-chromatic graph has crossing number at least equal to the same formula. The Albertson conjecture is now known to hold for $n \le 16$.
Cubic graphs
The smallest cubic graphs with crossing numbers 1–11 are known. The smallest 1-crossing cubic graph is the complete bipartite graph $K_{3,3}$, with 6 vertices. The smallest 2-crossing cubic graph is the Petersen graph, with 10 vertices. The smallest 3-crossing cubic graph is the Heawood graph, with 14 vertices. The smallest 4-crossing cubic graph is the Möbius–Kantor graph, with 16 vertices. The smallest 5-crossing cubic graph is the Pappus graph, with 18 vertices. The smallest 6-crossing cubic graph is the Desargues graph, with 20 vertices. None of the four 7-crossing cubic graphs, with 22 vertices, are well known. The smallest 8-crossing cubic graphs include the Nauru graph and the McGee graph or (3,7)-cage graph, with 24 vertices. The smallest 11-crossing cubic graphs include the Coxeter graph with 28 vertices.
In 2009, Pegg and Exoo conjectured that the smallest cubic graph with crossing number 13 is the Tutte–Coxeter graph and the smallest cubic graph with crossing number 170 is the Tutte 12-cage.
Connections to the bisection width
The 2/3-bisection width $b(G)$ of a simple graph $G$ with $n$ vertices is the minimum number of edges whose removal results in a partition of the vertex set into two separated sets, so that no set has more than $2n/3$ vertices. Computing $b(G)$ is NP-hard. Leighton proved that $b(G)^2 = O(\operatorname{cr}(G) + n)$, provided that $G$ has bounded vertex degrees. This fundamental inequality can be used to derive an asymptotic lower bound for $\operatorname{cr}(G)$ when $b(G)$, or an estimate of it, is known. In addition, this inequality has algorithmic application. Specifically, Bhat and Leighton used it (for the first time) for deriving an upper bound on the number of edge crossings in a drawing which is obtained by a divide-and-conquer approximation algorithm for computing $b(G)$.
Complexity and approximation
In general, determining the crossing number of a graph is hard; Garey and Johnson showed in 1983 that it is an NP-hard problem. In fact the problem remains NP-hard even when restricted to cubic graphs and to near-planar graphs (graphs that become planar after removal of a single edge). A closely related problem, determining the rectilinear crossing number, is complete for the existential theory of the reals.
On the positive side, there are efficient algorithms for determining whether the crossing number is less than a fixed constant $k$. In other words, the problem is fixed-parameter tractable. It remains difficult for larger $k$, such as $k = |V|/2$. There are also efficient approximation algorithms for approximating the crossing number on graphs of bounded degree, which use the general and previously developed framework of Bhat and Leighton. In practice heuristic algorithms are used, such as the simple algorithm which starts with no edges and continually adds each new edge in a way that produces the fewest additional crossings possible. These algorithms are used in the Rectilinear Crossing Number distributed computing project.
The crossing number inequality
For an undirected simple graph $G$ with $n$ vertices and $e$ edges such that $e \ge 7n$, the crossing number is always at least

$$\operatorname{cr}(G) \ge \frac{e^3}{29 n^2}.$$
This relation between edges, vertices, and the crossing number was discovered independently by Ajtai, Chvátal, Newborn, and Szemerédi, and by Leighton. It is known as the crossing number inequality or crossing lemma.
The constant 29 is the best known to date, and is due to Ackerman. The constant 7 can be lowered to 4, but at the expense of replacing 29 with the worse constant of 64.
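In this form the bound is a one-line computation; a small Python sketch (names illustrative):

# Lower bound on the crossing number from the crossing lemma,
# valid in the regime e >= 7n.
def cr_lower_bound(n: int, e: int) -> float:
    if e < 7 * n:
        raise ValueError("bound stated only for e >= 7n")
    return e**3 / (29 * n**2)

print(cr_lower_bound(100, 1000))  # ~3448.3: any drawing has >3448 crossings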
The motivation of Leighton in studying crossing numbers was for applications to VLSI design in theoretical computer science. Later, Székely also realized that this inequality yielded very simple proofs of some important theorems in incidence geometry, such as Beck's theorem and the Szemerédi-Trotter theorem, and Tamal Dey used it to prove upper bounds on geometric k-sets.
Variations
If edges are required to be drawn as straight line segments, rather than arbitrary curves, then some graphs need more crossings. The rectilinear crossing number is defined to be the minimum number of crossings of a drawing of this type. It is always at least as large as the crossing number, and is larger for some graphs. It is known that, in general, the rectilinear crossing number cannot be bounded by a function of the crossing number. The rectilinear crossing numbers for $K_5$ through $K_{12}$ are 1, 3, 9, 19, 36, 62, 102, 153 (sequence A014540), and values up to $K_{27}$ are known, with $K_{28}$ requiring either 7233 or 7234 crossings. Further values are collected by the Rectilinear Crossing Number project.
A graph has local crossing number $k$ if it can be drawn with at most $k$ crossings per edge, but not fewer.
The graphs that can be drawn with at most $k$ crossings per edge are also called $k$-planar.
Other variants of the crossing number include the pairwise crossing number (the minimum number of pairs of edges that cross in any drawing) and the odd crossing number (the number of pairs of edges that cross an odd number of times in any drawing). The odd crossing number is at most equal to the pairwise crossing number, which is at most equal to the crossing number. However, by the Hanani–Tutte theorem, whenever one of these numbers is zero, they all are. Schaefer surveys many such variants.
See also
Planarization, a planar graph formed by replacing each crossing by a new vertex
Three utilities problem, the puzzle that asks whether $K_{3,3}$ can be drawn with 0 crossings
References
Topological graph theory
Graph invariants
Graph drawing
Geometric intersection
NP-complete problems | Crossing number (graph theory) | Mathematics | 2,332 |