Dataset schema: id (int64, 39 to 79M); url (string, length 31 to 227); text (string, length 6 to 334k); source (string, length 1 to 150); categories (list, length 1 to 6); token_count (int64, 3 to 71.8k); subcategories (list, length 0 to 30).
4,275,935
https://en.wikipedia.org/wiki/Sleeping%20Beauty%20problem
The Sleeping Beauty problem, also known as the Sleeping Beauty paradox, is a puzzle in decision theory in which an ideally rational epistemic agent is told she will be awoken from sleep either once or twice according to the toss of a coin. Each time she will have no memory of whether she has been awoken before, and is asked what her degree of belief that “the outcome of the coin toss is Heads” ought to be when she is first awakened. History The problem was originally formulated in unpublished work in the mid-1980s by Arnold Zuboff (the work was later published as "One Self: The Logic of Experience") followed by a paper by Adam Elga. A formal analysis of the problem of belief formation in decision problems with imperfect recall was provided first by Michele Piccione and Ariel Rubinstein in their paper: "On the Interpretation of Decision Problems with Imperfect Recall" where the "paradox of the absent minded driver" was first introduced and the Sleeping Beauty problem discussed as Example 5. The name "Sleeping Beauty" was given to the problem by Robert Stalnaker and was first used in extensive discussion in the Usenet newsgroup rec.puzzles in 1999. A more recent paper by Peter Winkler discussing different sides of the problem was published in The American Mathematical Monthly in 2017. The problem As originally published by Elga, the problem was: Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads? The only significant difference from Zuboff's unpublished versions is the number of potential wakings; Zuboff used a large number. Elga created a schedule within which to implement his solution, and this has become the canonical form of the problem: Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only. If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday. In either case, she will be awakened on Wednesday without interview and the experiment ends. Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?" Solutions This problem continues to produce ongoing debate. Thirder position The thirder position argues that the probability of heads is 1/3. Adam Elga argued for this position originally as follows: Suppose Sleeping Beauty is told and she comes to fully believe that the coin landed tails. By even a highly restricted principle of indifference, given that the coin lands tails, her credence that it is Monday should equal her credence that it is Tuesday, since being in one situation would be subjectively indistinguishable from the other. 
In other words, P(Monday|Tails) = P(Tuesday|Tails), and thus P(Tails and Tuesday) = P(Tails and Monday). Suppose now that Sleeping Beauty is told upon awakening and comes to fully believe that it is Monday. Since the objective chance of the coin landing heads equals the chance of it landing tails, it should hold that P(Tails|Monday) = P(Heads|Monday), and thus P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday). Since these three outcomes are exhaustive and exclusive for one trial (and thus their probabilities must add to 1), the probability of each is then 1/3 by the previous two steps in the argument. An alternative argument is as follows: Credence can be viewed as the amount a rational risk-neutral bettor would wager if the payoff for being correct is 1 unit (the wager itself being lost either way). In the heads scenario, Sleeping Beauty would spend her wager amount one time, and receive 1 unit for being correct. In the tails scenario, she would spend her wager amount twice, and receive nothing. Her expected winnings are therefore 0.5 units, while her expected outlay is 1.5 times her wager, so she breaks even exactly when her wager is 1/3. Halfer position David Lewis responded to Elga's paper with the position that Sleeping Beauty's credence that the coin landed heads should be 1/2. Sleeping Beauty receives no new non-self-locating information throughout the experiment because she is told the details of the experiment. Since her credence before the experiment is P(Heads) = 1/2, she ought to continue to have a credence of P(Heads) = 1/2 since she gains no new relevant evidence when she wakes up during the experiment. This directly contradicts one of the thirder's premises, since it means P(Tails|Monday) = 1/3 and P(Heads|Monday) = 2/3. Double halfer position The double halfer position argues that both P(Heads) and P(Heads|Monday) equal 1/2. Mikaël Cozic, in particular, argues that context-sensitive propositions like "it is Monday" are in general problematic for conditionalization and proposes the use of an imaging rule instead, which supports the double halfer position. Ambiguous-question position Another approach to the Sleeping Beauty problem is to assert that the problem, as stated, is ambiguous. This view asserts that the thirder and halfer positions are both correct answers, but to different questions. The key idea is that the question asked of Sleeping Beauty, "what is your credence that the coin came up heads", is ambiguous. The question must be disambiguated based on the particular event whose probability we wish to measure. The two disambiguations are: "what is your credence that the coin landed heads in the act of tossing" and "what is your credence that the coin landed heads in the toss to set up this awakening"; to which, the correct answers are 1/2 and 1/3 respectively. Another way to see the two different questions is to simplify the Sleeping Beauty problem as follows. Imagine tossing a coin: if the coin comes up heads, a green ball is placed into a box; if, instead, the coin comes up tails, two red balls are placed into a box. We repeat this procedure a large number of times until the box is full of balls of both colours. A single ball is then drawn from the box. In this setting, the question from the original problem resolves to one of two different questions: "what is the probability that a green ball was placed in the box" and "what is the probability a green ball was drawn from the box".
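The ball-and-box simplification just described can be checked directly by simulation. The following is a minimal sketch (not part of the article; the round count and use of Python are our own choices), assuming a fair coin:

```python
# Minimal sketch of the ball-and-box version described above (assumptions: fair coin,
# heads -> one green ball placed, tails -> two red balls placed).
import random

random.seed(0)
rounds = 100_000
green_rounds = 0          # rounds in which a green ball was placed
box = []                  # every ball ever placed, tagged by colour

for _ in range(rounds):
    if random.random() < 0.5:        # heads
        box.append("green")
        green_rounds += 1
    else:                            # tails
        box.extend(["red", "red"])

print(green_rounds / rounds)             # ~0.5  : "a green ball was placed this round"
print(box.count("green") / len(box))     # ~0.333: "a uniformly drawn ball is green"
```

The first estimate answers the "placed" question and converges to 1/2; the second answers the "drawn" question and converges to 1/3, mirroring the halfer and thirder answers respectively.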
These questions ask for the probability of two different events, and thus can have different answers, even though both events are causally dependent on the coin landing heads. (This fact is even more obvious when one considers the complementary questions: "what is the probability that two red balls were placed in the box" and "what is the probability that a red ball was drawn from the box".) This view evidently violates the principle that, if event A happens if and only if event B happens, then we should have equal credence for event A and event B. This principle is not applicable because the sample spaces are different. Connections to other problems Credence about what precedes awakenings is a core question in connection with the anthropic principle. Variations Extreme Sleeping Beauty This differs from the original in that there are one million and one wakings if tails comes up. It was formulated by Nick Bostrom. Sailor's Child problem The Sailor's Child problem, introduced by Radford M. Neal, is somewhat similar. It involves a sailor who regularly sails between ports. In one port there is a woman who wants to have a child with him, across the sea there is another woman who also wants to have a child with him. The sailor cannot decide if he will have one or two children, so he will leave it up to a coin toss. If Heads, he will have one child, and if Tails, two children (one with each woman; presumably the children will never meet). But if the coin lands on Heads, which woman would have his child? He would decide this by looking at The Sailor's Guide to Ports and the woman in the port that appears first would be the woman that he has a child with. You are his child. You do not have a copy of The Sailor's Guide to Ports. What is the probability that you are his only child, thus the coin landed on Heads (assume a fair coin)? See also Bayesian probability Boy or girl paradox Doomsday argument Monty Hall problem References Other works discussing the Sleeping Beauty problem Colombo, M., Lai, J. & Crupi, V. Sleeping Beauty Goes to the Lab: The Psychology of Self-Locating Evidence. Rev.Phil.Psych. 10, 173–185 (2019). https://doi.org/10.1007/s13164-018-0381-8 Neal, R. (2006). Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning, preprint Titelbaum, M. (2013). Quitting Certainties, 210–229, 233–237, 241–249, 250, 276–277 External links Terry Horgan: Sleeping Beauty Awakened: New Odds at the Dawn of the New Day (review paper with references) Anthropic Preprint Archive: The Sleeping Beauty Problem: An archive of papers on this problem Phil Papers Entry on Sleeping Beauty (a complete bibliography of papers on the problem) Twoplustwo thread discussing the sleeping beauty problem in depth Epistemology Probability problems Probability theory paradoxes Puzzles
Sleeping Beauty problem
[ "Mathematics" ]
2,151
[ "Probability problems", "Mathematical problems", "Probability theory paradoxes", "Mathematical paradoxes" ]
4,276,393
https://en.wikipedia.org/wiki/Group%20with%20operators
In abstract algebra, a branch of mathematics, a group with operators or Ω-group is an algebraic structure that can be viewed as a group together with a set Ω that operates on the elements of the group in a special way. Groups with operators were extensively studied by Emmy Noether and her school in the 1920s. She employed the concept in her original formulation of the three Noether isomorphism theorems. Definition A group with operators (G, Ω) can be defined as a group G together with an action of a set Ω on G, written (g, ω) ↦ g^ω, that is distributive relative to the group law: (gh)^ω = g^ω h^ω. For each ω ∈ Ω, the application g ↦ g^ω is then an endomorphism of G. From this, it results that an Ω-group can also be viewed as a group G with an Ω-indexed family of endomorphisms of G. Ω is called the operator domain. The associated endomorphisms are called the homotheties of G. Given two groups G, H with the same operator domain Ω, a homomorphism of groups with operators from (G, Ω) to (H, Ω) is a group homomorphism φ : G → H satisfying φ(g^ω) = φ(g)^ω for all ω ∈ Ω and g ∈ G. A subgroup S of G is called a stable subgroup, Ω-subgroup or Ω-invariant subgroup if it respects the homotheties, that is s^ω ∈ S for all ω ∈ Ω and s ∈ S. Category-theoretic remarks In category theory, a group with operators can be defined as an object of a functor category Grp^M where M is a monoid (i.e. a category with one object) and Grp denotes the category of groups. This definition is equivalent to the previous one, provided Ω is a monoid (if not, we may expand it to include the identity and all compositions). A morphism in this category is a natural transformation between two functors (i.e., two groups with operators sharing the same operator domain M). Again we recover the definition above of a homomorphism of groups with operators (with f the component of the natural transformation). A group with operators can also be viewed as a mapping Ω → End(G), where End(G) is the set of group endomorphisms of G. Examples Given any group G, (G, ∅) is trivially a group with operators. Given a module M over a ring R, R acts by scalar multiplication on the underlying abelian group of M, so (M, R) is a group with operators. As a special case of the above, every vector space V over a field K is a group with operators (V, K). Applications The Jordan–Hölder theorem also holds in the context of groups with operators. The requirement that a group have a composition series is analogous to that of compactness in topology, and can sometimes be too strong a requirement. It is natural to talk about "compactness relative to a set", i.e. to talk about composition series where each (normal) subgroup is an operator-subgroup relative to the operator set X of the group in question. See also Group action Notes References Group actions (mathematics) Universal algebra
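As a concrete illustration of the Examples section above, the following minimal sketch (our own; the choice of Z/12Z and the operator set {2, 3, 5} are illustrative assumptions) checks the defining distributivity of an operator action and the stability of a subgroup:

```python
# Minimal sketch (illustrative, not from the article): the additive group Z/12Z with
# the operator set Omega = {2, 3, 5} acting by multiplication. Each operator is an
# endomorphism, i.e. (g + h)^w = g^w + h^w, and the subgroup {0, 4, 8} is Omega-invariant.
N = 12
G = range(N)
OMEGA = [2, 3, 5]

def act(g, w):                    # the operator action g -> g^w
    return (g * w) % N

# Distributivity of every operator over the group law (here: addition mod 12).
assert all(act((g + h) % N, w) == (act(g, w) + act(h, w)) % N
           for g in G for h in G for w in OMEGA)

# The subgroup generated by 4 is a stable (Omega-invariant) subgroup.
S = {0, 4, 8}
assert all(act(s, w) in S for s in S for w in OMEGA)
print("distributivity and Omega-invariance hold")
```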
Group with operators
[ "Physics", "Mathematics" ]
596
[ "Fields of abstract algebra", "Universal algebra", "Group actions", "Symmetry" ]
4,276,418
https://en.wikipedia.org/wiki/Total%20ring%20of%20fractions
In abstract algebra, the total quotient ring or total ring of fractions is a construction that generalizes the notion of the field of fractions of an integral domain to commutative rings R that may have zero divisors. The construction embeds R in a larger ring, giving every non-zero-divisor of R an inverse in the larger ring. If the homomorphism from R to the new ring is to be injective, no further elements can be given an inverse. Definition Let R be a commutative ring and let S be the set of elements that are not zero divisors in R; then S is a multiplicatively closed set. Hence we may localize the ring R at the set S to obtain the total quotient ring S⁻¹R = Q(R). If R is a domain, then S = R − {0} and the total quotient ring is the same as the field of fractions. This justifies the notation Q(R), which is sometimes used for the field of fractions as well, since there is no ambiguity in the case of a domain. Since S in the construction contains no zero divisors, the natural map R → Q(R) is injective, so the total quotient ring is an extension of R. Examples For a product ring A × B, the total quotient ring Q(A × B) is the product of total quotient rings Q(A) × Q(B). In particular, if A and B are integral domains, it is the product of quotient fields. For the ring of holomorphic functions on an open set D of complex numbers, the total quotient ring is the ring of meromorphic functions on D, even if D is not connected. In an Artinian ring, all elements are units or zero divisors. Hence the set of non-zero-divisors is the group of units of the ring, R^×, and so Q(R) = (R^×)⁻¹R. But since all these elements already have inverses, Q(R) = R. In a commutative von Neumann regular ring R, the same thing happens. Suppose a in R is not a zero divisor. Then in a von Neumann regular ring a = axa for some x in R, giving the equation a(xa − 1) = 0. Since a is not a zero divisor, xa = 1, showing a is a unit. Here again, Q(R) = R. In algebraic geometry one considers a sheaf of total quotient rings on a scheme, and this may be used to give the definition of a Cartier divisor. The total ring of fractions of a reduced ring If A is a reduced ring with only finitely many minimal prime ideals p1, ..., pr, then Q(A) is isomorphic to the product of the rings Q(A/pi), each of which is a field. Proof: Every element of Q(A) is either a unit or a zero divisor. Thus, any proper ideal I of Q(A) is contained in the set of zero divisors of Q(A); that set equals the union of the minimal prime ideals since Q(A) is reduced. By prime avoidance, I must be contained in some piQ(A). Hence, the ideals piQ(A) are maximal ideals of Q(A). Also, their intersection is zero. Thus, by the Chinese remainder theorem applied to Q(A), Q(A) is isomorphic to the product of the rings Q(A)/piQ(A). Let S be the multiplicatively closed set of non-zero-divisors of A. By exactness of localization, Q(A)/piQ(A) = S⁻¹(A/pi), which is already a field and so must be Q(A/pi). Generalization If R is a commutative ring and S is any multiplicatively closed set in R, the localization S⁻¹R can still be constructed, but the ring homomorphism from R to S⁻¹R might fail to be injective. For example, if 0 ∈ S, then S⁻¹R is the trivial ring. Citations References Commutative algebra Ring theory
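The Artinian example above can be made concrete with the finite rings Z/nZ. The sketch below (our own illustration, not from the article) verifies that the non-zero-divisors of Z/nZ are exactly the units, so that localizing at them returns the same ring, i.e. Q(Z/nZ) = Z/nZ:

```python
# Minimal sketch (illustrative): in Z/nZ every element is a unit or a zero divisor,
# so the multiplicative set S of non-zero-divisors is exactly the unit group and
# the total quotient ring Q(Z/nZ) is Z/nZ itself.
from math import gcd

def classify(n):
    units         = {a for a in range(n) if gcd(a, n) == 1}
    zero_divisors = {a for a in range(n)
                     if any((a * b) % n == 0 for b in range(1, n))}
    non_zero_divisors = set(range(n)) - zero_divisors
    return units, zero_divisors, non_zero_divisors

for n in (6, 12):
    units, zdivs, s = classify(n)
    assert s == units          # inverting S adds nothing new: Q(Z/nZ) = Z/nZ
    print(n, sorted(units), sorted(zdivs))
```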
Total ring of fractions
[ "Mathematics" ]
734
[ "Fields of abstract algebra", "Commutative algebra", "Ring theory" ]
4,276,759
https://en.wikipedia.org/wiki/Avometer
AVOmeter is a British trademark for a line of multimeters and electrical measuring instruments; the brand is now owned by the Megger Group Limited. The first Avometer was made by the Automatic Coil Winder and Electrical Equipment Co. in 1923, and measured direct voltage, direct current and resistance. Possibly the best known multimeter of the range was the Model 8, which was produced in various versions from May 1951 until 2008; the last version was the Mark 7. The multimeter is often called simply an AVO, because the company logo carries the first letters of 'amps', 'volts' and 'ohms'. The design concept is due to the Post Office engineer Donald Macadie, who at the time of the introduction of the original AVOmeter in 1923 was a senior officer in the Post Office Factories Department in London. Technical features The original AVOmeter was designed to measure direct current (3 ranges, 0.12, 1.2 & 12 A), direct voltage (3 ranges, 12, 120 & 600 V) and resistance (single range, 0 - 10,000 ohms, 225 ohms mid-scale). All ranges could be selected by a single rotary switch which set both the function and the range value. A second switch brought a rheostat into circuit in series with the instrument and could be used to control the current through a device under test and the meter. The movement drew 12 mA for full-scale deflection and used a "universal shunt" permanently in parallel with the movement which increased the input terminal full-scale current to 16.6 mA, corresponding to 60 ohms per volt. It had a knife edge pointer and an anti-parallax mirror. Additional patents were taken out in Czechoslovakia (1923), Austria, France, Germany, and Switzerland (1924). A US patent followed in 1926. The case of the original AVOmeter was a comb-jointed oak box with an ebonite lower front panel. The upper part of the front panel was cast aluminium. After around three years' production, the volume of sales was sufficient to justify a redesign of the instrument, now with a movement whose full-scale current was 6 mA. The redesigned meter had 13 ranges and was constructed on a one-piece phenolic moulding with the characteristic "kidney" shaped window. The back case was a deep drawn aluminium can on the back of which was a summary of the operating instructions, a feature of all future AVOmeters. The movement was originally protected by a short length of wire, selected to act as a fuse, soldered to supports on the back of the movement. Later versions had a calibrated, screw-in fuse on the front panel. After copper oxide instrument rectifiers became available in the late 1920s, a 20-range "Universal" version of the AVOmeter was introduced in 1931 having both direct and alternating voltage and current ranges. Unlike many similar multimeter designs, all Universal AVOmeters, with the exception of the short-lived "High Resistance (HR) AVOmeter" (c. 1948 - 1951), could measure up to either 10 A or 12 A (AC) depending on the model. From 1933, the number of available voltage and current ranges in Universal AVOmeters was doubled by incorporating a dual sensitivity movement circuit. The higher sensitivity was selected by a push button switch marked ÷2 (Divide by two) signifying that the pointer indication should be halved. For the Model 8, this feature was not used but the push button was retained for reversing the direction of deflection of the moving coil. A design feature of AVOmeters was simplicity of use and towards this end, all measurements could generally be made using only two input terminals. 
However, the AVOmeter HR had additional 2500 V (AC) and (DC) ranges which used the corresponding 1000 V ranges, and were connected through two additional terminals at the top corners of the front panel. This feature was continued in the Model 8 and, with an increase to 3000 V to match their 1 - 3 - 10 ranges sequence, in the Model 9, Marks II and IV and the Model 8 Mark V. The 3000 V ranges were deleted in the Model 8 Marks 6 and 7 due to concerns for compliance with contemporary safety standards. This also led to a significant cost saving by eliminating the high voltage multiplier resistors. As an ohmmeter the Model 8 Mark II measures from 1 Ω up to 20 MΩ in three ranges. The instrument has an accuracy of ±1% of FSD on DC current ranges, ±2% of FSD on DC voltage ranges, ±2.25% of FSD on all AC ranges and ±5% of reading (at centre scale only) on resistance ranges. Its maximum current draw of 50 μA at full-scale deflection (corresponding to 20,000 ohms per volt) is sufficient in most cases to reduce voltage measurement error due to circuit loading by the meter to an acceptable level. The AVOmeter design incorporates an electrical interlock which prevents AC & DC ranges being selected simultaneously. For example, none of the DC ranges, current or voltage, can be connected unless the AC switch is set to its "DC" position. On a Model 8, this is the position with the AC switch arrow vertical. Similarly, to use the AC ranges, the DC switch must be set to its "AC" position. With the DC switch set to its "AC" position and the AC switch set to "DC", no current can flow through the instrument. However whenever any moving coil instrument is likely to be subjected to heavy shock in transit, it is good practice to damp the movement by short circuiting the moving coil using a heavy gauge wire connected across the terminals. On earlier Avometers, this may be done by short-circuiting the input terminals and selecting the most sensitive direct current range. The Model 8 Mark V, 6 & 7 were provided with an "OFF" position on the DC switch which both disconnected the meter's terminals and short-circuited the moving coil. AVOmeters designed from 1936 onwards were fitted with an overload cut-out operated by the moving coil frame hitting either forward or reverse sprung end stops. The Model 7 was the first type to use the end stop cut-out and it also featured an acceleration trip which, in the event of heavy overloads, could open the cut-out before the pointer had reached two-thirds of full scale. The acceleration cut-out was not however used in the Model 8. From the Mark III version, the Model 8 had further protection by a fuse on its resistance ranges and fuse protection was provided on all ranges of the Model 8 Marks 6 & 7. AVO multimeters were almost ubiquitous in British manufacturing and service industry, research and development and higher and further education. They were also widely used by utilities, government agencies and the British armed forces. A number of special versions were produced to British Admiralty and Air Ministry specifications and for other customers. The Model 8 Marks V, 6 & 7 were designed to meet a NATO specification and were standard issue to NATO services. Many commercial and military service manuals specified that values for measurements of current or voltage had been made with a Model 7 or Model 8 AVOmeter. Advertisements of the late 1930s compared the utility of the AVOmeter to the slide rule. Even nowadays it can still be found in regular use. 
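The sensitivity figures quoted above follow directly from the full-scale current of the movement. A small numeric sketch (the 16.6 mA, 50 µA and ohms-per-volt figures come from the text; the loading example with a 10 kΩ source on the 10 V range is our own illustrative assumption):

```python
# Minimal sketch: the "ohms per volt" rating of a moving-coil voltmeter is the
# reciprocal of its full-scale current, and its input resistance on a given range is
# (ohms per volt) x (range voltage).
def ohms_per_volt(full_scale_current_amps):
    return 1.0 / full_scale_current_amps

print(round(ohms_per_volt(16.6e-3)))   # ~60 ohm/V  (original 1923 AVOmeter)
print(round(ohms_per_volt(50e-6)))     # 20000 ohm/V (Model 8, DC ranges)

# Loading error when measuring a source with a given internal (Thevenin) resistance:
# the meter resistance forms a voltage divider with the source resistance.
def loading_error_percent(range_volts, opv, source_resistance_ohms):
    r_meter = opv * range_volts
    return 100.0 * source_resistance_ohms / (source_resistance_ohms + r_meter)

# Measuring on the 10 V range across a 10 kilo-ohm source (illustrative values):
print(round(loading_error_percent(10, 20000, 10e3), 2))   # ~4.76 % low reading
```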
The earlier versions of models 7, 8 and 9 had a design flaw which resulted in many instruments sustaining damage to the movement in transit. Users would habitually 'switch off' the instrument by setting the AC switch to 'DC' and the DC switch to 'AC'. With the switches at these settings, the movement is completely undamped. The operating manuals for the affected instruments did contain a note that they should not be switched to 'AC' and 'DC' (or the blank position either side of the 'AC' and 'DC') though failed to explain why. The problem was solved on later instruments by providing the DC switch with an 'OFF' position (see illustration above). Present times Despite continuing demand from customers, production was stopped in 2008, reportedly due to increasing problems with suppliers of mechanical parts. The last meter to leave the factory was an AVOmeter Eight Mk 7 (Serial Number 6110-610/081208/5166) which was presented in February 2010 to the winner of a competition run by the Megger company. Principal models General purpose multimeters "The AVOmeter" - 1923 to 1928 7 ranges direct current, direct voltage and resistance (DC) AVOmeter - 1928 to 1939, Originally 13 ranges, later extended to 22 ranges through use of "divide by two" push button switch Universal AVOmeter - 1931 to 1939, originally 20 ranges, later extended 34 and 36 ranges through use of "divide by two" push button switch, replaced by Model 40 Universal AVOmeter Model 40 1939 to c. 1986. A development of the 36-range Universal AVOmeter incorporating automatic cut-out and internal construction similar to the Model 7 (Basic ranges to 12 A and 1200 V, the former extendable with accessory current shunts). 167 ohms/volt. "High Sensitivity" meters principally for Radio and Electronics Universal AVOmeter 50-range Later known as Model 7 (1936 to c. 1986): A "High Sensitivity" multimeter for radio servicing. (Basic ranges to 10 A and 1000 V, the former extendable with accessory current shunts. A power factor and wattage unit was also available). 500 ohms/volt with divide by two button in normal position, 1000 ohms per volt with divide by two button pressed. AVOmeter model 8: May 1951 to November 2008 (7 'Marks') (Basic ranges to 10 A and 1000, 2500 or 3000 V depending on Mk.). 20,000 ohms/volt DC, 1000 ohms/volt AC. AVOmeter model 9: Essentially similar to model 8 but with international symbols rather than letter markings for the DC and AC switches (Basic ranges to 10 A and 3000 V). 20,000 ohms/volt DC, 1000 ohms/volt AC. (The features of the Models 8 and 9 were combined from the Model 8 Mark V of 1972, when the Model 9 was discontinued). Special Purpose Multimeters AVOmeter model 12: Designed for automotive use. (Ranges 3.6 A & 36 A, 9 V, 18 V & 36 V DC, current ranges extendable with accessory shunts), 9 V, 18 V, 90 V & 360 V (AC). Heavy Duty AVOmeter: A smaller rugged multimeter with a single selector switch. Originally designed at the request of the Great Western Railway for railway signalling purposes but first supplied after the GWR became the Western Region of British Railways in 1948. Later also sold with alternative ranges for the commercial market. (Basic ranges to 10 A and 1000 V). "Minor" Models AVOminor (1935 to 1952) - A small instrument with direct current, direct voltage and resistance ranges only. Ranges selected by plugging leads into required socket. Universal AVOminor (1936 to 1952) - A small instrument with AC & DC ranges selected by plugging leads into required socket. 
AVO Multiminor: Replacement for earlier 'Minor' AVOmeters. All ranges and functions selected by a single rotary switch. No automatic protection. A smaller version similar in size to small portable test meters. (Basic ranges to 1 A, DC only and 1000 V, both extendable with external multiplier and shunts). 10,000 ohms/volt DC, 1000 ohms/volt AC. Clamp meter: Principally for higher currents (Ranges 300 A, 600 A, 1200 A, 150 V, 300 V & 600 V all AC only). Sensitivity unknown. All current and voltage ranges for above are both AC and DC unless otherwise stated. Other products The company manufactured geiger counters for civil defence use during the 1950s and 60s. The Automatic Coil Winder & Electrical Equipment Co. Ltd. made many other types of instruments, including a line of valve (vacuum tube) testers. References External links Model 7 Avometer Model 8 Avometer Electronic test equipment Electrical test equipment
Avometer
[ "Technology", "Engineering" ]
2,516
[ "Electronic test equipment", "Measuring instruments", "Electrical test equipment" ]
4,276,787
https://en.wikipedia.org/wiki/Java%20Research%20License
The Java Research License (JRL) is a software distribution license created by Sun in an effort to simplify and relax the terms from the "research section" of the Sun Community Source License. Sun's J2SE 1.6.0, Mustang, is licensed under the JRL as well as many projects at Java.net. The JRL was introduced in 2003 to try to "make things a lot more friendly to people doing academic research" into the Java language, before the core of Java was made open source in 2006. Although the JRL has elements of an open source license, the terms forbid any commercial use and are thus incompatible with both the Free Software Definition and the Open Source Definition. The JRL is a research license to be used for non-commercial academic uses. See also Sun Microsystems OpenJDK References External links (archived) Software licensing Sun Microsystems
Java Research License
[ "Technology" ]
182
[ "Computing stubs", "Software stubs" ]
4,277,015
https://en.wikipedia.org/wiki/Captopril%20challenge%20test
The captopril challenge test (CCT) is a non-invasive medical test that measures the change in renin plasma-levels in response to administration of captopril, an angiotensin converting enzyme inhibitor. It is used to assist in the diagnosis of renal artery stenosis. It is not generally considered a useful test for children, and more suitable options are available for adult cases. Procedure Plasma concentration of renin is measured prior to and following the administration of captopril. The CCT is considered positive if the renin levels increase substantially or the baseline renin level is abnormally high. In adults CCT in adults is known to have high sensitivity, but a low specificity. Subtraction angiography is considered a more suitable test for renal artery stenosis in adults. See also Captopril suppression test - used to diagnose primary aldosteronism References Blood tests Dynamic endocrine function tests
Captopril challenge test
[ "Chemistry" ]
190
[ "Blood tests", "Chemical pathology" ]
4,277,022
https://en.wikipedia.org/wiki/David%20Kirk%20%28scientist%29
David Blair Kirk (born 1960) is a computer scientist and former chief scientist and vice president of architecture at NVIDIA. As of 2019, he is a Venture Partner at DigitalDx Ventures, and an independent consultant and advisor. Kirk holds B.S. and M.S. degrees in Mechanical Engineering from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees in Computer Science from the California Institute of Technology. From 1989 to 1991, Kirk was an engineer for Apollo Systems Division of Hewlett-Packard. From 1993 to 1996, Kirk was Chief Scientist and Head of Technology for Crystal Dynamics, a video game manufacturing company. From 1997 to 2009 he was NVIDIA's chief scientist and he is an NVIDIA Fellow. In 2002, Kirk received the ACM SIGGRAPH Computer Graphics Achievement Award for his significant contributions to bringing high performance graphics hardware to the mass market. In 2006, Kirk was elected a member of the National Academy of Engineering for his role in bringing high-performance graphics to personal computers. Kirk is the inventor of 50 patents and patent applications relating to graphics design and underlying graphics algorithms. Books References External links NVIDIA Corporate Biography 1960 births Living people Nvidia people Computer graphics professionals Members of the United States National Academy of Engineering
David Kirk (scientist)
[ "Technology" ]
261
[ "Computing stubs", "Computer specialist stubs" ]
4,277,086
https://en.wikipedia.org/wiki/Wigner%E2%80%93Weyl%20transform
In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture. Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. This mapping was originally devised by Hermann Weyl in 1927 in an attempt to map symmetrized classical phase space functions to operators, a procedure known as Weyl quantization. It is now understood that Weyl quantization does not satisfy all the properties one would require for consistent quantization and therefore sometimes yields unphysical answers. On the other hand, some of the nice properties described below suggest that if one seeks a single consistent procedure mapping functions on the classical phase space to operators, the Weyl quantization is the best option: a sort of normal coordinates of such maps. (Groenewold's theorem asserts that no such map can have all the ideal properties one would desire.) Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase-space and operator representations, and yields insight into the workings of quantum mechanics. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function. In contrast to Weyl's original intentions in seeking a consistent quantization scheme, this map merely amounts to a change of representation within quantum mechanics; it need not connect "classical" with "quantum" quantities. For example, the phase-space function may depend explicitly on the reduced Planck constant ħ, as it does in some familiar cases involving angular momentum. This invertible representation change then allows one to express quantum mechanics in phase space, as was appreciated in the 1940s by Hilbrand J. Groenewold and José Enrique Moyal. In more generality, Weyl quantization is studied in cases where the phase space is a symplectic manifold, or possibly a Poisson manifold. Related structures include the Poisson–Lie groups and Kac–Moody algebras. Definition of the Weyl quantization of a general observable The following explains the Weyl transformation on the simplest, two-dimensional Euclidean phase space. Let the coordinates on phase space be (q, p), and let f be a function defined everywhere on phase space. In what follows, we fix operators P and Q satisfying the canonical commutation relations, such as the usual position and momentum operators in the Schrödinger representation. We assume that the exponentiated operators e^{iaQ} and e^{ibP} constitute an irreducible representation of the Weyl relations, so that the Stone–von Neumann theorem (guaranteeing uniqueness of the canonical commutation relations) holds. Basic formula The Weyl transform (or Weyl quantization) Φ[f] of the function f is given by an operator in Hilbert space defined by integrating f against exponentials of the operators Q and P (one common convention is written out below). Throughout, ħ is the reduced Planck constant. It is instructive to perform the q and p integrals in this formula first, which has the effect of computing the ordinary Fourier transform of the function f, while leaving the operator e^{i(aQ+bP)}. In that case, the Weyl transform can be written in terms of that Fourier transform. 
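One common convention for the basic formula referred to above, together with its rewriting in terms of the Fourier transform of f, is sketched below; the specific normalization is our assumption, since conventions differ between references:

```latex
% Weyl quantization of f(q,p), in one common convention (normalization assumed here):
\Phi[f] \;=\; \frac{1}{(2\pi)^2}\iint\!\!\iint f(q,p)\,
      e^{\,i\bigl(a(Q-q)+b(P-p)\bigr)}\,dq\,dp\,da\,db .
% Carrying out the q and p integrals first produces the Fourier transform
% \tilde f(a,b) = \iint f(q,p)\, e^{-i(aq+bp)}\, dq\, dp, so that
\Phi[f] \;=\; \frac{1}{(2\pi)^2}\iint \tilde f(a,b)\, e^{\,i(aQ+bP)}\,da\,db .
```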
We may therefore think of the Weyl map as follows: we take the ordinary Fourier transform of the function f, but then when applying the Fourier inversion formula, we substitute the quantum operators Q and P for the original classical variables q and p, thus obtaining a "quantum version of f". A less symmetric form, but one that is handy for applications, exists as well. In the position representation The Weyl map may then also be expressed in terms of the integral kernel matrix elements of this operator. Inverse map The inverse of the above Weyl map is the Wigner map (or Wigner transform), introduced by Eugene Wigner, which takes the operator Φ back to the original phase-space kernel function f(q, p). For example, the Wigner map of the oscillator thermal distribution operator is a Gaussian in the phase-space variables. If one replaces the operator in that expression with an arbitrary operator, the resulting function f may depend on the reduced Planck constant ħ, and may well describe quantum-mechanical processes, provided it is properly composed through the star product, below. In turn, the Weyl map of the Wigner map is summarized by Groenewold's formula. Weyl quantization of polynomial observables While the above formulas give a nice understanding of the Weyl quantization of a very general observable on phase space, they are not very convenient for computing on simple observables, such as those that are polynomials in q and p. In later sections, we will see that on such polynomials, the Weyl quantization represents the totally symmetric ordering of the noncommuting operators Q and P. For example, the Wigner map of the quantum angular-momentum-squared operator L2 is not just the classical angular momentum squared, but it further contains an offset term proportional to ħ^2, which accounts for the nonvanishing angular momentum of the ground-state Bohr orbit. Properties Weyl quantization of polynomials The action of the Weyl quantization on polynomial functions of q and p is completely determined by the symmetric formula that sends the exponential e^{aq+bp} to e^{aQ+bP} for all complex numbers a and b. From this formula, it is not hard to show that the Weyl quantization of a function of the form q^m p^n gives the equally weighted average of all possible orderings of m factors of Q and n factors of P, summed over the set of permutations on N = m + n elements. For example, the Weyl quantization of qp is the symmetrized product (QP + PQ)/2. While this result is conceptually natural, it is not convenient for computations when m and n are large. In such cases, we can use instead McCoy's formula, which expresses the same operator as a binomial-weighted sum of reordered products. McCoy's expression gives an apparently different answer for low-order monomials from the totally symmetric expression above. There is no contradiction, however, since the canonical commutation relations allow for more than one expression for the same operator. (The reader may find it instructive to use the commutation relations to rewrite the totally symmetric formula for a small case in terms of reordered products of Q and P and compare it with McCoy's formula.) It is widely thought that the Weyl quantization, among all quantization schemes, comes as close as possible to mapping the Poisson bracket on the classical side to the commutator on the quantum side. (An exact correspondence is impossible, in light of Groenewold's theorem.) For example, Moyal showed that if one of two phase-space functions is a polynomial of degree at most 2 and the other is an arbitrary polynomial, then the Weyl quantization of their Poisson bracket equals 1/(iħ) times the commutator of their Weyl quantizations. Weyl quantization of general functions If f is a real-valued function, then its Weyl-map image Φ[f] is self-adjoint. If f is an element of Schwartz space, then Φ[f] is trace-class. More generally, Φ[f] is a densely defined unbounded operator. 
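The totally symmetric ordering rule for monomials described above can be checked mechanically. The following minimal sketch (our own, not from the article) averages over all distinct orderings of Q and P factors and then normal-orders the result using the rewrite PQ → QP − iħ, which encodes [Q, P] = iħ:

```python
# Minimal sketch (illustrative): Weyl (totally symmetric) ordering of q^m p^n as the
# equally weighted average over all orderings of m Q's and n P's, followed by normal
# ordering via P Q -> Q P - i*hbar.  A term is (word, power of i*hbar) with a rational
# coefficient; no external libraries beyond the standard library are used.
from itertools import permutations
from collections import defaultdict
from fractions import Fraction

def weyl_monomial(m, n):
    """Symmetric (Weyl) ordering of q^m p^n: a dict {word: coefficient}."""
    words = set(permutations('Q' * m + 'P' * n))
    return {w: Fraction(1, len(words)) for w in words}

def normal_order(terms):
    """Push every Q to the left of every P using P Q -> Q P - i*hbar."""
    done = defaultdict(Fraction)                 # (word, power of i*hbar) -> coefficient
    todo = [(w, 0, c) for w, c in terms.items()]
    while todo:
        w, k, c = todo.pop()
        i = next((j for j in range(len(w) - 1) if w[j:j + 2] == ('P', 'Q')), None)
        if i is None:
            done[(w, k)] += c
        else:
            todo.append((w[:i] + ('Q', 'P') + w[i + 2:], k, c))   # swapped term
            todo.append((w[:i] + w[i + 2:], k + 1, -c))           # commutator term, -(i*hbar)
    return {key: coeff for key, coeff in done.items() if coeff}

# Weyl(q^2 p) = (QQP + QPQ + PQQ)/3; normal ordering gives Q^2 P - i*hbar*Q.
print(normal_order(weyl_monomial(2, 1)))
# -> coefficients 1 for Q Q P (hbar power 0) and -1 for Q (hbar power 1)
```

For q²p this reproduces Q²P − iħQ, the symmetric ordering with its ħ-dependent offset made explicit.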
The map f ↦ Φ[f] is one-to-one on the Schwartz space (as a subspace of the square-integrable functions). See also Canonical commutation relation Deformation quantization Heisenberg group Moyal bracket Weyl algebra Functor Pseudo-differential operator Wigner quasi-probability distribution Stone–von Neumann theorem Phase space formulation of quantum mechanics Kontsevich quantization formula Gabor–Wigner transform Oscillator representation References Further reading (Sections I to IV of this article provide an overview of the Wigner–Weyl transform, the Wigner quasiprobability distribution, the phase space formulation of quantum mechanics and the example of the quantum harmonic oscillator.) Terence Tao's 2012 notes on Weyl ordering Mathematical quantization Mathematical physics Foundational quantum physics Concepts in physics
Wigner–Weyl transform
[ "Physics", "Mathematics" ]
1,589
[ "Applied mathematics", "Theoretical physics", "Foundational quantum physics", "Quantum mechanics", "Mathematical quantization", "nan", "Mathematical physics" ]
4,277,163
https://en.wikipedia.org/wiki/NGC%205548
NGC 5548 is a Type I Seyfert galaxy with a bright, active nucleus. This activity is caused by matter flowing onto a 65 million solar mass supermassive black hole at the core. Morphologically, this is an unbarred lenticular galaxy with tightly wound spiral arms, while shell and tidal tail features suggest that it has undergone a cosmologically recent merger or interaction event. NGC 5548 is approximately 245 million light years away and appears in the constellation Boötes. The apparent visual magnitude of NGC 5548 is approximately 13.3 in the V band. In 1943, this galaxy was one of twelve nebulae listed by American astronomer Carl Keenan Seyfert that showed broad emission lines in their nuclei. Members of this class of objects became known as Seyfert galaxies, and they were noted to have a higher than normal surface brightness in their nuclei. Observation of NGC 5548 during the 1960s with radio telescopes showed an enhanced level of radio emission. Spectrograms of the nucleus made in 1966 showed that the energized region was confined to a volume a few parsecs across, where temperatures were high and the plasma had a dispersion velocity of ±450 km/s. Among astronomers, the accepted explanation for the active nucleus in NGC 5548 is the accretion of matter onto a supermassive black hole (SMBH) at the core. This object is surrounded by an orbiting disk of accreted matter drawn in from the surroundings. As material is drawn into the outer parts of this disk, it becomes photoionized, producing broad emission lines in the optical and ultraviolet bands of the electromagnetic spectrum. A wind of ionized matter, organized in filamentary structures at distances of 1–14 light days from the center, is flowing outward in the direction perpendicular to the accretion disk plane. The mass of the central black hole can be estimated based on the properties of the emission lines in the core region. Combined measurements yield an estimated mass of roughly 6.5 × 10^7 solar masses; in other words, it is some 65 million times the mass of the Sun. This result is consistent with other methods of estimating the mass of the SMBH in the nucleus of NGC 5548. Matter is falling onto this black hole, and mass is also flowing outward from the core; both rates have been estimated from observations. The inner part of the accretion disk surrounding the SMBH forms a thick, hot corona spanning several light hours that is emitting X-rays. When this radiation reaches the optically thick part of the accretion disk at a radius of around 1–2 light days, the X-rays are converted into heat. References External links 5548 Unbarred lenticular galaxies Seyfert galaxies Markarian 1509 Boötes
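To put the quoted mass in context, the following small sketch (our own; constants rounded) computes the Schwarzschild radius implied by 65 million solar masses and expresses it in light-travel time, for comparison with the light-hour and light-day scales mentioned above:

```python
# Minimal sketch (illustrative): Schwarzschild radius of a 65-million-solar-mass black
# hole, expressed in light-travel time.
G     = 6.674e-11      # m^3 kg^-1 s^-2
c     = 2.998e8        # m/s
M_SUN = 1.989e30       # kg

M = 65e6 * M_SUN
r_s = 2 * G * M / c**2                 # Schwarzschild radius in metres
print(f"r_s = {r_s:.2e} m = {r_s / c / 60:.1f} light-minutes")
# ~1.9e11 m, i.e. roughly ten light-minutes; a corona spanning several light-hours and
# a disk radius of 1-2 light-days therefore lie tens to hundreds of r_s from the hole.
```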
NGC 5548
[ "Astronomy" ]
574
[ "Boötes", "Constellations" ]
4,277,314
https://en.wikipedia.org/wiki/Cellular%20repeater
A cellular repeater (also known as cell phone signal booster or cell phone signal amplifier) is a type of bi-directional amplifier used to improve cell phone reception. A cellular repeater system commonly consists of a donor antenna that receives and transmits signal from nearby cell towers, coaxial cables, a signal amplifier, and an indoor rebroadcast antenna. Common components Donor antenna A "donor antenna" is typically installed by a window or on the roof of a building and used to communicate back to a nearby cell tower. A donor antenna can be any of several types, but is usually directional or omnidirectional. An omnidirectional antenna (which broadcasts in all directions) is typically used for a repeater system that amplifies coverage for all cellular carriers. A directional antenna is used when a particular tower or carrier needs to be isolated for improvement. The use of a highly directional antenna can help improve the donor's signal-to-noise ratio, thus improving the quality of the signal redistributed inside a building. Indoor antenna Some cellular repeater systems can also include an omnidirectional antenna for rebroadcasting the signal indoors. Depending on attenuation from obstacles, the advantage of using an omnidirectional antenna is that the signal will be equally distributed in all directions. Motor vehicle antenna When it is raining and the motor vehicle windows are closed, a cell phone can lose between 50% and 100% of its reception. To restore reception, an antenna is placed outside the vehicle and wired to another antenna and amplifier inside the vehicle, which retransmit the mobile phone signal to the cell phone inside. Signal amplifier Cellular repeater systems include a signal amplifier. Standard GSM channel-selective repeaters (operated by telecommunication operators for coverage of large areas and big buildings) have output power around 2 W, while high-power repeaters have output power around 10 W. The power gain is the ratio of output power to input power, normally expressed in decibels. A repeater needs to secure sufficient isolation between the donor and the service antenna. When the isolation is lower than the actual gain plus a margin (of typically 5–15 dB), the repeater may go into loop oscillation. This oscillation can cause interference to the cellular network. The isolation may be improved by antenna type selection in a macro environment, which involves adjusting the angle between the donor and service antennas (ideally 180°), space separation (in a tower installation, the vertical distance between donor and service antennas is typically several meters), insertion into an attenuating environment (e.g. installing a metal mesh between donor and service antennas), and/or reduction of reflections (no near obstacles in front of the donor antenna such as trees or buildings). Isolation can also be improved by an integrated feature called ICE (interference cancellation equipment) offered in some products (e.g., NodeG, RFWindow). Activation of this feature increases the internal delay (by approximately 5 μs on top of the standard repeater delay) and consequently shortens the usable radius from the donor site. Amplification and filtering introduce a delay (typically between 5 and 15 μs), depending on the type of repeater and features used. Additional distance also adds propagation delay. Reasons for weak signal Rural areas In many rural areas the housing density is too low to make construction of a new base station commercially viable. 
Installing a home cellular repeater may remedy this. In flat rural areas the signal is unlikely to suffer from multipath interference. Building construction material Certain construction materials can attenuate cell phone signal strength. Older buildings, such as churches, often block cellular signals. Any building that has a significant thickness of concrete, or a large amount of metal used in its construction, will attenuate the signal. Concrete floors are often poured onto a metal pan, which completely blocks most radio signals. Some solid foam insulation and some fiberglass insulation used in roofs or exterior walls have foil backing, which can reduce transmittance. Energy efficient windows and metal window screens are also very effective at blocking radio signals. Some materials have peaks in their absorption spectra, which decrease signal strength. Building size Large buildings, such as warehouses, hospitals, and factories, often lack cellular reception. Low signal strength also tends to occur in underground areas (such as basements, and in shops and restaurants located towards the centre of shopping malls). In these cases, an external antenna is usually used. Multipath interference Even in urban areas (which usually have strong cellular signals throughout), there may be dead zones caused by destructive interference of waves. These usually have an area of a few blocks and will usually only affect one of the two frequency ranges used by cell phones. This happens because different wavelengths of the different frequencies interfere destructively at different points. Directional antennas can be helpful at overcoming this issue since they may be used to select a single path from several (see Multipath interference for more details). Diffraction and general attenuation The longer wavelengths have the advantage of diffracting more, and so line of sight is not as necessary to obtain a good signal. Because the frequencies that cell phones use are too high to reflect off the ionosphere as shortwave radio waves do, cell phone waves cannot travel via the ionosphere. (See Diffraction and Attenuation for more details). Different operating frequencies Repeaters are available for all of the GSM frequency bands. Some repeaters will handle different types of networks (such as multi-mode GSM and UMTS). Repeater systems are available for certain Satellite phone systems, allowing these to be used indoors without a clear line of sight to the satellite. Regional approval Approval in the US by the FCC It used to be legal to use the low power devices available for home and small scale use in commercial areas (offices, shops, bars, etc.). On February 20, 2013, the FCC released a Report & Order, thus establishing two Safe Harbors and defining the use of "network safe" consumer boosters on licensed spectrum. The Safe Harbors represent a compromise solution between Technology Manufacturers and Wireless Operators. Only a few companies have a product compatible with the new FCC regulations. The FCC has defined two types of repeaters: Wide-band (or broadband) signal boosters are usually repeaters that amplify all frequencies from cell phone carriers. Because interferences can be generated from such boosters, the manufacturers who apply to the FCC must limit their gain (among other things), to 65 dB (for the low LTE 700 MHz bands) to 72 dB (for higher frequencies such as AWS). 
By limiting the system gain, such boosters are only useful when the outdoor signal is relatively high, and need a complex outdoor installation of specific antennas. Carrier specific (or provider specific) signal boosters. These boosters are only designed to boost those frequencies (and signal) that belong to a particular carrier. Usually, such carrier specific boosters do not produce interferences on other carrier's frequencies, and are allowed to have much larger system gains (sometimes 100 dB). In these conditions, such devices boost signal in a larger coverage area, and can still be efficient when outdoor carrier signals are weak, but are only boosting the signal for the carrier it is designed to operate. These new rules by the FCC were implemented on March 1, 2014. Approval in the UK by Ofcom and the UK market In May 2011, Ofcom stated the following: Installation or use of repeater devices (as with any radio equipment) is a criminal offence unless two conditions are satisfied: That the equipment is CE marked, indicating that the manufacturer has declared that it complies with all relevant EU regulatory requirements, including the Radio equipment and Telecommunications Terminal Equipment (R&TTE) Directive; That the use of the equipment is specifically authorised in the UK, either via a licence or by regulations made by Ofcom to exempt the use from licensing. Under WT Act 2006 section 1.15, the wireless act also allows an exemption if the device does not "involve undue interference with wireless telegraphy". This is expected to follow the US-style regulations where a mobile repeater must have protection built in against interference. Ofcom stated that "Repeater devices transmit or re-transmit in the cellular frequency bands. Only the mobile network operators are licensed to use equipment that transmits in these bands. Installation or use of repeater devices by anyone without a licence is a criminal offence under Section 8 of the WT Act 2006." Repeaters operating in rural and less densely populated areas do not pose a quantifiable problem. See also Base Station Subsystem Femtocell – type of cellular repeater Home Node B – a femtocell Coverage noticer Dead zone (cell phone) Waves Repeater References External links Full Text of the FCC Report and Order on Use and Design of Signal Boosters Radio electronics Mobile technology Telecommunications infrastructure
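A minimal numeric sketch (our own; the 80 dB isolation figure is an illustrative assumption) of the stability rule described in the Signal amplifier section above, where the antenna isolation should exceed the repeater gain by a margin of roughly 5–15 dB to avoid loop oscillation:

```python
# Minimal sketch (illustrative values): check the isolation-vs-gain rule for a repeater.
# Rule from the text: isolation should exceed gain by a margin of roughly 5-15 dB.
def max_stable_gain_db(isolation_db, margin_db=15.0):
    """Highest gain (dB) that still leaves the requested oscillation margin."""
    return isolation_db - margin_db

def is_stable(gain_db, isolation_db, margin_db=15.0):
    return isolation_db >= gain_db + margin_db

isolation = 80.0            # dB between donor and service antennas (assumed)
for gain in (60.0, 70.0):
    status = "stable" if is_stable(gain, isolation) else "risk of oscillation"
    print(gain, "dB gain ->", status,
          "(max stable gain:", max_stable_gain_db(isolation), "dB)")
```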
Cellular repeater
[ "Technology", "Engineering" ]
1,827
[ "Radio electronics", "nan" ]
4,277,344
https://en.wikipedia.org/wiki/Valvetrain
A valvetrain is a mechanical system that controls the operation of the intake and exhaust valves in an internal combustion engine. The intake valves control the flow of air/fuel mixture (or air alone for direct-injected engines) into the combustion chamber, while the exhaust valves control the flow of spent exhaust gases out of the combustion chamber once combustion is completed. Layout The valvetrain layout is largely dependent on the location of the camshaft. The common valvetrain configurations for piston engines, in order from oldest to newest, are: Flathead engine: The camshaft and the valves are located in the engine block below the combustion chamber. Overhead valve engine: The camshaft remains in the block, however the valves are located in the cylinder head above the combustion chamber. Overhead camshaft engine: The valves and camshafts are in the cylinder head above the combustion chamber. Components The valvetrain consists of all the components responsible for transferring the rotational movement of the camshaft into the opening and closing of the intake and exhaust valves. Typical components are listed below in order from the crankshaft to the valves. Camshaft The timing and lift profile of the valve opening events are controlled by the camshafts, through use of a carefully shaped lobe on a rotating shaft. The camshaft is driven by the crankshaft and, in the case of a four-stroke engine, rotates at half the speed of the crankshaft. Motion is transferred from the crankshaft to the camshaft most commonly by a rubber timing belt, a metallic timing chain or a set of gears. Pushrod Pushrods are long, slender metal rods that are used in overhead valve engines to transfer motion from the camshaft (located in the engine block) to the valves (located in the cylinder head). The bottom end of a pushrod is fitted with a lifter, upon which the camshaft makes contact. The camshaft lobe moves the lifter upwards, which moves the pushrod. The top end of the pushrod pushes on the rocker arm, which opens the valve. Rocker arm / Finger / Bucket tappet Depending on the design used, the valves are actuated by a rocker arm, finger, or bucket tappet. Overhead valve engines use rocker arms, which are actuated from below indirectly (through the pushrods) by the cam lobes. Overhead camshaft engines use fingers or bucket tappets, which are actuated from above directly by the cam lobes. Valves Most modern engines use poppet valves, although sleeve valves, slide valves and rotary valves have also been used at times. Poppet valves are typically opened by the camshaft lobe or rocker arm, and closed by a coiled spring called a valve spring. Valve float occurs when the valve spring is unable to control the inertia of the valvetrain at high engine speeds (RPM). See also Cam-in-block Camless piston engine References Engine components
Valvetrain
[ "Technology" ]
585
[ "Engine components", "Engines" ]
4,277,518
https://en.wikipedia.org/wiki/Condensing%20boiler
Condensing boilers are water heaters typically used for heating systems that are fueled by gas or oil. When operated in the correct circumstances, a heating system can achieve high efficiency (greater than 90% on the higher heating value) by condensing water vapour found in the exhaust gases in a heat exchanger to preheat the circulating water. This recovers the latent heat of vaporisation, which would otherwise have been wasted. The condensate is sent to a drain. In many countries, the use of condensing boilers is compulsory or encouraged with financial incentives. For the condensation process to work properly, the return temperature of the circulating water must be around 55 °C (131 °F) or below, so condensing boilers are often run at lower circulating-water temperatures than conventional systems, which can require larger pipes and radiators than non-condensing boilers. Nevertheless, even partial condensing is more efficient than a conventional non-condensing boiler. Operational principle In a conventional boiler, fuel is burned and the hot gases produced pass through a heat exchanger where much of their heat is transferred to water, thus raising the water's temperature. One of the hot gases produced in the combustion process is water vapour (steam), which arises from burning the hydrogen content of the fuel. A condensing boiler extracts additional heat from the waste gases by condensing this water vapour to liquid water, thus recovering its latent heat of vaporization. A typical increase of efficiency can be as much as 10–12%. While the effectiveness of the condensing process varies depending on the temperature of the water returning to the boiler, it is always at least as efficient as a non-condensing boiler. The condensate produced is slightly acidic (pH of 3–5), so suitable materials must be used in areas where liquid is present. Aluminium alloys and stainless steel are most commonly used at high temperatures. In low-temperature areas, plastics are most cost effective (e.g., uPVC and polypropylene). The production of condensate also requires the installation of a heat exchanger condensate drainage system. In a typical installation, this is the only difference between a condensing and non-condensing boiler. To economically manufacture a condensing boiler's heat exchanger (and for the appliance to be manageable at installation), the smallest practical size for its output is preferred. This approach has resulted in heat exchangers with high combustion-side resistance, often requiring the use of a combustion fan to move the products through narrow passageways. This has also had the benefit of providing the energy for the flue system as the expelled combustion gases are usually below 100 °C (212 °F) and, as such, have a density close to that of air, with little buoyancy. The combustion fan helps to pump exhaust gas to the outside. Usage Condensing boilers are now largely replacing earlier, conventional designs in powering domestic central heating systems in Europe and, to a lesser degree, in North America. The Netherlands was the first country to adopt them broadly. Efficiency Condensing boiler manufacturers claim that up to 98% thermal efficiency can be achieved, compared to 70%–80% with conventional designs (based on the higher heating value of fuels). Typical models offer efficiencies around 90%, which brings most brands of condensing gas boiler into the highest available categories for energy efficiency. 
In the UK, this is a SEDBUK (Seasonal Efficiency of Domestic Boilers in the UK) Band A efficiency rating, while in North America they typically receive an Eco Logo and/or Energy Star Certification. Boiler performance is based on the efficiency of heat transfer and highly dependent on boiler size/output and emitter size/output. System design and installation are critical. Matching the radiation to the Btu/Hr output of the boiler and consideration of the emitter/radiator design temperatures determines the overall efficiency of the space and domestic water heating system. One reason for an efficiency drop is because the design and/or implementation of the heating system gives return water (heat transfer fluid) temperatures at the boiler of over 55 °C (131 °F), which prevents significant condensation in the heat exchanger. Better education of both installers and owners could be expected to raise efficiency towards the reported laboratory values. Natural Resources Canada also suggests ways to make better use of these boilers, such as combining space and water heating systems. Some boilers (e.g. Potterton) can be switched between two flow temperatures such as 63 °C (145 °F) and 84 °C (183 °F), only the former being "fully condensing." However, boilers are normally installed with higher flow temperature by default because a domestic hot water cylinder is generally heated to 60 °C (140 °F), and this takes too long to achieve with a flow temperature only three degrees higher. Nevertheless, even partial condensing is more efficient than a traditional boiler. Most non-condensing boilers could be forced to condense through simple control changes. Doing so would reduce fuel consumption considerably, but would quickly destroy any mild steel or cast-iron components of a conventional high-temperature boiler due to the corrosive nature of the condensate. For this reason, most condensing boiler heat-exchangers are made from stainless steel or aluminum/silicon alloy. External stainless steel economizers can be retrofitted to non-condensing boilers to allow them to achieve condensing efficiencies. Temperature control valves are used to blend hot supply water into the return to avoid thermal shock or condensation inside of the boiler. The lower the return temperature to the boiler the more likely it will be in condensing mode. If the return temperature is kept below approximately 55 °C (131 °F), the boiler should still be in condensing mode making low temperature applications such as radiant floors and even old cast iron radiators a good match for the technology. Most manufacturers of new domestic condensing boilers produce a basic "fit all" control system that results in the boiler running in condensing mode only on initial heat-up, after which the efficiency drops off. This approach should still exceed that of older models (see the following three documents published by the Building Research Establishment: Information Papers 10-88 and 19-94; General Information Leaflet 74; Digest 339. See also Application Manual AM3 1989: Condensing Boilers by Chartered Institution of Building Services Engineers). By way of contrast Weather compensation systems are designed to adjust the system based on inside, outside, boiler inlet, and boiler outlet temperatures. Control The control of the domestic condensing boiler is crucial to ensuring that it operates in the most economic and fuel efficient way. 
The burners are usually controlled by an embedded system with built-in logic to control the output of the burner to match the load and give best performance. Almost all have modulating burners. These allow the power to be reduced to match the demand. Boilers have a turndown ratio which is the ratio of the maximum power output to the minimum power output for which combustion can be maintained. If the control system determines that the demand falls below the minimum power output, then the boiler will cycle off until the water temperature has fallen, and then will reignite and heat the water. Reliability Condensing boilers are claimed to have a reputation for being less reliable and may also suffer if worked on by installers and plumbers who may not understand their operation. Claims of unreliability have been contradicted by research carried out by the UK-based Building Research Establishment (see Building Research Establishment). In particular, the problem of 'pluming' arose with early installations of condensing boilers, in which a white plume of condensed vapour (as minuscule droplets) becomes visible at the outlet flue. Although unimportant to boiler operation, visible pluming was an aesthetic issue that caused much opposition to condensing boilers. A more significant issue is the slight (pH 3-4) acidity of the condensate liquid. Where this is in direct contact with the boiler's heat exchanger, particularly for thin aluminium sheet, it may give rise to more rapid corrosion than for traditional non-condensing boilers. Older boilers may also have used thick cast heat exchangers, rather than sheet, which had slower time constants for their response but were also resistant, by their sheer mass, to any corrosion. The acidity of the condensate means that only some materials may be used: stainless steel and aluminium are suitable, mild steel, copper or cast iron are not. Poor design or construction standards may have made the heat exchangers of some early condensing boilers less long-lived. Initial testing and annual monitoring of the heat transfer fluid in condensing boilers with aluminium or stainless steel heat exchangers is highly recommended. Maintenance of a slightly alkaline (pH 8 to 9) liquid with anti-corrosion and buffering agents reduces corrosion of the aluminium heat exchanger. Some professionals believe that the condensate produced on the combustion side of the heat exchanger may corrode an aluminium heat exchanger and shorten boiler life. Statistical evidence is not yet available since condensing boilers with aluminium heat exchangers have not been in use long enough. Building Research Establishment The Building Research Establishment, which is the UK's major research body for the building industry, produced a leaflet on domestic condensing boilers. According to the Building Research Establishment: modern condensing boilers are as reliable as standard boilers condensing boilers are no more difficult to service, nor do they require more frequent servicing servicing is not expensive; the only (minor) additional task is to check the correct function of the condensate drain condensing boilers are not difficult to install under all operating conditions, condensing boilers are always more efficient than standard boilers Exhaust The condensate expelled from a condensing boiler is acidic, with a pH between 3 and 4. Condensing boilers require a drainpipe for the condensate produced during operation. 
This consists of a short length of polymer pipe with a vapour trap to prevent exhaust gases from being expelled into the building. The acidic nature of the condensate may be corrosive to cast iron plumbing, waste pipes and concrete floors but poses no health risk to occupants. A neutralizer, typically consisting of a plastic container filled with marble or limestone aggregate or "chips" (alkaline) can be installed to raise the pH to acceptable levels. If a gravity drain is not available, then a small condensate pump must also be installed to lift it to a proper drain. The primary and secondary heat exchangers are constructed of materials that will withstand this acidity, typically aluminum or stainless steel. Since the final exhaust from a condensing boiler has a lower temperature than the exhaust from an atmospheric boiler (38 °C (100 °F) vs. 204 °C (400 °F)), a mechanical fan is always required to expel it, with the additional benefit of allowing the use of low-temperature exhaust piping (typically PVC in domestic applications) without insulation or conventional chimney requirements. Indeed, the use of a conventional masonry chimney or metal flue is specifically prohibited due to the corrosive nature of the flue products, with the notable exception of specially rated stainless steel and aluminum in certain models. The preferred/common vent material for most condensing boilers available in North America is PVC, followed by ABS and CPVC. Polymer venting allows for the added benefit of flexibility of installation location including sidewall venting saving unnecessary penetrations of the roof. Cost Condensing boilers are up to 50% more expensive to buy and install than conventional types in the UK and the US. However, at UK prices the extra cost of installing a condensing instead of conventional boiler should be recovered in around two to five years through lower fuel use (for verification, see the following documents published by the Building Research Establishment: Information Papers 10-88 and 19-94; General Information Leaflet 74; Digest 339; see also Case studies in Application Manual AM3 1989: Condensing Boilers by Chartered Institution of Building Services Engineers), and two to five years at US prices. Exact figures will depend on the efficiency of the original boiler installation, boiler utilisation patterns, the costs associated with the new boiler installation, and how frequently the system is used. The cost of these boilers is dropping as the mass takeup enforced by government takes effect and the manufacturers withdraw older, less efficient models, but production cost is higher than older types as condensing boilers are more complex. The increased complexity of condensing boilers is as follows: Increased size of heat exchanger, or the addition of a second heat exchanger (It is important that the heat exchangers are designed to be resistant to acid attack from the "wet" flue gases.) The necessity of a fan-assisted flue due to the cooler flue gases having less buoyancy, though many non-condensing boilers also have this feature The need to drain the condensate produced With respect to modern boilers, there are no other differences between condensing and non-condensing boilers. Reliability, as well as initial cost and efficiency, affects total cost of ownership. 
One major independent UK firm of plumbers stated in 2005 that it had made thousands of call-outs to mend condensing boilers, and that the greenhouse gas emissions from its vans were probably greater than the savings made by the shift to eco-conscious boilers. However, the same article points out that the Heating and Hotwater Information Council, together with some installers, have found that modern condensing boilers are just as reliable as standard boilers. Phase-out Gallery See also Energy efficiency in British housing Flue-gas condensation OpenTherm References External links European Patent Office Patent for drain free elimination of condensate. Condensing boilers page on the website of the UK National Energy Foundation Comparisons of different boiler systems available in North America on the Natural Resources Canada website Exploded diagram of condensing boiler (Malvern Boilers 30) (pdf file - 71KB) On the design of residential condensing gas boilers Boilers Residential heating appliances
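The two-to-five-year payback quoted in the Cost section follows from a short calculation: fuel use scales inversely with seasonal efficiency, so the annual saving is the old fuel bill multiplied by (1 − old efficiency / new efficiency). The figures in the sketch below are illustrative assumptions, not values taken from the article.

```python
def payback_years(extra_cost, annual_fuel_cost, eff_old, eff_new):
    """Simple payback of a condensing boiler's extra purchase/installation cost.

    Fuel use scales as 1/efficiency, so the annual saving is
    annual_fuel_cost * (1 - eff_old / eff_new).
    """
    annual_saving = annual_fuel_cost * (1.0 - eff_old / eff_new)
    return extra_cost / annual_saving

# Illustrative figures only: 500 currency units of extra installed cost,
# an 800-per-year fuel bill, and a 75% -> 90% seasonal efficiency improvement.
print(f"payback ~ {payback_years(500, 800, 0.75, 0.90):.1f} years")   # ~3.8 years
```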
Condensing boiler
[ "Chemistry" ]
2,916
[ "Boilers", "Pressure vessels" ]
4,277,566
https://en.wikipedia.org/wiki/Milliken%E2%80%93Taylor%20theorem
In mathematics, the Milliken–Taylor theorem in combinatorics is a generalization of both Ramsey's theorem and Hindman's theorem. It is named after Keith Milliken and Alan D. Taylor. Let 𝒫_f(ℕ) denote the set of finite subsets of ℕ, and define a partial order on 𝒫_f(ℕ) by α<β if and only if max α<min β. Given a sequence of integers ⟨a_n⟩ and k > 0, let [FS(⟨a_n⟩)]^k_< = { {Σ_{t∈α_1} a_t, …, Σ_{t∈α_k} a_t} : α_1, …, α_k ∈ 𝒫_f(ℕ) and α_1 < ⋯ < α_k }. Let [S]^k denote the k-element subsets of a set S. The Milliken–Taylor theorem says that for any finite partition [ℕ]^k = C_1 ∪ C_2 ∪ ⋯ ∪ C_r, there exist some i ≤ r and a sequence ⟨a_n⟩ such that [FS(⟨a_n⟩)]^k_< ⊆ C_i. For each ⟨a_n⟩, call [FS(⟨a_n⟩)]^k_< an MT^k set. Then, alternatively, the Milliken–Taylor theorem asserts that the collection of MT^k sets is partition regular for each k. References Ramsey theory Theorems in discrete mathematics
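Because the definition only involves finite sums over blocks of indices, its finite truncations are easy to enumerate by machine. The sketch below is an illustration only (the helper names are invented): it lists the k-element sets of block sums arising from an initial segment a_0, …, a_{m−1} of a sequence, i.e. a finite approximation of the corresponding MT^k set.

```python
from itertools import combinations

def finite_subsets(m):
    """All nonempty subsets of {0, ..., m-1}, as frozensets of indices."""
    idx = range(m)
    return [frozenset(c) for r in range(1, m + 1) for c in combinations(idx, r)]

def mt_k_set(a, k):
    """Finite approximation of the MT^k set generated by the sequence a.

    Returns the k-element sets {sum over alpha_1, ..., sum over alpha_k} where
    alpha_1 < ... < alpha_k are finite index sets with max(alpha_i) < min(alpha_{i+1}).
    """
    subsets = finite_subsets(len(a))
    out = set()

    def extend(chosen, next_min):
        if len(chosen) == k:
            sums = frozenset(sum(a[i] for i in alpha) for alpha in chosen)
            if len(sums) == k:          # keep only genuine k-element sets
                out.add(sums)
            return
        for alpha in subsets:
            if min(alpha) >= next_min:
                extend(chosen + [alpha], max(alpha) + 1)

    extend([], 0)
    return out

# Example: the MT^2 sets generated by the first four powers of 2.
print(sorted(tuple(sorted(s)) for s in mt_k_set([1, 2, 4, 8], 2)))
```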
Milliken–Taylor theorem
[ "Mathematics" ]
166
[ "Discrete mathematics", "Combinatorics", "Theorems in discrete mathematics", "Combinatorics stubs", "Mathematical problems", "Mathematical theorems", "Ramsey theory" ]
4,277,567
https://en.wikipedia.org/wiki/Schwartz%20space
In mathematics, Schwartz space is the function space of all functions whose derivatives are rapidly decreasing. This space has the important property that the Fourier transform is an automorphism on this space. This property enables one, by duality, to define the Fourier transform for elements in the dual space of S(ℝ^n), that is, for tempered distributions. A function in the Schwartz space is sometimes called a Schwartz function. Schwartz space is named after French mathematician Laurent Schwartz. Definition Let ℕ be the set of non-negative integers, and for any n ∈ ℕ, let ℕ^n := ℕ × ⋯ × ℕ be the n-fold Cartesian product. The Schwartz space or space of rapidly decreasing functions on ℝ^n is the function space S(ℝ^n, ℂ) := { f ∈ C^∞(ℝ^n, ℂ) : ‖f‖_{α,β} < ∞ for all α, β ∈ ℕ^n }, where C^∞(ℝ^n, ℂ) is the function space of smooth functions from ℝ^n into ℂ, and ‖f‖_{α,β} := sup_{x∈ℝ^n} |x^α (D^β f)(x)|. Here, sup denotes the supremum, and we used multi-index notation, i.e. x^α := x_1^{α_1} ⋯ x_n^{α_n} and D^β := ∂_1^{β_1} ⋯ ∂_n^{β_n}. To put common language to this definition, one could consider a rapidly decreasing function as essentially a function f(x) such that f(x), f′(x), f′′(x), ... all exist everywhere on ℝ and go to zero as x → ±∞ faster than any reciprocal power of x. In particular, S(ℝ^n, ℂ) is a subspace of the function space C^∞(ℝ^n, ℂ) of smooth functions from ℝ^n into ℂ. Examples of functions in the Schwartz space If α is a multi-index, and a is a positive real number, then x^α e^{−a|x|²} ∈ S(ℝ^n). Any smooth function f with compact support is in S(ℝ^n). This is clear since any derivative of f is continuous and supported in the support of f, so x^α (D^β f)(x) has a maximum in ℝ^n by the extreme value theorem. Because the Schwartz space is a vector space, any polynomial p(x) can be multiplied by a factor e^{−a|x|²} for a real constant a > 0, to give an element of the Schwartz space. In particular, there is an embedding of polynomials into a Schwartz space. Properties Analytic properties From Leibniz's rule, it follows that S(ℝ^n) is also closed under pointwise multiplication: If f, g ∈ S(ℝ^n) then the product fg ∈ S(ℝ^n). In particular, this implies that S(ℝ^n) is an ℝ-algebra. More generally, if f ∈ S(ℝ^n) and g is a bounded smooth function with bounded derivatives of all orders, then fg ∈ S(ℝ^n). The Fourier transform is a linear isomorphism S(ℝ^n) → S(ℝ^n). If f ∈ S(ℝ^n) then f is Lipschitz continuous and hence uniformly continuous on ℝ^n. S(ℝ^n) is a distinguished locally convex Fréchet Schwartz TVS over the complex numbers. Both S(ℝ^n) and its strong dual space are also: complete Hausdorff locally convex spaces, nuclear Montel spaces, ultrabornological spaces, reflexive barrelled Mackey spaces. Relation of Schwartz spaces with other topological vector spaces If 1 ≤ p ≤ ∞, then S(ℝ^n) ⊂ L^p(ℝ^n). If 1 ≤ p < ∞, then S(ℝ^n) is dense in L^p(ℝ^n). The space of all bump functions, C_c^∞(ℝ^n), is included in S(ℝ^n). See also Bump function Schwartz–Bruhat function Nuclear space References Sources Topological vector spaces Smooth functions Fourier analysis Function spaces Schwartz distributions
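The seminorms sup |x^α (D^β f)(x)| can be probed numerically. The sketch below is a rough illustration (finite grid, finite-difference derivatives, helper names invented): for a Gaussian the values stay bounded as the power of x grows, while for the smooth but slowly decaying function 1/(1+x²) they keep growing as the power of x or the size of the grid increases, which is why that function is not a Schwartz function.

```python
import numpy as np

def seminorm(f, alpha, beta, x):
    """Approximate sup_x |x^alpha * d^beta f/dx^beta| on a 1-D grid,
    using repeated finite differences for the derivative."""
    y = f(x)
    for _ in range(beta):
        y = np.gradient(y, x)
    return np.max(np.abs(x**alpha * y))

x = np.linspace(-20, 20, 200001)
gauss = lambda t: np.exp(-t**2)          # rapidly decreasing
lorentz = lambda t: 1.0 / (1.0 + t**2)   # smooth, but only power-law decay

for a in (0, 2, 4, 6):
    print(a, seminorm(gauss, a, 1, x), seminorm(lorentz, a, 1, x))
# The Gaussian column stays bounded as alpha grows; the 1/(1+x^2) column keeps
# growing (and would diverge on a wider grid), so it fails the seminorm bounds.
```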
Schwartz space
[ "Mathematics" ]
533
[ "Topological vector spaces", "Function spaces", "Vector spaces", "Space (mathematics)" ]
4,277,819
https://en.wikipedia.org/wiki/B.%20Hick%20and%20Sons
B. Hick and Sons, subsequently Hick, Hargreaves & Co, was a British engineering company based at the Soho Ironworks in Bolton, England. Benjamin Hick, a partner in Rothwell, Hick and Rothwell, later Rothwell, Hick & Co., set up the company in partnership with two of his sons, John (1815–1894) and Benjamin Jr (1818–1845) in 1833. Locomotives The company's first steam locomotive Soho, named after the works was a goods type, built in 1833 for carrier John Hargreaves. In 1834 an unconventional, gear-driven four-wheeled rail carriage was conceived for Bolton solicitor and banker, Thomas Lever Rushton (1810–1883). The engine was the first 3-cylinder locomotive and its design incorporated turned iron wheel rims with aerodynamic plate discs as an alternative to conventional spokes. The 3-cylinder concept evolved into Hick's experimental horizontal boiler A 2-2-2 locomotive about 1840, adopting the principle features of the vertical boiler engine. The A design appears not to have been put into production. More locomotives were built over the 1830s, some for export to the United States including a Fulton for the Pontchartrain Railroad in 1834, New Orleans and Carrollton for the St. Charles Streetcar Line in New Orleans in 1835 and a second New Orleans for the same line in 1837. A 10 hp stationary engine was supplied to the Carrollton Railroad Company in Jefferson Parish, Louisiana, for ironworking purposes, but damaged by fire in 1838. Two tender locomotives Potomak and Louisa were delivered to the Fredericksburg and Potomac Railroad and a third, Virginia to the Raleigh and Gaston Railroad in North Carolina during 1836. Between 1837 and 1840 the company subcontracted for Edward Bury and Company, supplying engines to the Midland Counties Railway, London and Birmingham Railway, North Union Railway, Manchester and Leeds Railway and indirectly to the Grand Crimean Central Railway via the London and North Western Railway in 1855. Engines were built for the Taff Vale Railway, Edinburgh and Glasgow Railway, Cheshire, Lancashire and Birkenhead Railway, Chester and Birkenhead Railway, Eastern Counties Railway, Liverpool and Manchester Railway, North Midland Railway, Paris and Versailles Railway and Bordeaux Railway. In 1841 the Birmingham and Gloucester Railway successfully used American Norris locomotives on the notorious Lickey Incline and Hick built three similar locomotives for the line. Between 1844 and 1846 the firm built a number of "long boiler" locomotives with haystack fireboxes and in 1848, four s for the North Staffordshire Railway. In the same year, the company built Chester, probably the earliest known prototype of a 6-wheel coupled } goods locomotive. Aerodynamic disc wheel Benjamin Hick's wheel design was used on a number of Great Western Railway engines including what may have been the world's first streamlined locomotive; an experimental prototype, nicknamed Grasshoper, driven by Brunel at , c.1847. The 10 ft disc wheels from GWR locomotive Ajax were lent to convey the statue of the Duke of Wellington to Hyde Park Corner in London. Hick's patent extended the purpose of the design from the locomotive steam-carriage, '...I do not confine myself to this adaptation. Wheels for carts, waggons, coaches, timber carriages, and for many other uses, may be advantageously constructed on this principle. 
The forms, dimensions, nature, and strength of material of the naves, discs, and fellies, as well as the mode of uniting the different parts, may be varied, in order to suit the particular purpose for which the wheels are required, and to the wear and tear to which they are liable'. Examples using wood paneling as streamlining are applied to the 16 ft flywheel and rope races of a Hick Hargreaves and Co. 120hp non-condensing Corliss engine, Caroline installed new at Gurteen's textile manufactorary, Chauntry Mills, Haverhill, Suffolk in 1879. Disc wheels and wheel fairings have been used for armoured cars, aviation, drag racing, Land speed record attempts, Land speed racing, motor racing, motor scooters, motorcycle speedway, wheelchair racing, icetrack cycling, velomobiles and bicycle racing, particularly track cycling, track bikes and time trials. Engineering drawings Hick Hargreaves collection of early locomotive and steam engine drawings represents one of the finest of its kind in the world. The majority were produced by Benjamin Hick senior and John Hick between 1833-1855, they are of significant interest for their technical detail, fine draughtsmanship and artistic merit. The elaborate finish and harmonious colouring extends from the largest drawings for prospective customers to ordinary working drawings and records for the engineer. Works like this influenced the contemporary illustrators of popular science and technology of the time like John Emslie (1813-1875), their aesthetic quality stems from a romantic outlook in which science and poetry were partners. The drawings are held by Bolton Metropolitan Borough Archives and the Transport Trust, University of Surrey. Hick, Hargreaves & Co After the death of Benjamin Hick in 1842, the firm continued as Benjamin Hick & Son under the management of his eldest son, John Hick; his second son, Benjamin Jr left the company after a year of its founding for partnership in a Liverpool company about 1834, possibly George Forrester & Co. In 1840 he filed a patent governor for B. Hick and Son using an Egyptian winged motif, that featured on the front page of Mechanics' Magazine. Hick's third and youngest son William (1820–1844) served as an apprentice millwright, engineer in the company from 1834 and a 'fitter' from 1837, he was listed as an iron founder in 1843 with his eldest brother John, but died the next year. In 1845 John Hick took his brother-in-law John Hargreaves Jr (1800–1874) into partnership followed by the younger brother William Hargreaves (1821–1889) in 1847. John Hargreaves Jr left the firm in April 1850 before buying Silwood Park in Berkshire. The following year B. Hick and Son exhibited engineering models and machinery at The Great Exhibition in Class VI. Manufacturing Machines and Tools, including a 6 horse power crank overhead engine and mill-gear driving Hibbert, Platt and Sons' cotton machinery and a 2 hp high-pressure oscillating engine driving a Ryder forging machine. Both engines were modelled in the Egyptian Style. The company received a Council Medal award for its mill gearing, radial drill mandrils and portable forges. The B. Hick & Son London office was at 1 New Broad Street in the City. One of the Great Exhibition models, a 1:10 scale 1840 double beam engine built in the Egyptian style for John Marshall's Temple Works in Leeds, is displayed at the Science Museum and considered to be the ultimate development of a Watt engine. 
A second model, apparently built by John Hick and probably shown at the Great Exhibition, is the open ended 3-cylinder A 2-2-2 locomotive on display at Bolton Museum. Bolton Museum holds the best collection of Egyptian cotton products outside the British Museum as a result of the company's strong exports, particularly to Egypt. Leeds Industrial Museum houses a Benjamin Hick and Son beam engine in the Egyptian style c.1845, used for hoisting machinery at the London Road warehouse of the Manchester, Sheffield and Lincolnshire Railway. Locomotive building continued until 1855, and in all some ninety to a hundred locomotives were produced; but they were a sideline for the company, which concentrated on marine and stationary engines, of which they made a large number. B. Hick and Son supplied engines for the paddle frigates Dom Afonso by Thomas Royden & Sons and Amazonas by the leading shipbuilder in Liverpool, Thomas Wilson & Co. also builders of the Royal William; the screw propelled Mediterranean steamers, Nile and Orontes and the SS Don Manuel built by Alexander Denny and Brothers of Dumbarton. The Brazilian Navy's Afonso rescued passengers from the Ocean Monarch in 1848 and took part in the Battle of The Tonelero Pass in 1851; the Amazonas participated in the Battle of Riachuelo in 1865. The company made blowing engines for furnaces and smelters, boilers, weighing machines, water wheels and mill machinery. It supplied machinery "on a new and perfectly unique" concept together with iron pillars, roofing and fittings for the steam-driven pulp and paper mill at Woolwich Arsenal in 1856. The mill made cartridge bags at the rate of about 20,000 per hour, sufficient to supply the entire British army and navy. The intention was to manufacture paper for various departments of Her Majesty's service. Steel boilers were first produced in 1863, mostly of the Lancashire type, and more than 200 locomotive boilers were made for torpedo boats into the 1890s. The Phoenix Boiler Works were purchased in 1891 to meet an increase in demand. Bolton Steam Museum hold a 1906 Hick, Hargreaves and Co. Ltd. Lancashire boiler front-plate, previously installed at Halliwell Mills, Bolton. The company introduced the highly efficient Corliss valve gear into the United Kingdom from the United States in about 1864 and was closely identified with it thereafter; William Inglis being responsible for promoting the high speed Corliss engine. In the same year Swiss engineer Robert Lüthy came to the firm from L. and L.R. Bodmer. An early horizontal Corliss type built in 1866, arrived in Australia the following year for Bell's Creek gold mine, Araluan, New South Wales; the engine is now housed at Goulburn Historic Waterworks Museum. A 50 hp Inglis and Spencer improved Corliss girder bed engine built in 1873 (No.303), used to power Gamble's lace factory, Nottingham and an 1879, 120 hp non-condensing Corliss engine with Inglis and Spencer patent double clip trip gear are held at Forncett Industrial Steam Museum and Gurteen's textile manufactorary, Haverhill, Suffolk. About 1881 Hick, Hargreaves received orders for two Corliss engines of 3000 hp, the largest cotton mill engines in the world. Hargreaves and Inglis trip gear was first applied to a large single cylinder 1800 hp Corliss engine at Eagley Mills near Bolton and the company received a Gold Medal for its products at the 1885 International Inventions Exhibition. An 1886 Hick, Hargreaves and Co. 
inverted, vertical single cylinder Corliss engine with Inglis and Spencer trip gear, used to run Ford Ayrton and Co.'s spinning mill, Bentham until 1966 is preserved under glass at Bolton Town Centre. Lüthy was appointed superintendent of hydraulic apparatus in 1870, about August 1883 he went on business to Australia connected with the shipping of frozen meat and to inspect machinery for a large freezing establishment, but died suddenly on 3 July 1884, the day after he returned home. Mill gearing was a speciality including large flywheels for rope drives, one example of 128 tons being 32 ft in diameter and groved for 56 ropes. Turbines and hydraulic machinery were also manufactured. Many of the tools were to suit the specialist work, with travelling cranes to take 15 to 40 tons in weight, a large lathe, side planer, slotting machine, pit planer and a tool for turning four 32 ft rope flywheels simultaneously. The workshops featured an 80ton hydraulic riveting machine. For the ease of shipping and transportation, Soho Iron Works had its own railway system, traversed by sidings of the London North Western Railway (LNWR). Inglis, who lived in Bolton was a neighbour of LNWR's chief mechanical engineer, Francis Webb. The company was renamed Hick, Hargreaves and Company in 1867; John Hick retired from the business in 1868 when he became a member of parliament (MP), leaving William Hargreaves as the sole proprietor. On the death of John Hick's nephew Benjamin Hick in 1882, a "much respected member of the firm", active involvement of the Hick family ceased. William Hargreaves died in 1889 and, under the directorship of his three sons, John Henry, Frances and Percy, the business became a private limited company in 1892. In 1893 the founder's great grandson, also Benjamin Hick started an apprenticeship, followed by his younger brother Geoffrey about 1900. Diversification About 1885 Hick Hargreaves & Co became associated with Sebastian Ziani de Ferranti during the reconstruction of the Grosvenor Gallery and began to manufacture steam engines for power generation including those of Ferranti's Deptford Power Station, the largest power station in the world at the time. In 1908, the company was licensed to build uniflow engines. From 1911, the company began the manufacture of large diesel engines; however, these did not prove successful and were eventually discontinued. Boiler production finished in 1912. During World War I the company was involved in war work, producing 9.2 inch then 6 inch shells for the Ministry of Munitions, mines and a contract with Vickers to produce marine oil engines for submarines, under licence for the Admiralty. In the early hours 26 September 1916, the works were targeted by Zeppelin L 21; a bomb missed, passing instead through the roof of the nearby Holy Trinity Church. The company's recoil gear for the Vickers 18 pounder quick firing gun was so successful that, by war's end, a significant part of the factory was devoted to its production. Civil manufacture was not suspended entirely and in 1916 the firm began making Hick-Bréguet two-stage steam jet air ejectors and high vacuum condensing plant for power generation, including a contract with Yorkshire Electric Power Company. Hick Hargreaves production greatly expanded as centralised power generation was adopted in Great Britain, by the formation of the Central Electricity Board (CEB) in 1926. 
In the search for new markets after the war the firm invested in machinery to produce petrol engines and other car components, entering a contract with Vulcan Motor & Engineering Co of Southport for 1000 20 hp petrol engines, but work discontinued in 1922 when Vulcan became bankrupt, with only 150 completed. Following the arrival of electrical engineer Wyndham D'arcy Madden from Stothert & Pitt in 1919, Hick Hargreaves was re-organised to include a sales department responsible for advertising, supervised by Madden who in succession was appointed Managing Director in 1922, serving until 1963. Trained at Faraday House Engineering College, from outside the Hargreaves family circle and established conventions of the industrial regions, Madden ensured the business was run economically during the difficult times ahead. The readiness to adapt was crucial to success during the interwar period; he realised that marketing the firm's specialities were as important to the design and manufacture of its products. As the steam turbine replaced reciprocating steam engines, the company required a skilled engineer to produce a design of its own; in 1923 former principal assistant to the Chief Turbine Designer of English Electric, George Arrowsmith was appointed as Hick Hargreaves' Chief Turbine Designer; development continued and by 1927 the firm's engine work was principally steam turbines for electricity generating stations, becoming a major supplier to the CEB. Three of the nine turbines produced were supplied to Fraser & Chalmers for installation at Ham Halls power station. Arrowsmith was appointed Chief Engineer and a director of Hick Hargreaves in 1928. A 1923 Hick Hargreaves Co. Ltd. condenser, coupled to an English Electric Company turbogenerator built by Dick, Kerr & Co., set No. 6 in operation at the Back o' th' Bank power station, Bolton until 1979, is displayed at the Museum of Science and Industry, Manchester. During the 1930s, the company acquired the records, drawings, and patterns of four defunct steam engine manufacturers: J & E Wood, John Musgrave & Sons Limited, Galloways Limited and Scott & Hodgson Limited. As a consequence it made a lucrative business out of repairs and the supply of spare parts during the Great Depression. Large stationary steam engines were still used for the many cotton mills in the Bolton area until the collapse of the industry after World War II. 3 and 4-cylinder triple expansion marine steam engines were built during the 1940s, post-war the company expanded its work in electricity generation, again becoming a major supplier to the CEB and branched out into food processing, oil refining and offshore oil equipment production, continuing to supply vacuum equipment to the chemical and petrochemical industries. Between 1946 and 1947 it supplied vacuum pumps to Vickers Armstrongs for the Barnes Wallis designed Stratosphere Chamber at Brooklands, built to investigate high-speed flight at very high altitudes. By the early 1960's Hick Hargreaves established itself in the practical application of nuclear energy, supplying de-aerating equipment for the early atomic power stations at Calder Hall, Chapelcross and Dounreay, and the complete feed heating system, condensing plant and steam dump condensers for Hunterston. 
The company received orders for the ejectors, de-aerators and dump condensers for the prototype advanced gas cooled reactor at Windscale and a commission to design the condensing plants and feed systems for the first 175.000 KW Japanese Atomic Power Station at Tokai Mura. About 1969 the firm's 1930s corporate identity was brought up to date with a logo, while Madden's established and successful marketing of specialities continued; during 1974 Hick Hargreaves promoted its achievements and support of industrial archaeology with an exhibition of B. Hick and Son locomotive drawings, emphasising its response to changing industrial developments since the nineteenth century. In 2000 Hick Hargreaves' products included compressors, blowers, refrigeration equipment, deaerators, vacuum ejectors and liquid ring vacuum pumps. Soho Iron Works Between the 1840s and 1870s, the firm had its own Brass Band, "John Hick's Esq, Band," known as the Soho Iron Works Band with a uniform of "... rich full braided coat, black trousers, with two-inch gold lace down the sides and blue cap with gold band," who would play airs through the streets of Bolton. Ownership changes In 1968, the Hargreaves family sold their shares to Electrical & Industrial Securities Ltd which became part of TI Group, and subsequently Smiths Group. Smiths Group sold Soho Iron works to Sainsbury's and it closed in 2002. Two switchgear panels; the works clock, and a pair of cast iron gateposts with Hick's caduceus logo were preserved by the Northern Mill Engine Society. The 170 year old firm's records were deposited with Bolton library. In 2001, BOC bought the business from Smiths Group and relocated the offices to Wingates Industrial Estate in Westhoughton, and subsequently to Lynstock Way in Lostock, as part of Edwards. Some of the manufacturing equipment was transferred to their lower cost facility in Czechoslovakia. Mills powered by Hick, Hargreaves engines Textile Mill, Chadderton Cavendish Mill, Ashton-under-Lyne Century Mill, Farnworth Pioneer Mill, Radcliffe See also Bradford Colliery James Cudworth Fred Dibnah Helmshore Mills Textile Museum House-built engine Redevelopment of Mumbai mills Thomas Pitfield Wadia Group Waltham Abbey Royal Gunpowder Mills References Bibliography External links Lucien Alphonse Legros - eldest son of Alphonse Legros, entered Hick, Hargreaves works in 1887. The Clyde Built Ships: Empire Ridley 1941 (Ministry of War Transport), HMS Latimer 1943 (Petroleum Warfare Department) Tyne built ships: Empire Grey 1944 (Ministry of War Transport) Tyne built ships: Landing Ship, Tank (LST) 3001 1945 (Royal Navy) Frederick Clover LST (3): 1946 (War Office), 1952 (Atlantic Steam Navigation Company) Tyne built ships: Zarian 1947 (United Africa Company) Edwards Limited Manufacturing companies established in 1833 British companies established in 1833 1833 establishments in England Engineering companies of the United Kingdom Engineering companies of England Engine manufacturers of the United Kingdom Boilermakers Millwrights Hick Steam engine manufacturers Machine tool builders Electrical engineering companies Nuclear technology companies of the United Kingdom Companies based in Bolton History of Bolton Industrial Revolution
B. Hick and Sons
[ "Engineering" ]
4,200
[ "Electrical engineering organizations", "Electrical engineering companies", "Engineering companies" ]
4,277,843
https://en.wikipedia.org/wiki/Improper%20input%20validation
Improper input validation or unchecked user input is a type of vulnerability in computer software that may be used for security exploits. This vulnerability is caused when "[t]he product does not validate or incorrectly validates input that can affect the control flow or data flow of a program." Examples include: Buffer overflow Cross-site scripting Directory traversal Null byte injection SQL injection Uncontrolled format string References Computer security exploits
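As a concrete illustration of one item on this list, the sketch below contrasts an SQL query built by string interpolation (input used without validation) with a parameterized query. It uses Python's standard sqlite3 module; the table, column and variable names are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"   # attacker-controlled string

# Vulnerable: the input is pasted into the SQL text, so it can change the query's logic.
vulnerable = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())        # returns every row, not just alice's

# Safer: a parameterized query treats the input strictly as data.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns [] - no such user exists
```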
Improper input validation
[ "Technology" ]
90
[ "Computer security exploits" ]
4,277,941
https://en.wikipedia.org/wiki/American%20Residential%20Services
American Residential Services (ARS) is a United States network of plumbing and home and commercial heating and air conditioning (HVAC) businesses, operating under the trade name ARS/Rescue Rooter. The trade name came from the acquisition and merging of ARS and Rescue Rooter by their then-parent company ServiceMaster. They have locations in 24 states. The company is based in Memphis, Tennessee. The ARS mascots are Dandy and his sidekick Pronto. History ARS was established in 1975. The name "Rescue Rooter" was trademarked in 1976 by the California-based Rescue Industries Inc. It was a family-owned West Coast plumbing and drain cleaning company. In 1996, ARS was founded to consolidate local and regional HVAC service companies. Shortly after, The Servicemaster Company, based in Downers Grove, Illinois, acquired both Rescue Rooter and ARS, in 1998 and 1999 respectively, and brought them together under the "ARS/Service Express" brand. The company eventually dropped the "Service Express" brand and the parent brand was known primarily as "ARS/Rescue Rooter". In October 2006, ARS/Rescue Rooter was then acquired from ServiceMaster by two private equity firms, Caxton-Iseman Capital and Royal Palm Capital Partners, for $100 million. In May 2014, ARS was acquired by Charlesbank Capital Partners from Caxton-Iseman Capital and Royal Palm Capital Partners. References Further reading External links Official website Rescue Rooter website Heating, ventilation, and air conditioning Companies based in Memphis, Tennessee Plumbing Service companies of the United States Privately held companies based in Tennessee
American Residential Services
[ "Engineering" ]
321
[ "Construction", "Plumbing" ]
4,278,182
https://en.wikipedia.org/wiki/Monoidal%20functor
In category theory, monoidal functors are functors between monoidal categories which preserve the monoidal structure. More specifically, a monoidal functor between two monoidal categories consists of a functor F between the categories, along with two coherence maps—a natural transformation φ_{A,B} : FA ⊗ FB → F(A ⊗ B) and a morphism ε : 1 → F(1) that preserve monoidal multiplication and unit, respectively. Mathematicians require these coherence maps to satisfy additional properties depending on how strictly they want to preserve the monoidal structure; each of these properties gives rise to a slightly different definition of monoidal functors: The coherence maps of lax monoidal functors satisfy no additional properties; they are not necessarily invertible. The coherence maps of strong monoidal functors are invertible. The coherence maps of strict monoidal functors are identity maps. Although we distinguish between these different definitions here, authors may call any one of these simply monoidal functors. Definition Let (C, ⊗, 1_C) and (D, ∙, 1_D) be monoidal categories. A lax monoidal functor from C to D (which may also just be called a monoidal functor) consists of a functor F : C → D together with a natural transformation φ_{A,B} : FA ∙ FB → F(A ⊗ B) between functors C × C → D and a morphism ε : 1_D → F(1_C), called the coherence maps or structure morphisms, which are such that for every three objects A, B and C of C the associativity diagram and the two unit diagrams commute in the category D (these diagrams are written out as equations below). Above, the various natural transformations denoted using α, λ, ρ are parts of the monoidal structure on C and D. Variants The dual of a monoidal functor is a comonoidal functor; it is a monoidal functor whose coherence maps are reversed. Comonoidal functors may also be called opmonoidal, colax monoidal, or oplax monoidal functors. A strong monoidal functor is a monoidal functor whose coherence maps are invertible. A strict monoidal functor is a monoidal functor whose coherence maps are identities. A braided monoidal functor is a monoidal functor between braided monoidal categories (with braidings denoted γ) such that φ_{B,A} ∘ γ_{FA,FB} = F(γ_{A,B}) ∘ φ_{A,B} for every pair of objects A, B in C. A symmetric monoidal functor is a braided monoidal functor whose domain and codomain are symmetric monoidal categories. Examples The underlying functor U : (Ab, ⊗_ℤ, ℤ) → (Set, ×, {∗}) from the category of abelian groups to the category of sets. In this case, the map φ_{A,B} : U(A) × U(B) → U(A ⊗ B) sends (a, b) to a ⊗ b; the map ε : {∗} → U(ℤ) sends ∗ to 1. If R is a (commutative) ring, then the free functor Set → R-Mod extends to a strongly monoidal functor (and so does Ab → R-Mod if R is commutative). If f : R → S is a homomorphism of commutative rings, then the restriction functor S-Mod → R-Mod is monoidal and the induction functor R-Mod → S-Mod is strongly monoidal. An important example of a symmetric monoidal functor is the mathematical model of topological quantum field theory. Let nCob be the category of cobordisms of n-1,n-dimensional manifolds with tensor product given by disjoint union, and unit the empty manifold. A topological quantum field theory in dimension n is a symmetric monoidal functor F : (nCob, ⊔, ∅) → (Vect_k, ⊗, k). The homology functor H is monoidal as H(C_1) ⊗ H(C_2) → H(C_1 ⊗ C_2) via the map [x_1] ⊗ [x_2] ↦ [x_1 ⊗ x_2]. Alternate notions If C and D are closed monoidal categories with internal hom-functors ⇒_C, ⇒_D (we drop the subscripts for readability), there is an alternative formulation ψAB : F(A ⇒ B) → FA ⇒ FB of φAB commonly used in functional programming. The relation between ψAB and φAB is expressed through a pair of commutative diagrams. Properties If (M, μ, η) is a monoid object in C, then (FM, Fμ ∘ φ_{M,M}, Fη ∘ ε) is a monoid object in D. Monoidal functors and adjunctions Suppose that a functor F : C → D is left adjoint to a monoidal functor (G, n, e) : D → C. Then F has a comonoidal structure induced by (G, n, e), defined by taking m_{A,B} : F(A ⊗ B) → FA ∙ FB to be the adjunct of n_{FA,FB} ∘ (η_A ⊗ η_B) : A ⊗ B → G(FA ∙ FB) and ι : F(1_C) → 1_D to be the adjunct of e : 1_C → G(1_D), where η denotes the unit of the adjunction. 
If the induced structure on F is strong, then the unit and counit of the adjunction are monoidal natural transformations, and the adjunction is said to be a monoidal adjunction; conversely, the left adjoint of a monoidal adjunction is always a strong monoidal functor. Similarly, a right adjoint to a comonoidal functor is monoidal, and the right adjoint of a comonoidal adjunction is a strong monoidal functor. See also Monoidal natural transformation Inline citations References Monoidal categories
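For reference, the commuting diagrams required of a lax monoidal functor (F, φ, ε) from (𝒞, ⊗, 1_𝒞) to (𝒟, ∙, 1_𝒟) can be written as equations. The block below is one standard presentation (the notation α, λ, ρ for the associators and unitors follows the text above; superscripts indicate which category they belong to).

```latex
% Associativity and unit coherence for a lax monoidal functor (F, \phi, \epsilon),
% written as equations; a standard presentation, notation as in the text above.
\begin{align*}
F(\alpha^{\mathcal{C}}_{A,B,C}) \circ \phi_{A\otimes B,\,C} \circ (\phi_{A,B} \bullet \mathrm{id}_{FC})
  &= \phi_{A,\,B\otimes C} \circ (\mathrm{id}_{FA} \bullet \phi_{B,C}) \circ \alpha^{\mathcal{D}}_{FA,FB,FC},\\
F(\lambda^{\mathcal{C}}_{A}) \circ \phi_{1_{\mathcal{C}},A} \circ (\epsilon \bullet \mathrm{id}_{FA})
  &= \lambda^{\mathcal{D}}_{FA},\\
F(\rho^{\mathcal{C}}_{A}) \circ \phi_{A,1_{\mathcal{C}}} \circ (\mathrm{id}_{FA} \bullet \epsilon)
  &= \rho^{\mathcal{D}}_{FA}.
\end{align*}
```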
Monoidal functor
[ "Mathematics" ]
882
[ "Monoidal categories", "Mathematical structures", "Category theory" ]
4,278,245
https://en.wikipedia.org/wiki/Deformation%20quantization
In mathematics and physics, deformation quantization roughly amounts to finding a (quantum) algebra whose classical limit is a given (classical) algebra such as a Lie algebra or a Poisson algebra. In physics Intuitively, a deformation of a mathematical object is a family of the same kind of objects that depend on some parameter(s). Here, it provides rules for how to deform the "classical" commutative algebra of observables to a quantum non-commutative algebra of observables. The basic setup in deformation theory is to start with an algebraic structure (say a Lie algebra) and ask: Does there exist a one or more parameter(s) family of similar structures, such that for an initial value of the parameter(s) one has the same structure (Lie algebra) one started with? (The oldest illustration of this may be the realization of Eratosthenes in the ancient world that a flat Earth was deformable to a spherical Earth, with deformation parameter 1/R⊕.) E.g., one may define a noncommutative torus as a deformation quantization through a ★-product to implicitly address all convergence subtleties (usually not addressed in formal deformation quantization). Insofar as the algebra of functions on a space determines the geometry of that space, the study of the star product leads to the study of a non-commutative geometry deformation of that space. In the context of the above flat phase-space example, the star product (Moyal product, actually introduced by Groenewold in 1946), ★ħ, of a pair of functions f₁, f₂ ∈ C^∞(ℝ²), is specified by Φ[f₁ ★ f₂] = Φ[f₁] Φ[f₂], where Φ is the Wigner–Weyl transform. The star product is not commutative in general, but goes over to the ordinary commutative product of functions in the limit of ħ → 0. As such, it is said to define a deformation of the commutative algebra of C^∞(ℝ²). For the Weyl-map example above, the ★-product may be written in terms of the Poisson bracket as f₁ ★ f₂ = Σ_{n=0}^∞ (1/n!) (iħ/2)^n Π^n(f₁, f₂). Here, Π is the Poisson bivector, an operator defined such that its powers are Π^0(f₁, f₂) = f₁f₂ and Π^1(f₁, f₂) = {f₁, f₂}, where {f1, f2} is the Poisson bracket. More generally, Π^n(f₁, f₂) = Σ_{k=0}^n (−1)^k C(n, k) (∂_x^{n−k} ∂_p^k f₁)(∂_x^k ∂_p^{n−k} f₂), where C(n, k) is the binomial coefficient. Thus, for example, Gaussians compose hyperbolically under the ★-product. These formulas are predicated on coordinates in which the Poisson bivector is constant (plain flat Poisson brackets). For the general formula on arbitrary Poisson manifolds, cf. the Kontsevich quantization formula. Antisymmetrization of this ★-product yields the Moyal bracket, the proper quantum deformation of the Poisson bracket, and the phase-space isomorph (Wigner transform) of the quantum commutator in the more usual Hilbert-space formulation of quantum mechanics. As such, it provides the cornerstone of the dynamical equations of observables in this phase-space formulation. There results a complete phase space formulation of quantum mechanics, completely equivalent to the Hilbert-space operator representation, with star-multiplications paralleling operator multiplications isomorphically. Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables such as the above with the Wigner quasi-probability distribution effectively serving as a measure. Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the above Weyl map facilitates recognition of quantum mechanics as a deformation (generalization, cf. correspondence principle) of classical mechanics, with deformation parameter ħ. 
(Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the deformation of Newtonian gravity into General Relativity, with deformation parameter Schwarzschild-radius/characteristic-dimension. Conversely, group contraction leads to the vanishing-parameter undeformed theories—classical limits.) Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle. See also Deligne's conjecture on Hochschild cohomology Poisson manifold Formality theorem; for now, see https://mathoverflow.net/questions/32889/a-few-questions-about-kontsevich-formality Kontsevich quantization formula References Further reading https://ncatlab.org/nlab/show/deformation+quantization https://ncatlab.org/nlab/show/formal+deformation+quantization Mathematical quantization Mathematical physics
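Because the series expansion of the ★-product terminates when its arguments are polynomials, the canonical commutation relation can be checked symbolically in a few lines. The sketch below is an illustration only (the helper names are invented and the truncation order is chosen by hand); it implements the Π-power expansion with SymPy and verifies that x ★ p − p ★ x = iħ.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def d(f, var, n):
    """n-fold partial derivative of f with respect to var."""
    for _ in range(n):
        f = sp.diff(f, var)
    return f

def pi_n(f, g, n):
    """n-th power of the Poisson bivector in flat (x, p) coordinates:
    pi_n(f, g, 0) = f*g and pi_n(f, g, 1) = {f, g}, the Poisson bracket."""
    return sum((-1)**k * sp.binomial(n, k)
               * d(d(f, x, n - k), p, k) * d(d(g, x, k), p, n - k)
               for k in range(n + 1))

def star(f, g, order=4):
    """Moyal star product, truncated at the given order in hbar
    (exact when f and g are polynomials of low enough degree)."""
    return sp.expand(sum((sp.I * hbar / 2)**n / sp.factorial(n) * pi_n(f, g, n)
                         for n in range(order + 1)))

print(sp.simplify(star(x, p) - star(p, x)))   # star-commutator of x and p: I*hbar
print(star(x**2, p**2))                        # x**2*p**2 + 2*I*hbar*x*p - hbar**2/2
```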
Deformation quantization
[ "Physics", "Mathematics" ]
1,006
[ "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Mathematical quantization", "Mathematical physics" ]
4,278,455
https://en.wikipedia.org/wiki/Vertex%20pipeline
The function of the vertex pipeline in any GPU is to take geometry data (usually supplied as vector points), work with it if needed with either fixed function processes (earlier DirectX), or a vertex shader program (later DirectX), and create all of the 3D data points in a scene to a 2D plane for display on a computer monitor. It is possible to eliminate unneeded data from going through the rendering pipeline to cut out extraneous work (called view volume clipping and backface culling). After the vertex engine is done working with the geometry, all the 2D calculated data is sent to the pixel engine for further processing such as texturing and fragment shading. As of DirectX 9c, the vertex processor is able to do the following by programming the vertex processing under the Direct X API: Displacement mapping Geometry blending Higher-order primitives Point sprites Matrix stacks External links Anandtech Article 3D computer graphics Graphics standards
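The sequence of transformations the vertex stage applies (object space to clip space, a view-volume test, perspective divide, viewport mapping) can be sketched in a few lines of NumPy. This is an illustration only: the function names, camera placement and projection values are made up for the example, and real pipelines clip primitives rather than discarding single vertices.

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (column-vector convention)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0,  0,                               0],
        [0,          f,  0,                               0],
        [0,          0,  (far + near) / (near - far), 2 * far * near / (near - far)],
        [0,          0, -1,                               0],
    ])

def vertex_stage(position, model_view, projection, width, height):
    """Transform one vertex to window coordinates, as a fixed-function pipeline would."""
    clip = projection @ model_view @ np.append(position, 1.0)   # to clip space
    if not np.all(np.abs(clip[:3]) <= abs(clip[3])):             # view-volume test
        return None                                              # vertex lies outside
    ndc = clip[:3] / clip[3]                                     # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width                             # viewport transform
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height
    return x, y, ndc[2]                                          # screen x, y and depth

mv = np.eye(4); mv[2, 3] = -5.0                                  # camera 5 units back
proj = perspective(60.0, 16 / 9, 0.1, 100.0)
print(vertex_stage(np.array([1.0, 0.5, 0.0]), mv, proj, 1920, 1080))
```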
Vertex pipeline
[ "Technology" ]
195
[ "Computer standards", "Graphics standards" ]
4,279,049
https://en.wikipedia.org/wiki/Organophosphorus%20chemistry
Organophosphorus chemistry is the scientific study of the synthesis and properties of organophosphorus compounds, which are organic compounds containing phosphorus. They are used primarily in pest control as an alternative to chlorinated hydrocarbons that persist in the environment. Some organophosphorus compounds are highly effective insecticides, although some are extremely toxic to humans, including sarin and VX nerve agents. Phosphorus, like nitrogen, is in group 15 of the periodic table, and thus phosphorus compounds and nitrogen compounds have many similar properties. The definition of organophosphorus compounds is variable, which can lead to confusion. In industrial and environmental chemistry, an organophosphorus compound need contain only an organic substituent, but need not have a direct phosphorus-carbon (P-C) bond. Thus a large proportion of pesticides (e.g., malathion), are often included in this class of compounds. Phosphorus can adopt a variety of oxidation states, and it is general to classify organophosphorus compounds based on their being derivatives of phosphorus(V) vs phosphorus(III), which are the predominant classes of compounds. In a descriptive but only intermittently used nomenclature, phosphorus compounds are identified by their coordination number σ and their valency λ. In this system, a phosphine is a σ3λ3 compound. Organophosphorus(V) compounds, main categories Phosphate esters and amides Phosphate esters have the general structure P(=O)(OR)3 feature P(V). Such species are of technological importance as flame retardant agents, and plasticizers. Lacking a P−C bond, these compounds are in the technical sense not organophosphorus compounds but esters of phosphoric acid. Many derivatives are found in nature, such as phosphatidylcholine. Phosphate ester are synthesized by alcoholysis of phosphorus oxychloride. A variety of mixed amido-alkoxo derivatives are known, one medically significant example being the anti-cancer drug cyclophosphamide. Also derivatives containing the thiophosphoryl group (P=S) include the pesticide malathion. The organophosphates prepared on the largest scale are the zinc dithiophosphates, as additives for motor oil. Several million kilograms of this coordination complex are produced annually by the reaction of phosphorus pentasulfide with alcohols. Phosphoryl thioates are thermodynamically much stabler than thiophosphates, which can rearrange at high temperature or with a catalytic alkylant to the former: SP(OR)3 → OP(OR)2SR In the environment, all these phosphorus(V) compounds break down via hydrolysis to eventually afford phosphate and the organic alcohol or amine from which they are derived. Phosphonic and phosphinic acids and their esters Phosphonates are esters of phosphonic acid and have the general formula RP(=O)(OR')2. Phosphonates have many technical applications, a well-known member being glyphosate, better known as Roundup. With the formula (HO)2P(O)CH2NHCH2CO2H, this derivative of glycine is one of the most widely used herbicides. Bisphosphonates are a class of drugs to treat osteoporosis. The nerve gas agent sarin, containing both C–P and F–P bonds, is a phosphonate. Phosphinates feature two P–C bonds, with the general formula R2P(=O)(OR'). A commercially significant member is the herbicide glufosinate. Similar to glyphosate mentioned above, it has the structure CH3P(O)(OH)CH2CH2CH(NH2)CO2H. The Michaelis–Arbuzov reaction is the main method for the synthesis of these compounds. 
For example, dimethylmethylphosphonate (see figure above) arises from the rearrangement of trimethylphosphite, which is catalyzed by methyl iodide. In the Horner–Wadsworth–Emmons reaction and the Seyferth–Gilbert homologation, phosphonates are used in reactions with carbonyl compounds. The Kabachnik–Fields reaction is a method for the preparation of aminophosphonates. These compounds contain a very inert bond between phosphorus and carbon. Consequently, they hydrolyze to give phosphonic and phosphinic acid derivatives, but not phosphate. Phosphine oxides, imides, and chalcogenides Phosphine oxides (designation σ4λ5) have the general structure R3P=O with formal oxidation state V. Phosphine oxides form hydrogen bonds and some are therefore soluble in water. The P=O bond is very polar with a dipole moment of 4.51 D for triphenylphosphine oxide. Compounds related to phosphine oxides include phosphine imides (R3PNR') and related chalcogenides (R3PE, where E = S, Se, Te). These compounds are some of the most thermally stable organophosphorus compounds. In general, they are less basic than the corresponding phosphine oxides, which can adduce to thiophosphoryl halides: R3PO + X3PS → R3P+–O–P+X2–S− + X− Some phosphorus sulfides can undergo a reverse Arbuzov rearrangement to a dialkylthiophosphinate ester. Phosphonium salts and phosphoranes Compounds with the formula [PR4+]X− comprise the phosphonium salts. These species are tetrahedral phosphorus(V) compounds. From the commercial perspective, the most important member is tetrakis(hydroxymethyl)phosphonium chloride, [P(CH2OH)4]Cl, which is used as a fire retardant in textiles. Approximately 2M kg are produced annually of the chloride and the related sulfate. They are generated by the reaction of phosphine with formaldehyde in the presence of the mineral acid: PH3 + HX + 4 CH2O → [P(CH2OH)4+]X− A variety of phosphonium salts can be prepared by alkylation and arylation of organophosphines: PR3 + R'X → [PR3R'+]X− The methylation of triphenylphosphine is the first step in the preparation of the Wittig reagent. The parent phosphorane (σ5λ5) is PH5, which is unknown. Related compounds containing both halide and organic substituents on phosphorus are fairly common. Those with five organic substituents are rare, although P(C6H5)5 is known, being derived from P(C6H5)4+ by reaction with phenyllithium. Phosphorus ylides are unsaturated phosphoranes, known as Wittig reagents, e.g. CH2P(C6H5)3. These compounds feature tetrahedral phosphorus(V) and are considered relatives of phosphine oxides. They also are derived from phosphonium salts, but by deprotonation not alkylation. Organophosphorus(III) compounds, main categories Phosphites, phosphonites, and phosphinites Phosphites, sometimes called phosphite esters, have the general structure P(OR)3 with oxidation state +3. Such species arise from the alcoholysis of phosphorus trichloride: PCl3 + 3 ROH → P(OR)3 + 3 HCl The reaction is general, thus a vast number of such species are known. Phosphites are employed in the Perkow reaction and the Michaelis–Arbuzov reaction. They also serve as ligands in organometallic chemistry. Intermediate between phosphites and phosphines are phosphonites (P(OR)2R') and phosphinite (P(OR)R'2). Such species arise via alcoholysis reactions of the corresponding phosphonous and phosphinous chlorides ((PCl2R') and (PClR'2) , respectively). The latter are produced by reaction of a phosphorus trichloride with a poor metal-alkyl complex, e.g. 
organomercury, organolead, or a mixed lithium-organoaluminum compound. Phosphines The parent compound of the phosphines is PH3, called phosphine in the US and British Commonwealth, but phosphane elsewhere. Replacement of one or more hydrogen centers by an organic substituents (alkyl, aryl), gives PH3−xRx, an organophosphine, generally referred to as phosphines. From the commercial perspective, the most important phosphine is triphenylphosphine, several million kilograms being produced annually. It is prepared from the reaction of chlorobenzene, PCl3, and sodium. Phosphines of a more specialized nature are usually prepared by other routes. Phosphorus halides undergo nucleophilic displacement by organometallic reagents such as Grignard reagents. Organophosphines are nucleophiles and ligands. Two major applications are as reagents in the Wittig reaction and as supporting phosphine ligands in homogeneous catalysis. Their nucleophilicity is evidenced by their reactions with alkyl halides to give phosphonium salts. Phosphines are nucleophilic catalysts in organic synthesis, e.g. the Rauhut–Currier reaction and Baylis-Hillman reaction. Phosphines are reducing agents, as illustrated in the Staudinger reduction for the conversion of organic azides to amines and in the Mitsunobu reaction for converting alcohols into esters. In these processes, the phosphine is oxidized to phosphorus(V). Phosphines have also been found to reduce activated carbonyl groups, for instance the reduction of an α-keto ester to an α-hydroxy ester. A few halophosphines are known, although phosphorus' strong nucleophilicity predisposes them to decomposition, and dimethylphosphinyl fluoride spontaneously disproportionates to dimethylphosphine trifluoride and tetramethylbiphosphine. One common synthesis adds halogens to tetramethylbiphosphine disulfide. Alternatively alkylation of phosphorus trichloride gives a halophosphonium cation, which metals reduce to halophosphines. Phosphaalkenes and phosphaalkynes Compounds with carbon phosphorus(III) multiple bonds are called phosphaalkenes (R2C=PR) and phosphaalkynes (RC≡P). They are similar in structure, but not in reactivity, to imines (R2C=NR) and nitriles (RC≡N), respectively. In the compound phosphorine, one carbon atom in benzene is replaced by phosphorus. Species of this type are relatively rare but for that reason are of interest to researchers. A general method for the synthesis of phosphaalkenes is by 1,2-elimination of suitable precursors, initiated thermally or by base such as DBU, DABCO, or triethylamine: Thermolysis of Me2PH generates CH2=PMe, an unstable species in the condensed phase. Organophosphorus(0), (I), and (II) compounds Compounds where phosphorus exists in a formal oxidation state of less than III are uncommon, but examples are known for each class. Organophosphorus(0) species are debatably illustrated by the carbene adducts, [P(NHC)]2, where NHC is an N-heterocyclic carbene. With the formulae (RP)n and (R2P)2, respectively, compounds of phosphorus(I) and (II) are generated by reduction of the related organophosphorus(III) chlorides: 5 PhPCl2 + 5 Mg → (PhP)5 + 5 MgCl2 2 Ph2PCl + Mg → Ph2P-PPh2 + MgCl2 Diphosphenes, with the formula R2P2, formally contain phosphorus-phosphorus double bonds. These phosphorus(I) species are rare but are stable provided that the organic substituents are large enough to prevent catenation. Bulky substituents also stabilize phosphorus radicals. Many mixed-valence compounds are known, e.g. the cage P7(CH3)3. 
See also Activity-based proteomics—A branch of biochemistry that often relies on organophosphorus probes to interrogate enzyme activities Bihar school meal poisoning incident Organophosphates Organophosphites Organothiophosphates References External links Organophosphorus chemistry at users.ox.ac.uk Organophosphorus chemistry at www.chem.wisc.edu NMR predictor for organophosphorus compound chemical shifts from Alan Brisdon's Research Group Functional groups
Organophosphorus chemistry
[ "Chemistry" ]
2,904
[ "Organophosphorus compounds", "Organic compounds", "Functional groups" ]
4,279,149
https://en.wikipedia.org/wiki/Quantum%20mysticism
Quantum mysticism, sometimes referred to pejoratively as quantum quackery or quantum woo, is a set of metaphysical beliefs and associated practices that seek to relate spirituality or mystical worldviews to the ideas of quantum mechanics and its interpretations. Quantum mysticism is considered pseudoscience and quackery by quantum mechanics experts. Before the 1970s the term was usually used in reference to the von Neumann–Wigner interpretation, but was later more closely associated with the purportedly pseudoscientific views espoused by New Age thinkers such as Fritjof Capra and other members of the Fundamental Fysiks Group, who were influential in popularizing the modern form of quantum mysticism. History Many early quantum physicists held some interest in traditionally Eastern metaphysics. Physicists Werner Heisenberg and Erwin Schrödinger, two of the main pioneers of quantum mechanics in the 1920s, were interested in Eastern mysticism, but are not known to have directly associated one with the other. In fact, both endorsed the Copenhagen interpretation of quantum mechanics. Olav Hammer said that "Schrödinger’s studies of Hindu mysticism never compelled him to pursue the same course as quantum metaphysicists such as David Bohm or Fritjof Capra." Schrödinger biographer Walter J. Moore said that Schrödinger's two interests of quantum physics and Hindu mysticism were "strangely dissociated". In his 1961 paper "Remarks on the mind–body question", Eugene Wigner suggested that a conscious observer played a fundamental role in quantum mechanics, a concept which is part of the von Neumann–Wigner interpretation. While his paper served as inspiration for later mystical works by others, Wigner's ideas were primarily philosophical and were not considered overtly pseudoscientific like the mysticism that followed. By the late 1970s, Wigner had shifted his position and rejected the role of consciousness in quantum mechanics. Harvard historian Juan Miguel Marin suggests that "consciousness [was] introduced hypothetically at the birth of quantum physics, [and] the term 'mystical' was also used by its founders, to argue in favor of and against such an introduction." Mysticism was argued against by Albert Einstein. Einstein's theories have often been falsely believed to support mystical interpretations of quantum theory. Einstein said, with regard to quantum mysticism, "No physicist believes that. Otherwise he wouldn't be a physicist." He debates several arguments about the approval of mysticism, even suggesting Bohr and Pauli to be in support of and to hold a positive belief in mysticism which he believes to be false. Niels Bohr denied quantum mysticism and had rejected the hypothesis that quantum theory requires a conscious observer as early as 1927, despite having been "sympathetic towards the hypothesis that understanding consciousness might require an extension of quantum theory to accommodate laws other than those of physics". In New Age thought In the early 1970s New Age culture began to incorporate ideas from quantum physics, beginning with books by Arthur Koestler, Lawrence LeShan and others which suggested that purported parapsychological phenomena could be explained by quantum mechanics. In this decade, the Fundamental Fysiks Group emerged. This group of physicists embraced quantum mysticism, parapsychology, Transcendental Meditation, and various New Age and Eastern mystical practices. 
Inspired in part by Wigner's exploration of the von Neumann–Wigner interpretation, Fritjof Capra, a member of the Fundamental Fysiks Group, wrote The Tao of Physics: An Exploration of the Parallels Between Modern Physics and Eastern Mysticism (1975), which espoused New Age quantum physics; the book was popular among the non-scientific public. In 1979, Gary Zukav, a non-scientist and "the most successful of Capra's followers", published The Dancing Wu Li Masters. The Fundamental Fysiks Group and Capra's book are said to be major influences for the rise of quantum mysticism as a pseudoscientific interpretation of quantum mechanics. Modern usage and examples In contrast to the mysticism of the early 20th century, today quantum mysticism typically refers to New Age beliefs that combine ancient mysticism with the language of quantum mechanics. Called a pseudoscience and a "hijacking" of quantum physics, it draws upon "coincidental similarities of language rather than genuine connections" to quantum mechanics. Physicist Murray Gell-Mann coined the phrase "quantum flapdoodle" to refer to the misuse and misapplication of quantum physics to other topics. An example of such use is New Age guru Deepak Chopra's "quantum theory" that aging is caused by the mind, expounded in his books Quantum Healing (1989) and Ageless Body, Timeless Mind (1993). In 1998, Chopra was awarded the parody Ig Nobel Prize in the physics category for "his unique interpretation of quantum physics as it applies to life, liberty, and the pursuit of economic happiness". In 2012, Stuart Hameroff and Chopra proposed that the "quantum soul" could exist "apart from the body" and "in space-time geometry, outside the brain, distributed nonlocally". The 2004 film What the Bleep Do We Know!? dealt with a range of New Age ideas in relation to physics. It was produced by the Ramtha School of Enlightenment, founded by J.Z. Knight, a channeler who said that her teachings were based on a discourse with a 35,000-year-old disembodied entity named Ramtha. Featuring Fundamental Fysiks Group member Fred Alan Wolf, the film misused some aspects of quantum mechanics—including the Heisenberg uncertainty principle and the observer effect—as well as biology and medicine. Numerous critics dismissed the film for its use of pseudoscience. See also Buddhism and science Psi (parapsychology) Quantum pseudo-telepathy Quantum suicide and immortality Schrödinger's cat in popular culture Synchronicity Notes Further reading Publications relating to quantum mysticism Criticism of quantum mysticism – criticism from both scientific and mystical point of view – an anti-mystical point-of-view External links Mysticism New Age and science Pseudoscience
Quantum mysticism
[ "Physics" ]
1,277
[ "Quantum mechanics", "Quantum mysticism" ]
4,279,208
https://en.wikipedia.org/wiki/Peer-to-peer%20file%20sharing
Peer-to-peer file sharing is the distribution and sharing of digital media using peer-to-peer (P2P) networking technology. P2P file sharing allows users to access media files such as books, music, movies, and games using a P2P software program that searches for other connected computers on a P2P network to locate the desired content. The nodes (peers) of such networks are end-user computers and distribution servers (not required). The early days of file-sharing were done predominantly by client-server transfers from web pages, FTP and IRC before Napster popularised a Windows application that allowed users to both upload and download with a freemium style service. Record companies and artists called for its shutdown and FBI raids followed. Napster had been incredibly popular at its peak, spawning a grass-roots movement following from the mixtape scene of the 80's and left a significant gap in music availability with its followers. After much discussion on forums and in chat-rooms, it was decided that Napster had been vulnerable due to its reliance on centralised servers and their physical location and thus competing groups raced to build a decentralised peer-to-peer system. Peer-to-peer file sharing technology has evolved through several design stages from the early networks like Gnutella, which popularized the technology in several iterations that used various front ends such as Kazaa, Limewire and WinMX before Edonkey then on to later models like the BitTorrent protocol. Microsoft uses it for Update distribution (Windows 10) and online video games use it as their content distribution network for downloading large amounts of data without incurring the dramatic costs for bandwidth inherent when providing just a single source. Several factors contributed to the widespread adoption and facilitation of peer-to-peer file sharing. These included increasing Internet bandwidth, the widespread digitization of physical media, and the increasing capabilities of residential personal computers. Users are able to transfer one or more files from one computer to another across the Internet through various file transfer systems and other file-sharing networks. History The central index server indexed the users and their shared content. When someone searched for a file, the server searched all available copies of that file and presented them to the user. The files would be transferred directly between private computers (peers/nodes). A limitation was that only music files could be shared. Because this process occurred on a central server, however, Napster was held liable for copyright infringement and shut down in July 2001. It later reopened as a pay service. After Napster was shut down, peer-to-peer services were invented such as Gnutella and Kazaa. These services also allowed users to download files other than music, such as movies and games. Technology evolution Napster and eDonkey2000 both used a central server-based model. These systems relied on the operation of the respective central servers, and thus were susceptible to centralized shutdown. Their demise led to the rise of networks like Limewire, Kazaa, Morpheus, Gnutella, and Gnutella2, which are able to operate without any central servers, eliminating the central vulnerability by connecting users remotely to each other. However, these networks still relied on specific, centrally distributed client programs, so they could be crippled by taking legal action against a sufficiently large number of publishers of the client programs. 
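The centralized-index architecture described above can be sketched in a few lines of Python. This is only an illustration of the general idea of a single searchable index combined with direct peer-to-peer transfers; the class, method names, and peer addresses below are invented for the example and do not reflect Napster's actual protocol.

class CentralIndex:
    """Toy central index: maps a file name to the peers that share it."""
    def __init__(self):
        self.index = {}

    def register(self, peer, files):
        for name in files:
            self.index.setdefault(name, set()).add(peer)

    def search(self, name):
        # The server only answers "who has this file"; the file itself is
        # then transferred directly between the two peers.
        return sorted(self.index.get(name, set()))

index = CentralIndex()
index.register("203.0.113.5:6699", ["song_a.mp3", "song_b.mp3"])
index.register("198.51.100.7:6699", ["song_b.mp3"])
print(index.search("song_b.mp3"))   # ['198.51.100.7:6699', '203.0.113.5:6699']
# Taking the single index server offline breaks search for every peer -- the
# centralized vulnerability that later networks such as Gnutella removed.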
Sharman Networks, the publisher of Kazaa, has been inactive since 2006. StreamCast Networks, the publisher of Morpheus, shut down on April 22, 2008. Limewire LLC was shut down in late 2010 or early 2011. This cleared the way for the dominance of the BitTorrent protocol, which differs from its predecessors in two major ways. The first is that no individual, group, or company owns the protocol or the terms "Torrent" or "BitTorrent", meaning that anyone can write and distribute client software that works with the network. The second is that BitTorrent clients have no search functionality of their own. Instead, users must rely on third-party websites like Isohunt or The Pirate Bay to find "torrent" files, which function like maps that tell the client how to find and download the files that the user actually wants. These two characteristics combined offer a level of decentralization that makes BitTorrent practically impossible to shut down. File-sharing networks are sometimes organized into three "generations" based on these different levels of decentralization. Darknets, including networks like Freenet, are sometimes considered to be third-generation file-sharing networks. Peer-to-peer file sharing is also efficient in terms of cost. The system administration overhead is smaller because the user is the provider and usually the provider is the administrator as well. Hence each network can be monitored by the users themselves. At the same time, large servers sometimes require more storage and this increases the cost since the storage has to be rented or bought exclusively for a server. However, usually peer-to-peer file sharing does not require a dedicated server. Economic impact There is ongoing discussion about the economic impact of P2P file sharing. Norbert Michel, a policy analyst at The Heritage Foundation, said that studies had produced "disparate estimates of file sharing's impact on album sales". In the book The Wealth of Networks, Yochai Benkler states that peer-to-peer file sharing is economically efficient and that the users pay the full transaction cost and marginal cost of such sharing even if it "throws a monkey wrench into the particular way in which our society has chosen to pay musicians and recording executives. This trades off efficiency for longer-term incentive effects for the recording industry. However, it is efficient within the normal meaning of the term in economics in a way that it would not have been had Jack and Jane used subsidized computers or network connections". A simple calculation comparing the total bandwidth cost of peer-to-peer file sharing with that of a conventional content delivery network illustrates this efficiency: with file sharing the upload burden is spread across the participating peers, whereas with a content delivery network it is concentrated on servers whose bandwidth must be paid for centrally. Music industry The economic effect of copyright infringement through peer-to-peer file sharing on music revenue has been controversial and difficult to determine. Unofficial studies found that file sharing had a negative impact on record sales. It has proven difficult to untangle the cause and effect relationships among a number of different trends, including an increase in legal online purchases of music; illegal file-sharing; drop in the prices of compact disks; and the closure of many independent music stores with a concomitant shift to sales by big-box retailers. Film industry The Motion Picture Association (MPAA) reported that American studios lost $2.373 billion in 2005, representing approximately one third of the total cost of film piracy in the United States. 
The MPAA's estimate was doubted by commentators since it was based on the assumption that one download was equivalent to one lost sale, and downloaders might not purchase the movie if illegal downloading was not an option. Due to the private nature of the study, the figures could not be publicly checked for methodology or validity. In January 2008, as the MPAA was lobbying for a bill which would compel Universities to crack down on piracy, it was admitted by MPAA that its figures on piracy in colleges had been inflated by up to 300%. A 2010 study, commissioned by the International Chamber of Commerce and conducted by independent Paris-based economics firm TERA, estimated that unlawful downloading of music, film and software cost Europe's creative industries several billion dollars in revenue each year. A further TERA study predicted losses due to piracy reaching as much as 1.2 million jobs and €240 billion in retail revenue by 2015 if the trend continued. Researchers applied a substitution rate of ten percent to the volume of copyright infringements per year. This rate corresponded to the number of units potentially traded if unlawful file sharing were eliminated and did not occur. Piracy rates for popular software and operating systems have been common, even in regions with strong intellectual property enforcement, such as the United States or the European Union. Public perception and usage In 2004, an estimated 70 million people participated in online file sharing. According to a CBS News poll, nearly 70 percent of 18- to 29-year-olds thought file sharing was acceptable in some circumstances and 58 percent of all Americans who followed the file sharing issue considered it acceptable in at least some circumstances. In January 2006, 32 million Americans over the age of 12 had downloaded at least one feature-length movie from the Internet, 80 percent of whom had done so exclusively over P2P. Of the population sampled, 60 percent felt that downloading copyrighted movies off the Internet did not constitute a very serious offense, however 78 percent believed taking a DVD from a store without paying for it constituted a very serious offense. In July 2008, 20 percent of Europeans used file sharing networks to obtain music, while 10 percent used paid-for digital music services such as iTunes. In February 2009, a survey undertaken by Tiscali in the UK found that 75 percent of the English public polled were aware of what was legal and illegal in relation to file sharing, but there was a divide as to where they felt the legal burden should be placed: 49 percent of people believed P2P companies should be held responsible for illegal file sharing on their networks and 18 percent viewed individual file sharers as the culprits. According to an earlier poll, 75 percent of young voters in Sweden (18-20) supported file sharing when presented with the statement: "I think it is OK to download files from the Net, even if it is illegal." Of the respondents, 38 percent said they "adamantly agreed" while 39 percent said they "partly agreed". An academic study among American and European college students found that users of file-sharing technologies were relatively anti-copyright and that copyright enforcement created backlash, hardening pro-file sharing beliefs among users of these technologies. Communities in P2P file sharing networks Communities have a prominent role in many peer to peer networks and applications, such as BitTorrent, Gnutella and DC++. 
There are different elements that contribute to the formation, development and the stability of these communities, which include interests, user attributes, cost reduction, user motivation and the dimension of the community. Interest attributes Peer communities are formed on the basis of common interests. For Khambatti, Ryu and Dasgupta common interests can be labelled as attributes "which are used to determine the peer communities in which a particular peer can participate". There are two ways in which these attributes can be classified: explicit and implicit attributes. Explicit values are information that peers provide about themselves to a specific community, such as their interest in a subject or their taste in music. With implicit values, users do not directly express information about themselves, albeit, it is still possible to find information about that specific user by uncovering his or her past queries and research carried out in a P2P network. Khambatti, Ryu and Dasgupta divide these interests further into three classes: personal, claimed and group attributes. A full set of attributes (common interests) of a specific peer is defined as personal attributes, and is a collection of information a peer has about him or herself. Peers may decide not to disclose information about themselves to maintain their privacy and online security. It is for this reason that the authors specify that "a subset of...attributes is explicitly claimed public by a peer", and they define such attributes as "claimed attributes". The third category of interests is group attributes, defined as "location or affiliation oriented" and are needed to form a...basis for communities", an example being the "domain name of an internet connection" which acts as an online location and group identifier for certain users. Cost reduction Cost reduction influences the sharing component of P2P communities. Users who share do so to attempt "to reduce...costs" as made clear by Cunningham, Alexander and Adilov. In their work Peer-to-peer File Sharing Communities, they explain that "the act of sharing is costly since any download from a sharer implies that the sharer is sacrificing bandwidth". As sharing represents the basis of P2P communities, such as Napster, and without it "the network collapses", users share despite its costs in order to attempt to lower their own costs, particularly those associated with searching, and with the congestion of internet servers. User motivation and size of community User motivation and the size of the P2P community contribute to its sustainability and activity. In her work Motivating Participation in Peer to Peer Communities, Vassileva studies these two aspects through an experiment carried out in the University of Saskatchewan (Canada), where a P2P application (COMUTELLA) was created and distributed among students. In her view, motivation is "a crucial factor" in encouraging users to participate in an online P2P community, particularly because the "lack of a critical mass of active users" in the form of a community will not allow for a P2P sharing to function properly. Usefulness is a valued aspect by users when joining a P2P community. The specific P2P system must be perceived as "useful" by the user and must be able to fulfil his or her needs and pursue his or her interests. Consequently, the "size of the community of users defines the level of usefulness" and "the value of the system determines the number of users". 
This two way process is defined by Vassileva as a feedback loop, and has allowed for the birth of file-sharing systems like Napster and KaZaA. However, in her research Vassileva has also found that "incentives are needed for the users in the beginning", particularly for motivating and getting users into the habit of staying online. This can be done, for example, by providing the system with a wide amount of resources or by having an experienced user provide assistance to a less experienced one. User classification Users participating in P2P systems can be classified in different ways. According to Vassileva, users can be classified depending on their participation in the P2P system. There are five types of users to be found: users who create services, users who allow services, users who facilitate search, users who allow communication, users who are uncooperative and free ride. In the first instance, the user creates new resources or services and offers them to the community. In the second, the user provides the community with disk space "to store files for downloads" or with "computing resources" to facilitate a service provided by another users. In the third, the user provides a list of relationships to help other users find specific files or services. In the fourth, the user participates actively in the "protocol of the network", contributing to keeping the network together. In the last situation, the user does not contribute to the network, downloads what he or she needs but goes immediately offline once the service is not needed anymore, thus free-riding on the network and community resources. Tracking Corporations continue to combat the use of the internet as a tool to illegally copy and share various files, especially that of copyrighted music. The Recording Industry Association of America (RIAA) has been active in leading campaigns against infringers. Lawsuits have been launched against individuals as well as programs such as Napster in order to "protect" copyright owners. One effort of the RIAA has been to implant decoy users to monitor the use of copyrighted material from a firsthand perspective. Risks In early June 2002, Researcher Nathaniel Good at HP Labs demonstrated that user interface design issues could contribute to users inadvertently sharing personal and confidential information over P2P networks. In 2003, Congressional hearings before the House Committee of Government Reform (Overexposed: The Threats to Privacy & Security on File Sharing Networks) and the Senate Judiciary Committee (The Dark Side of a Bright Idea: Could Personal and National Security Risks Compromise the Potential of P2P File-Sharing Networks?) were convened to address and discuss the issue of inadvertent sharing on peer-to-peer networks and its consequences to consumer and national security. Researchers have examined potential security risks including the release of personal information, bundled spyware, and viruses downloaded from the network. Some proprietary file sharing clients have been known to bundle malware, though open source programs typically have not. Some open source file sharing packages have even provided integrated anti-virus scanning. Since approximately 2004 the threat of identity theft had become more prevalent, and in July 2008 there was another inadvertent revealing of vast amounts of personal information through P2P sites. 
The "names, dates of birth, and Social Security numbers of about 2,000 of (an investment) firm's clients" were exposed, "including [those of] Supreme Court Justice Stephen Breyer." A drastic increase in inadvertent P2P file sharing of personal and sensitive information became evident in 2009 at the beginning of President Obama's administration when the blueprints to the helicopter Marine One were made available to the public through a breach in security via a P2P file sharing site. Access to this information has the potential of being detrimental to US security. Furthermore, shortly before this security breach, the Today show had reported that more than 150,000 tax returns, 25,800 student loan applications and 626,000 credit reports had been inadvertently made available through file sharing. The United States government then attempted to make users more aware of the potential risks involved with P2P file sharing programs through legislation such as H.R. 1319, the Informed P2P User Act, in 2009. According to this act, it would be mandatory for individuals to be aware of the risks associated with peer-to-peer file sharing before purchasing software with informed consent of the user required prior to use of such programs. In addition, the act would allow users to block and remove P2P file sharing software from their computers at any time, with the Federal Trade Commission enforcing regulations. US-CERT also warns of the potential risks. Copyright issues The act of file sharing is not illegal per se and peer-to-peer networks are also used for legitimate purposes. The legal issues in file sharing involve violating the laws of copyrighted material. Most discussions about the legality of file sharing are implied to be about solely copyright material. Many countries have fair use exceptions that permit limited use of copyrighted material without acquiring permission from the rights holders. Such documents include commentary, news reporting, research and scholarship. Copyright laws are territorial- they do not extend beyond the territory of a specific state unless that state is a party to an international agreement. Most countries today are parties to at least one such agreement. In the area of privacy, recent court rulings seem to indicate that there can be no expectation of privacy in data exposed over peer-to-peer file-sharing networks. In a 39-page ruling released November 8, 2013, US District Court Judge Christina Reiss denied the motion to suppress evidence gathered by authorities without a search warrant through an automated peer-to-peer search tool. Curtailing the sharing of copyrighted materials Media industries have made efforts to curtail the spread of copyrighted materials through P2P systems. Initially, the corporations were able to successfully sue the distribution platforms such as Napster and have them shut down. Additionally, they litigated users who prominently shared copyrighted materials en masse. However, as more decentralized systems such as FastTrack were developed, this proved to be unenforceable. There are also millions of users worldwide who use P2P systems illegally, which made it impractical to seek widespread legal action. One major effort involves distributing polluted files into the P2P network. For instance, one may distribute unrelated files that has the metadata of a copyrighted media. This way, users who downloads the media would receive something unrelated to what they have been expecting. 
See also Anonymous P2P Comparison of file-sharing applications Disk sharing File sharing in Canada File sharing in Japan File sharing timeline (peer to peer and not) Friend-to-friend or F2F List of P2P protocols Open Music Model Privacy in file sharing networks Private P2P Public domain Torrent poisoning Trade group efforts against file sharing Warez References Internet terminology Intellectual property law de:Filesharing
Peer-to-peer file sharing
[ "Technology" ]
4,158
[ "Computing terminology", "Internet terminology" ]
4,279,302
https://en.wikipedia.org/wiki/166%20%28number%29
166 (one hundred [and] sixty-six) is the natural number following 165 and preceding 167. In mathematics 166 is an even number and a composite number. It is a centered triangular number. Given 166, the Mertens function returns 0. 166 is a Smith number in base 10. 166 in Roman numerals is CLXVI, which uses each of the five smallest Roman numeral symbols (I, V, X, L and C) exactly once. External links Number Facts and Trivia: 166 The Number 166 VirtueScience: 166 166th Street (3rd Avenue El) References Integers
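The properties listed above can be checked by direct computation. The following Python sketch, with helper functions written from the standard definitions rather than taken from any particular library, verifies that 166 is a centered triangular number, that the Mertens function evaluated at 166 is 0, and that 166 is a Smith number in base 10 (166 = 2 × 83, and 1 + 6 + 6 = 2 + 8 + 3 = 13).

def digit_sum(n):
    return sum(int(d) for d in str(n))

def prime_factors(n):
    # prime factorization with multiplicity, by trial division
    f, d = [], 2
    while d * d <= n:
        while n % d == 0:
            f.append(d)
            n //= d
        d += 1
    if n > 1:
        f.append(n)
    return f

def mobius(n):
    f = prime_factors(n)
    return 0 if len(f) != len(set(f)) else (-1) ** len(f)

# centered triangular numbers have the form (3k^2 + 3k + 2) / 2
print(any((3*k*k + 3*k + 2) // 2 == 166 for k in range(20)))             # True
# Mertens function M(166) = sum of mobius(k) for k = 1..166
print(sum(mobius(k) for k in range(1, 167)))                             # 0
# Smith number: digit sum equals the digit sum of its prime factors
print(digit_sum(166) == sum(digit_sum(p) for p in prime_factors(166)))   # True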
166 (number)
[ "Mathematics" ]
105
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
4,279,364
https://en.wikipedia.org/wiki/Primasheet
Primasheet is a rubberized sheet explosive identical to Detasheet. Manufactured by Ensign-Bickford Aerospace & Defense Company, Primasheet comes in two varieties: Primasheet 1000 is PETN-based and Primasheet 2000 is RDX-based. Both are waterproof and are supplied in continuous rolls. Primasheet 1000 Primasheet 1000 is PETN-based, and contains 65% PETN, 8% nitrocellulose, and 27% plasticizer. Primasheet 1000 is olive green in color, and is manufactured in 1.0, 1.5, 2, 3, 4, 5, 6, and 8 mm thicknesses. Primasheet 2000 Primasheet 2000 is an RDX-based rubberized sheet explosive. It contains 88.2% RDX with the remainder plasticizer. It is as powerful as C4. SX2 A British military explosive, also manufactured by Ensign-Bickford. This is very similar or identical to commercial Primasheet 2000. References Explosives Rubberized explosives
Primasheet
[ "Chemistry" ]
213
[ "Explosives", "Explosions" ]
4,279,403
https://en.wikipedia.org/wiki/Correlation%20immunity
In mathematics, the correlation immunity of a Boolean function is a measure of the degree to which its outputs are uncorrelated with some subset of its inputs. Specifically, a Boolean function f(x1, ..., xn) is said to be correlation-immune of order m if every subset of m or fewer of the variables x1, ..., xn is statistically independent of the value of f(x1, ..., xn). Definition A function f : F2^n -> F2 is m-th order correlation immune if for any independent, uniformly distributed binary random variables X1, ..., Xn, the random variable Z = f(X1, ..., Xn) is independent from every random vector (X_i1, ..., X_im) with 1 <= i1 < ... < im <= n. Results in cryptography When used in a stream cipher as a combining function for linear feedback shift registers, a Boolean function with low-order correlation-immunity is more susceptible to a correlation attack than a function with correlation immunity of high order. Siegenthaler showed that the correlation immunity m of a Boolean function of algebraic degree d of n variables satisfies m + d ≤ n; for a given set of input variables, this means that a high algebraic degree will restrict the maximum possible correlation immunity. Furthermore, if the function is balanced then m + d ≤ n − 1. References Further reading Cusick, Thomas W. & Stanica, Pantelimon (2009). "Cryptographic Boolean functions and applications". Academic Press. Cryptography Boolean algebra
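For small functions the definition can be checked numerically. The Python sketch below computes the correlation-immunity order of a Boolean function through its Walsh–Hadamard transform, using the standard Xiao–Massey characterisation (a function is correlation-immune of order m exactly when its Walsh transform vanishes at every nonzero point of Hamming weight at most m); the example function f(x1, x2, x3) = x1 XOR x2 is chosen arbitrarily for illustration and is 1st-order but not 2nd-order correlation immune.

from itertools import product

def walsh(f, n, w):
    # W_f(w) = sum over all x in {0,1}^n of (-1)^(f(x) XOR <w, x>)
    total = 0
    for x in product((0, 1), repeat=n):
        dot = sum(wi * xi for wi, xi in zip(w, x)) % 2
        total += (-1) ** (f(x) ^ dot)
    return total

def correlation_immunity_order(f, n):
    order = 0
    for m in range(1, n + 1):
        weight_m = [w for w in product((0, 1), repeat=n) if sum(w) == m]
        if all(walsh(f, n, w) == 0 for w in weight_m):
            order = m      # Walsh transform vanishes on all weight-m points
        else:
            break
    return order

f = lambda x: x[0] ^ x[1]
print(correlation_immunity_order(f, 3))   # 1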
Correlation immunity
[ "Mathematics", "Engineering" ]
256
[ "Boolean algebra", "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Mathematical logic", "Fields of abstract algebra" ]
4,279,406
https://en.wikipedia.org/wiki/Mixotricha%20paradoxa
Mixotricha paradoxa is a species of protozoan that lives inside the gut of the Australian termite species Mastotermes darwiniensis. It is composed of five different organisms: three bacterial ectosymbionts live on its surface for locomotion and at least one endosymbiont lives inside to help digest cellulose in wood to produce acetate for its host(s). Mixotricha mitochondria degenerated into hydrogenosomes and mitosomes and lost the ability to produce energy aerobically by oxidative phosphorylation. The mitochondria-derived nuclear genes were however conserved. Discovery The name was given by the Australian biologist J.L. Sutherland, who first described Mixotricha in 1933. The name means "the paradoxical being with mixed-up hairs" because this protist has both cilia and flagella, which was not thought to be possible for protists. Behavior Mixotricha is a species of protozoan that lives inside the gut of the Australian termite species Mastotermes darwiniensis and has multiple bacterial symbionts. Mixotricha is a large protozoan and contains hundreds of thousands of bacteria. It is an endosymbiont and digests cellulose for the termite. Trichomonads like Mixotricha reproduce by a special form of longitudinal fission, leading to large numbers of trophozoites in a relatively short time. Cysts never form, so transmission from one host to another is always based on direct contact between the sites they occupy. Anatomy Species of the order Trichomonadida typically have four to six flagella at the cell's apical pole, one of which is recurrent - that is, it runs along a surface wave, giving the aspect of an undulating membrane. Mixotricha paradoxa has four weak flagella that serve as rudders. It has four large flagella at the front end, three pointing forwards and one backward. The basal bodies are also bacteria, not spirochaetes but oval, pill-shaped bacteria. There is a one-to-one relationship between a bracket, a spirochaete, and a basal bacterium. Each bracket has one spirochaete running through it and one pill bacterium at its base as the basal body. It has not been shown definitively, but the basal bodies could also be making cellulases that digest wood. Endosymbionts for biochemical processes At least one endosymbiont lives inside the protist to help digest cellulose and lignin, a major component of the wood the termites eat. The cellulose gets converted to glucose then to acetate, and the lignin is digested directly to acetate. The acetate probably crosses the termite gut membrane to be digested later. Mixotricha forms a mutualistic relationship with bacteria living inside the termite. There are a total of four species of bacterial symbionts. It has spherical bacteria inside the cell, which function as mitochondria, which Mixotricha lacks. Mixotricha mitochondria degenerated and lost the ability to produce energy aerobically by oxidative phosphorylation. Mitochondrial relics include hydrogenosomes which produce hydrogen and small structures called mitosomes. Ectosymbionts for movement Three surface-colonising bacteria are anchored on the surface. The flagella and cilia are actually two different single-celled organisms. The ciliate belongs to an archaic group that used to be called archezoa but this term is no longer in fashion. It has four weak flagella, which serve as a rudder. While Mixotricha has four anterior flagella, it does not use them for locomotion, but more for steering. 
For locomotion, about 250,000 hairlike Treponema spirochaetes, a species of helical bacteria, are attached to the cell surface and provide the cell with cilia-like movements. The wavelength of the cilia is about and suggests that the spirochaetes are somehow in touch with each other. Mixotricha also has rod-shaped bacteria arranged in an ordered pattern on the surface of the cell. Each spirochaete has its own little emplacement, called a 'bracket'. Spirochetes move continuously forwards or backwards but when they are attached they move in one direction. Sperm tails might have their origin in spirochaetes. The evidence that cilia (undulipodia) are symbiotic bacteria is found unpersuasive. Genome Mixotricha have five genomes, as they form very close symbiotic relationships with four types of bacteria. It is a good example organism for symbiogenesis and nestedness. There are two spirochete and one-rod bacteria on its surface, one endosymbiotic bacteria inside to digest cellulose and the host nucleus. References Metamonads Symbiosis Endosymbiotic events Protists described in 1933 Metamonad species
Mixotricha paradoxa
[ "Biology" ]
1,064
[ "Biological interactions", "Endosymbiotic events", "Symbiosis", "Behavior" ]
4,279,531
https://en.wikipedia.org/wiki/Health%20impact%20assessment
Health impact assessment (HIA) is defined as "a combination of procedures, methods, and tools by which a policy, program, or project may be judged as to its potential effects on the health of a population, and the distribution of those effects within the population." Overview HIA is intended to produce a set of evidence-based recommendations to inform decision-making . HIA seeks to maximise the positive health impacts and minimise the negative health impacts of proposed policies, programs or projects. The procedures of HIA are similar to those used in other forms of impact assessment, such as environmental impact assessment or social impact assessment. HIA is usually described as following the steps listed, though many practitioners break these into sub-steps or label them differently: Screening - determining if an HIA is warranted/required Scoping - determining which impacts will be considered and the plan for the HIA Identification and assessment of impacts - determining the magnitude, nature, extent and likelihood of potential health impacts, using a variety of different methods and types of information Decision-making and recommendations - making explicit the trade-offs to be made in decision-making and formulating evidence-informed recommendations Evaluation, monitoring and follow-up - process and impact evaluation of the HIA and the monitoring and management of health impacts The main objective of HIA is to apply existing knowledge and evidence about health impacts, to specific social and community contexts, to develop evidence-based recommendations that inform decision-making in order to protect and improve community health and wellbeing. Because of financial and time constraints, HIAs do not generally involve new research or the generation of original scientific knowledge. However, the findings of HIAs, especially where these have been monitored and evaluated over time, can be used to inform other HIAs in contexts that are similar. An HIA's recommendations may focus on both design and operational aspects of a proposal. HIA has also been identified as a mechanism by which potential health inequalities can be identified and redressed prior to the implementation of proposed policy, program or project . A number of manuals and guidelines for HIA's use have been developed (see further reading). Determinants of health The proposition that policies, programs and projects have the potential to change the determinants of health underpins HIA's use. Changes to health determinants then leads to changes in health outcomes or the health status of individuals and communities. The determinants of health are largely environmental and social, so that there are many overlaps with environmental impact assessment and social impact assessment. Levels of HIA Three forms of HIA exist: Desk-based HIA, which takes 2–6 weeks for one assessor to complete and provides a broad overview of potential health impacts; Rapid HIA, which takes approximately 12 weeks for one assessor to complete and provides more detailed information on potential health impacts; and Comprehensive HIA, which takes approximately 6 months for one assessor and provides a in-depth assessment of potential health impacts. It has been suggested that HIAs can be prospective (done before a proposal is implemented), concurrent (done while the proposal is being implemented) or retrospective (done after a proposal has been implemented) . 
This remains controversial, however, with a number of HIA practitioners suggesting that concurrent HIA is better regarded as a monitoring activity and that retrospective HIA is more akin to evaluation with a health focus, rather than being assessment per se . Prospective HIA is preferred as it allows the maximum practical opportunity to influence decision-making and subsequent health impacts. HIA practitioners HIA practitioners can be found in the private and public sectors, but are relatively few in number. There are no universally accepted competency frameworks or certification processes. It is suggested that a lead practitioner should have extensive education and training in a health related field, experience of participating in HIAs, and have attended an HIA training course. It has been suggested and widely accepted that merely having a medical or health degree should not be regarded as an indication of competency. The International Association for Impact Assessment has an active health section. A HIA People Directory can be found on the HIA GATEWAY. HIA worldwide HIA used around the world, most notably in Europe, North America, Australia, New Zealand, Africa and Thailand . The safeguard policies and standards of the International Finance Corporation (IFC), part of the World Bank, were established in 2006. These contain a requirement for health impact assessment in large projects. The standards have been accepted by most of the leading lending banks who are parties to the Equator Principles. Health impact assessments are becoming routine in many large development projects in both public and private sectors of developing countries. There is also a long history of health impact assessment in the water resource development sector - large dams and irrigation systems. Of the regional development banks, the Asian Development Bank has the longest and most consistent history of engaging with HIA. This engagement dates back to 1992, when it produced its first HIA Guidelines (ADB, 1992); this focused on the state of the art of methods and procedures at this early stage in the development of HIA. A second guidance document, a primer on health impacts of Development Project was published ten years later (Peralta and Hunt, 2003), with a focus on health risks and opportunities in development from sector-specific perspectives. Between 2015 and 2018, the Governments of Australia and the UK funded the Regional Malaria and Other Communicable Disease Threats Trust Fund (RMTF) which supported multi-country, cross-border and multisector responses to urgent malaria and other communicable disease issues, focused on the countries of the Greater Mekong Subregion (GMS). Under the domain of promotion and prevention mainly HIA capacity development was addressed. This resulted in a new publication: Health Impact Assessment: A Good Practice Source Book (2018). See also Impact Assessment Environmental impact assessment Equality Impact Assessment Four-Step Impact Assessment Health Equity Impact Analysis Healthy development measurement tool Risk assessment Social impact assessment Health promotion Jakarta Declaration Ottawa Charter for Health Promotion Health protection Environmental health List of environmental health hazards Precautionary principle Risk assessment Population health Public health Social determinants of health References . ADB (2018), Health Impact Assessment: A Good Practice Sourcebook, Manila, Asian Development Bank. https://www.adb.org/documents/health-impact-assessment-sourcebook . . . . . . 
This page uses Harvard referencing. References are sorted alphabetically by author surname. Further reading Books and edited book chapters ADB (1992). Guidelines for the Health Impact Assessment of Development Projects. ADB Environment Paper no. 11. Manila, Asian Development Bank. . . . . Peralta, G.M., and Hunt, J.M. (2003). A Primer of Health Impacts of Development Programs. Manila, Asian Development Bank. . . Includes several chapters on HIA. Journal articles . . . . . . . Journal special issues . . . . . . Manuals and guidelines . . . . . . . . . . This page uses Harvard referencing. Further reading categories are sorted alphabetically; citations are sorted by year (newest to oldest), then alphabetically by author surname within years. If citations are included in the references section they are not listed in the further reading section. HIA resource websites Health Impact Project - Funding for HIA and resources HIA Connect HIA Gateway IMPACT - International Health Impact Assessment Consortium RIVM HIA Database World Health Organization HIA Site Government HIA websites Environmental Health Branch, New South Wales Health (Australia) European Centre for Health Policy (Belgium) HPP-HIA Program (Thailand) Institute for Public Health in Ireland (Ireland) Planning for Healthy Places with Health Impact Assessments (United States) San Francisco Department of Public Health (United States) Centers for Disease Control and Prevention (United States) US Environmental Protection Agency (United States) University HIA websites Monash University, SHIA@Monash (Melbourne, Australia) University of Birmingham, HIA Research Unit (Birmingham, UK) University of California, Berkeley, Health Impact Group, School of Public Health (Berkeley, USA) University of California, Los Angeles, HIA Project (Los Angeles, USA) University of California, Los Angeles, Health Impact Assessment Clearinghouse Learning and Information Center (Los Angeles, USA) University of Liverpool, IMPACT - International Health Impact Assessment Consortium Department of Public Health and Policy.(Liverpool, UK) University of New South Wales, HIA Connect, Health Inequalities, Health Impact Assessment and Healthy Public Policy Program (CHETRE), Research Centre for Primary Health Care and Equity, Faculty of Medicine (Sydney, Australia) University of Otago, Health, Wellbeing and Equity Impact Assessment Unit, Wellington School of Medicine and Health Sciences (Wellington, New Zealand) Professional associations IAIA Society for Practitioners of Health Impact Assessment (SOPHIA) Other HIA websites Health Impact Assessment - International (Email Discussion Group) This page uses Harvard referencing. External links are sorted alphabetically. Public health Human geography Impact assessment Health care quality Health economics
Health impact assessment
[ "Environmental_science" ]
1,850
[ "Environmental social science", "Human geography" ]
4,280,030
https://en.wikipedia.org/wiki/International%20Conference%20on%20Functional%20Programming
The International Conference on Functional Programming (ICFP) is an annual academic conference in the field of computer science sponsored by the ACM SIGPLAN, in association with IFIP Working Group 2.8 (Functional Programming). The conference focuses on functional programming and related areas of programming languages, logic, compilers and software development. The ICFP was first held in 1996, replacing two biennial conferences: the Functional Programming and Computer Architecture (FPCA) and LISP and Functional Programming (LFP). The conference location alternates between Europe and North America, with occasional appearances in other continents. The conference usually lasts 3 days, surrounded by co-located workshops devoted to particular functional languages or application areas. The ICFP has also held an open annual programming contest since 1998, called the ICFP Programming Contest. History 2012: 17th ACM SIGPLAN International Conference on Functional Programming in Copenhagen, Denmark (General Chair: Peter Thiemann, University of Freiburg; Program Chair: Robby Findler, Northwestern University) See also Related conferences FSCD : International Conference on Formal Structures for Computation and Deduction FLOPS: International Symposium on Functional and Logic Programming IFL: International Symposia on Implementation and Application of Functional Languages ISMM: International Symposium on Memory Management MPC: International Conference on Mathematics of Program Construction PLDI: Programming Language Design and Implementation POPL: Principles of Programming Languages PPDP: International Conference on Principles and Practice of Declarative Programming TFP: Symposium on Trends in Functional Programming TLCA: International Conference on Typed Lambda Calculi and Applications TLDI: International Workshop on Types in Language Design and Implementation SAS: International Static Analysis Symposium Related journals Journal of Functional Programming Journal of Functional and Logic Programming Higher-Order and Symbolic Computation ACM Transactions on Programming Languages and Systems References External links ICFP main site ICFP 2023 conference ICFP Programming Contest Functional programming Programming languages conferences Computer science conferences
International Conference on Functional Programming
[ "Technology" ]
389
[ "Computing stubs", "Computer science", "Computer science conferences", "Computer conference stubs" ]
627,772
https://en.wikipedia.org/wiki/Sotades
Sotades (; 3rd century BC) was an Ancient Greek poet. Sotades was born in Maroneia, either the one in Thrace, or in Crete. He lived in Alexandria during the reign of Ptolemy II Philadelphus (285–246 BC). The city was at that time a remarkable center of learning, with a great deal of artistic and literary activity, including epic poetry and the Great Library. Only a few genuine fragments of his work have been preserved; those in Stobaeus are generally considered spurious. Ennius translated some poems of this kind, included in his book of satires under the name of Sota. He had a son named Apollonius. He has been credited with the invention of the palindrome. Sotades was the chief representative of the writers of obscene and even satirical poems, called "kinaidoi" (), composed in the Ionic dialect and in the metre named after him. One of his poems attacked Ptolemy II Philadelphus's marriage to his own sister Arsinoe II, from which came the infamous line: "You're sticking your prick in an unholy hole." For this, Sotades was imprisoned, but he escaped to the city of Caunus, where he was afterwards captured by the admiral Patroclus, shut up in a leaden chest, and thrown into the sea. British Orientalist and explorer Sir Richard Francis Burton (1821–1890) hypothesised the existence of a "Sotadic zone". He asserted that there exists a geographic zone in which pederasty is prevalent and celebrated among the indigenous inhabitants, and named it after Sotades. See also Sotadean metre References External links Sotades from the Wiki Classical Dictionary Sotades (2) from Smith, Dictionary of Greek and Roman Biography and Mythology (1867) Ancient Greek poets Ancient Thracian Greeks Erotic poetry Greek erotica writers Greek male writers Obscenity Palindromists Pederasty in ancient Greece Pornography 3rd-century BC Greek people 3rd-century BC poets People from the Ptolemaic Kingdom People from Maroneia
Sotades
[ "Physics" ]
441
[ "Palindromists", "Symmetry", "Palindromes" ]
627,828
https://en.wikipedia.org/wiki/Double%20Mersenne%20number
In mathematics, a double Mersenne number is a Mersenne number of the form M_{M_p} = 2^(2^p − 1) − 1, where p is prime. Examples The first four terms of the sequence of double Mersenne numbers are: M_{M_2} = M_3 = 7, M_{M_3} = M_7 = 127, M_{M_5} = M_31 = 2147483647, and M_{M_7} = M_127 = 170141183460469231731687303715884105727. Double Mersenne primes A double Mersenne number that is prime is called a double Mersenne prime. Since a Mersenne number M_p can be prime only if p is prime (see Mersenne prime for a proof), a double Mersenne number M_{M_p} can be prime only if M_p is itself a Mersenne prime. For the first values of p for which M_p is prime, M_{M_p} is known to be prime for p = 2, 3, 5, 7 while explicit factors of M_{M_p} have been found for p = 13, 17, 19, and 31. Thus, the smallest candidate for the next double Mersenne prime is M_{M_61}, or 2^2305843009213693951 − 1. Being approximately 1.695 × 10^694127911065419641, this number is far too large for any currently known primality test. It has no prime factor below 1 × 10^36. There are probably no other double Mersenne primes than the four known. The smallest prime factors of M_{M_p} (where p is the nth prime) are 7, 127, 2147483647, 170141183460469231731687303715884105727, 47, 338193759479, 231733529, 62914441, 2351, 1399, 295257526626031, 18287, 106937, 863, 4703, 138863, 22590223644617, ... (next term is > 1 × 10^36) Catalan–Mersenne number conjecture The recursively defined sequence c_0 = 2, c_{n+1} = 2^(c_n) − 1 is called the sequence of Catalan–Mersenne numbers. The first terms of the sequence are: 2, 3, 7, 127, 170141183460469231731687303715884105727, ... Catalan discovered this sequence after the discovery of the primality of M_127 = c_4 by Lucas in 1876. Catalan conjectured that they are prime "up to a certain limit". Although the first five terms are prime, no known methods can prove that any further terms are prime (in any reasonable time) simply because they are too huge. However, if c_5 is not prime, there is a chance to discover this by computing c_5 modulo some small prime q (using recursive modular exponentiation). If the resulting residue is zero, q represents a factor of c_5 and thus would disprove its primality. Since c_5 is a Mersenne number, such a prime factor q would have to be of the form 2·k·c_4 + 1. Additionally, because 2^n − 1 is composite when n is composite, the discovery of a composite term in the sequence would preclude the possibility of any further primes in the sequence. If c_5 were prime, it would also contradict the New Mersenne conjecture. It is known that (2^(M_127) + 1)/3 is composite, and an explicit factor of it has been found. In popular culture In the Futurama movie The Beast with a Billion Backs, the double Mersenne number M_{M_7} is briefly seen in "an elementary proof of the Goldbach conjecture". In the movie, this number is known as a "Martian prime". See also Cunningham chain Double exponential function Fermat number Perfect number Wieferich prime References Further reading External links Tony Forbes, A search for a factor of MM61 Status of the factorization of double Mersenne numbers Double Mersennes Prime Search Operazione Doppi Mersennes Eponymous numbers in mathematics Integer sequences Large integers Unsolved problems in number theory Mersenne primes
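The factor searches mentioned above come down to modular exponentiation: a prime q divides M_{M_p} = 2^(M_p) − 1 exactly when 2^(M_p) ≡ 1 (mod q), which can be tested without ever writing out the enormous number itself. The Python sketch below re-checks the small factors quoted in the list above for p = 13, 17, 19 and 31, and then scans candidate factors of M_{M_61} of the required form 2·k·M_61 + 1; the search bound is arbitrary and far below the 10^36 limit already noted above.

def divides_double_mersenne(q, p):
    # q divides 2^(2^p - 1) - 1  iff  2^(2^p - 1) == 1 (mod q); pow() keeps
    # everything reduced mod q, so the huge number is never expanded.
    return pow(2, 2**p - 1, q) == 1

# Small factors of M_{M_p} quoted in the list above (p : factor).
known = {13: 338193759479, 17: 231733529, 19: 62914441, 31: 295257526626031}
for p, q in known.items():
    print(p, divides_double_mersenne(q, p))   # should print True for each

# Trial search for a factor of M_{M_61}: any factor must have the
# form 2*k*M_61 + 1, so only such candidates are tested.
M61 = 2**61 - 1
for k in range(1, 1_000_000):   # tiny, arbitrary bound
    q = 2 * k * M61 + 1
    if pow(2, M61, q) == 1:
        print("factor found:", q)
        break
else:
    print("no factor with k below one million")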
Double Mersenne number
[ "Mathematics" ]
719
[ "Sequences and series", "Unsolved problems in mathematics", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Unsolved problems in number theory", "Combinatorics", "Mathematical problems", "Numbers", "Number theory" ]
627,964
https://en.wikipedia.org/wiki/Survival%20of%20the%20fittest
"Survival of the fittest" is a phrase that originated from Darwinian evolutionary theory as a way of describing the mechanism of natural selection. The biological concept of fitness is defined as reproductive success. In Darwinian terms, the phrase is best understood as "survival of the form that in successive generations will leave most copies of itself." Herbert Spencer first used the phrase, after reading Charles Darwin's On the Origin of Species, in his Principles of Biology (1864), in which he drew parallels between his own economic theories and Darwin's biological ones: "This survival of the fittest, which I have here sought to express in mechanical terms, is that which Mr. Darwin has called 'natural selection', or the preservation of favoured races in the struggle for life." Darwin responded positively to Alfred Russel Wallace's suggestion of using Spencer's new phrase "survival of the fittest" as an alternative to "natural selection", and adopted the phrase in The Variation of Animals and Plants Under Domestication published in 1868. In On the Origin of Species, he introduced the phrase in the fifth edition published in 1869, intending it to mean "better designed for an immediate, local environment". History of the phrase By his own account, Herbert Spencer described a concept similar to "survival of the fittest" in his 1852 "A Theory of Population". He first used the phrase – after reading Charles Darwin's On the Origin of Species – in his Principles of Biology of 1864 in which he drew parallels between his economic theories and Darwin's biological, evolutionary ones, writing, "This survival of the fittest, which I have here sought to express in mechanical terms, is that which Mr. Darwin has called 'natural selection', or the preservation of favored races in the struggle for life." In July 1866 Alfred Russel Wallace wrote to Darwin about readers thinking that the phrase "natural selection" personified nature as "selecting" and said this misconception could be avoided "by adopting Spencer's term" Survival of the fittest. Darwin promptly replied that Wallace's letter was "as clear as daylight. I fully agree with all that you say on the advantages of H. Spencer's excellent expression of 'the survival of the fittest'. This however had not occurred to me till reading your letter. It is, however, a great objection to this term that it cannot be used as a substantive governing a verb". Had he received the letter two months earlier, he would have worked the phrase into the fourth edition of the Origin which was then being printed, and he would use it in his next book on "Domestic Animals etc.". Darwin wrote on page 6 of The Variation of Animals and Plants Under Domestication published in 1868, "This preservation, during the battle for life, of varieties which possess any advantage in structure, constitution, or instinct, I have called Natural Selection; and Mr. Herbert Spencer has well expressed the same idea by the Survival of the Fittest. The term 'natural selection' is in some respects a bad one, as it seems to imply conscious choice; but this will be disregarded after a little familiarity". He defended his analogy as similar to language used in chemistry, and to astronomers depicting the "attraction of gravity as ruling the movements of the planets", or the way in which "agriculturists speak of man making domestic races by his power of selection". 
He had "often personified the word Nature; for I have found it difficult to avoid this ambiguity; but I mean by nature only the aggregate action and product of many natural laws,—and by laws only the ascertained sequence of events." In the first four editions of On the Origin of Species, Darwin had used the phrase "natural selection". In Chapter 4 of the 5th edition of The Origin published in 1869, Darwin implies again the synonym: "Natural Selection, or the Survival of the Fittest". By "fittest" Darwin meant "better adapted for the immediate, local environment", not the common modern meaning of "in the best physical shape" (think of a puzzle piece, not an athlete). In the introduction he gave full credit to Spencer, writing "I have called this principle, by which each slight variation, if useful, is preserved, by the term Natural Selection, in order to mark its relation to man's power of selection. But the expression often used by Mr. Herbert Spencer of the Survival of the Fittest is more accurate, and is sometimes equally convenient." In The Man Versus The State, Spencer used the phrase in a postscript to justify a plausible explanation of how his theories would not be adopted by "societies of militant type". He uses the term in the context of societies at war, and the form of his reference suggests that he is applying a general principle: "Thus by survival of the fittest, the militant type of society becomes characterized by profound confidence in the governing power, joined with a loyalty causing submission to it in all matters whatever". Though Spencer's conception of organic evolution is commonly interpreted as a form of Lamarckism, Herbert Spencer is sometimes credited with inaugurating Social Darwinism. Evolutionary biologists criticise the manner in which the term is used by non-scientists and the connotations that have grown around the term in popular culture. The phrase also does not help in conveying the complex nature of natural selection, so modern biologists prefer and almost exclusively use the term natural selection. The biological concept of fitness refers to both reproductive success (fecundity selection), as well as survival (viability selection) and is not prescriptive in the specific ways in which organisms can be more "fit" by having phenotypic characteristics that enhance survival and reproduction (which was the meaning that Spencer had in mind). Critiquing the phrase While the phrase "survival of the fittest" is often used to mean "natural selection", it is avoided by modern biologists, because the phrase can be misleading. For example, survival is only one aspect of selection, and not always the most important. Another problem is that the word "fit" is frequently confused with a state of physical fitness. In the evolutionary meaning "fitness" is the rate of reproductive output among a class of genetic variants. Interpreted as expressing a biological theory The phrase can also be interpreted to express a theory or hypothesis: that "fit" as opposed to "unfit" individuals or species, in some sense of "fit", will survive some test. Nevertheless, when extended to individuals it is a conceptual mistake, the phrase is a reference to the transgenerational survival of the heritable attributes; particular individuals are quite irrelevant. 
This becomes clearer in the case of viral quasispecies and "survival of the flattest", where "survival" refers not to the question of being alive at all, but to the functional capacity of proteins to carry out work. Interpretations of the phrase as expressing a theory are in danger of being tautological, meaning roughly "those with a propensity to survive have a propensity to survive"; to have content the theory must use a concept of fitness that is independent of that of survival. Interpreted as a theory of species survival, the theory that the fittest species survive is undermined by evidence that while direct competition is observed between individuals, populations and species, there is little evidence that competition has been the driving force in the evolution of large groups such as, for example, amphibians, reptiles, and mammals. Instead, these groups have evolved by expanding into empty ecological niches. In the punctuated equilibrium model of environmental and biological change, the factor determining survival is often not superiority over another in competition but ability to survive dramatic changes in environmental conditions, such as after a meteor impact energetic enough to greatly change the environment globally. In 2010 Sahney et al. argued that there is little evidence that intrinsic, biological factors such as competition have been the driving force in the evolution of large groups. Instead, they cited extrinsic, abiotic factors such as expansion as the driving factor on a large evolutionary scale. The rise of dominant groups such as amphibians, reptiles, mammals and birds occurred by opportunistic expansion into empty ecological niches, and the extinction of groups happened due to large shifts in the abiotic environment. Interpreted as expressing a moral theory Social Darwinists The use of the term "social Darwinism" as a critique of capitalist ideologies was introduced in Richard Hofstadter's Social Darwinism in American Thought published in 1944. Anarchists Russian zoologist and anarchist Peter Kropotkin viewed the concept of "survival of the fittest" as supporting co-operation rather than competition. In his book Mutual Aid: A Factor of Evolution he set out his analysis leading to the conclusion that the fittest was not necessarily the best at competing individually, but often the community made up of those best at working together. He concluded that: In the animal world we have seen that the vast majority of species live in societies, and that they find in association the best arms for the struggle for life: understood in its wide Darwinian sense – not as a struggle for the sheer means of existence, but as a struggle against all natural conditions unfavourable to the species. The animal species, in which individual struggle has been reduced to its narrowest limits, and the practice of mutual aid has attained the greatest development, are invariably the most numerous, the most prosperous, and the most open to further progress. Applying this concept to human society, Kropotkin presented mutual aid as one of the dominant factors of evolution, the other being self-assertion, and concluded that: In the practice of mutual aid, which we can retrace to the earliest beginnings of evolution, we thus find the positive and undoubted origin of our ethical conceptions; and we can affirm that in the ethical progress of man, mutual support – not mutual struggle – has had the leading part. 
In its wide extension, even at the present time, we also see the best guarantee of a still loftier evolution of our race. Tautology "Survival of the fittest" is sometimes claimed to be a tautology. The reasoning is that if one takes the term "fit" to mean "endowed with phenotypic characteristics which improve chances of survival and reproduction" (which is roughly how Spencer understood it), then "survival of the fittest" can simply be rewritten as "survival of those who are better equipped for surviving". Furthermore, the expression does become a tautology if one uses the most widely accepted definition of "fitness" in modern biology, namely reproductive success itself (rather than any set of characters conducive to this reproductive success). This reasoning is sometimes used to claim that Darwin's entire theory of evolution by natural selection is fundamentally tautological, and therefore devoid of any explanatory power. However, the expression "survival of the fittest" (taken on its own and out of context) gives a very incomplete account of the mechanism of natural selection. The reason is that it does not mention a key requirement for natural selection, namely the requirement of heritability. It is true that the phrase "survival of the fittest", in and by itself, is a tautology if fitness is defined by survival and reproduction. Natural selection is the portion of variation in reproductive success that is caused by heritable characters (see the article on natural selection). If certain heritable characters increase or decrease the chances of survival and reproduction of their bearers, then it follows mechanically (by definition of "heritable") that those characters that improve survival and reproduction will increase in frequency over generations. This is precisely what is called "evolution by natural selection". On the other hand, if the characters which lead to differential reproductive success are not heritable, then no meaningful evolution will occur, "survival of the fittest" or not: if improvement in reproductive success is caused by traits that are not heritable, then there is no reason why these traits should increase in frequency over generations. In other words, natural selection does not simply state that "survivors survive" or "reproducers reproduce"; rather, it states that "survivors survive, reproduce and therefore propagate any heritable characters which have affected their survival and reproductive success". This statement is not tautological: it hinges on the testable hypothesis that such fitness-impacting heritable variations actually exist (a hypothesis that has been amply confirmed.) Momme von Sydow suggested further definitions of 'survival of the fittest' that may yield a testable meaning in biology and also in other areas where Darwinian processes have been influential. However, much care would be needed to disentangle tautological from testable aspects. Moreover, an "implicit shifting between a testable and an untestable interpretation can be an illicit tactic to immunize natural selection ... while conveying the impression that one is concerned with testable hypotheses". Skeptic Society founder and Skeptic magazine publisher Michael Shermer addresses the tautology problem in his 1997 book, Why People Believe Weird Things, in which he points out that although tautologies are sometimes the beginning of science, they are never the end, and that scientific principles like natural selection are testable and falsifiable by virtue of their predictive power. 
Shermer points out, as an example, that population genetics accurately demonstrates when natural selection will and will not effect change in a population. Shermer hypothesizes that if hominid fossils were found in the same geological strata as trilobites, it would be evidence against natural selection. See also Red Queen hypothesis Notes References External links Origins of the phrase AboutDarwin.com — Darwin's Timeline Pioneers of Psychology Evolution Quotations compiled by GIGA Tautology links Darwin's Untimely Burial by Stephen Jay Gould Evolution and Philosophy: A Good Tautology is Hard to Find by John Wilkins, part of the talk.origins archive. CA500: "Survival of the fittest is a tautology" from the talk.origins index to creationist claims by Mark Ridley. Is "survival of the fittest" a tautology by Don Lindsay. Darwin's Great Tautology by the Doubting Thomas Morality link CA002: Survival of the fittest implies that "might makes right" David Hume — Dialogues Concerning Natural Religion Evolution and philosophy — Does evolution make might right? by John S. Wilkins. Kropotkin: Mutual Aid Mutual Aid: A Factor of Evolution – HTML version at the Anarchy Archives Charles Darwin Evolutionary biology concepts Herbert Spencer Biological evolution English phrases Metaphors Survival
Survival of the fittest
[ "Biology" ]
3,022
[ "Evolutionary biology concepts" ]
627,984
https://en.wikipedia.org/wiki/George%20Alcock
George Eric Deacon Alcock, MBE (28 August 1912, in Peterborough, Northamptonshire – 15 December 2000) was an English amateur astronomer. He was one of the most successful visual discoverers of novae and comets. Alcock's interest in astronomy was sparked by witnessing the solar eclipse of 8 April 1921. His interest evolved into the observation of meteors and meteor showers, leading him to join the British Astronomical Association on 27 March 1935. In 1953, he started his search for comets and in 1955 for novae. His technique involved memorization of the patterns of thousands of stars, so that he would visually recognize any intruder. In 1959, he discovered comet C/1959 Q1 — the first comet discovered in Britain since 1894. After five days, he discovered another, named C/1959 Q2. He discovered two more comets in 1963 (C/1963 F1) and 1965 (C/1965 S2). His first nova was Nova Delphini 1967 (HR Delphini), which turned out to have an unusual light curve. He discovered two more novae, LV Vul (in 1968) and V368 Sct (in 1970). He found his fifth and final comet in 1983: C/1983 H1 (IRAS-Araki-Alcock). In 1991 he found the nova V838 Her. Honours and awards Alcock became a Fellow of three British societies in 1947: the Royal Astronomical Society, the Royal Geographical Society, and the Royal Meteorological Society. He won the Jackson-Gwilt Medal of the Royal Astronomical Society in 1963. On 7 February 1979, Queen Elizabeth II conferred on him an MBE. In 1981, he received the International Amateur Achievement Award from the Astronomical Society of the Pacific. An asteroid, 3174 Alcock, is named after him. He also maintained an active interest in meteorology (the study of weather, unrelated to his interest in meteors). In 1996, Genesis Publications published a limited edition signed biography, authored by Kay Williams, entitled "Under An English Heaven - The Life of George Alcock". After his death, a plaque was placed in Peterborough Cathedral in his memory. Personal life In 1936, Alcock met Mary Green through their shared interest in astronomy. They were married on 7 June 1941 and, from 1955, lived in the village of Farcet in a house they called Antares, where Alcock discovered five comets and five novae. Mary died on 25 October 1991. References External links Ian Ridpath, "The man with the astronomical memory", New Scientist, 16 December 1982 1912 births 2000 deaths 20th-century English astronomers Amateur astronomers Discoverers of comets Members of the Order of the British Empire People from Peterborough
George Alcock
[ "Astronomy" ]
549
[ "Astronomers", "Amateur astronomers" ]
628,083
https://en.wikipedia.org/wiki/Thomas%20Digges
Thomas Digges (; c. 1546 – 24 August 1595) was an English mathematician and astronomer. He was the first to expound the Copernican system in English but discarded the notion of a fixed shell of immoveable stars to postulate infinitely many stars at varying distances. He was also first to postulate the "dark night sky paradox". Life Thomas Digges, born about 1546, was the son of Leonard Digges (c. 1515 – c. 1559), the mathematician and surveyor, and Bridget Wilford, the daughter of Thomas Wilford, esquire, of Hartridge in Cranbrook, Kent, by his first wife, Elizabeth Culpeper, the daughter of Walter Culpeper, esquire. Digges had two brothers, James and Daniel, and three sisters, Mary, who married a man with the surname of Barber; Anne, who married William Digges; and Sarah, whose first husband was surnamed Martin, and whose second husband was John Weston. After the death of his father, Digges grew up under the guardianship of John Dee, a typical Renaissance natural philosopher. In 1583, Lord Burghley appointed Digges, with John Chamber and Henry Savile, to sit on a commission to consider whether England should adopt the Gregorian calendar, as proposed by Dee. Digges served as a member of parliament for Wallingford and also had a military career as a Muster-Master General to the English forces from 1586 to 1594 during the war with the Spanish Netherlands. In his capacity of Master-Muster General he was instrumental in promoting improvements at the Port of Dover. Digges died on 24 August 1595. His last will, in which he specifically excluded both his brother, James Digges, and William Digges, was proved on 1 September. Digges was buried in the chancel of the church of St Mary Aldermanbury, London. Marriage and issue Digges married Anne St Leger (1555–1636), daughter of Sir Warham St Leger and his first wife, Ursula Neville (d. 1575), the fifth daughter of George Neville, 5th Baron Bergavenny, by his third wife, Mary Stafford. In his will he named two surviving sons, Sir Dudley Digges (1583–1639), politician and statesman, and Leonard Digges (1588–1635), poet, and two surviving daughters, Margaret and Ursula. After Digges's death, his widow, Anne, married Thomas Russell of Alderminster in Warwickshire, "whom in 1616 William Shakespeare named as an overseer of his will". Work Digges attempted to determine the parallax of the 1572 supernova observed by Tycho Brahe, and concluded it had to be beyond the orbit of the Moon. This contradicted Aristotle's view of the universe, according to which no change could take place among the fixed stars. In 1576, he published a new edition of his father's perpetual almanac, A Prognostication everlasting. The text written by Leonard Digges for the third edition of 1556 was left unchanged, but Thomas added new material in several appendices. The most important of these was A Perfit Description of the Caelestiall Orbes according to the most aunciente doctrine of the Pythagoreans, latelye revived by Copernicus and by Geometricall Demonstrations approved. Contrary to the Ptolemaic cosmology of the original book by his father, the appendix featured a detailed discussion of the controversial and still poorly known Copernican heliocentric model of the Universe. This was the first publication of that model in English, and a milestone in the popularisation of science. For the most part, the appendix was a loose translation into English of chapters from Copernicus' book De revolutionibus orbium coelestium. 
Thomas Digges went further than Copernicus, however, by proposing that the universe is infinite, containing infinitely many stars, and may have been the first person to do so, predating Giordano Bruno's (1584) and William Gilbert's (1600) same views. According to Harrison: An illustration of the Copernican universe can be seen above right. The outer inscription on the map reads (after spelling adjustments from Elizabethan to Modern English): In 1583, Lord Burghley appointed Digges, along with Henry Savile (Bible translator) and John Chamber, to sit on a commission to consider whether England should adopt the Gregorian calendar, as proposed by John Dee; in fact Britain did not adopt the calendar until 1752. References Sources and further reading Text of the Perfit Description: Johnson, Francis R. and Larkey, Sanford V., "Thomas Digges, the Copernican System and the idea of the Infinity of the Universe in 1576," Huntington Library Bulletin 5 (1934): 69–117. Harrison, Edward Robert (1987) Darkness at Night. Harvard University Press: 211–17. An abridgement of the preceding. Internet version at Dartmouth retrieved on 2 November 2013 Gribbin, John, 2002. Science: A History. Penguin. Johnson, Francis R., Astronomical Thought in Renaissance England: A Study of the English Scientific Writings from 1500 to 1645, Johns Hopkins Press, 1937. Kugler, Martin Astronomy in Elizabethan England, 1558 to 1585: John Dee, Thomas Digges, and Giordano Bruno, Montpellier: Université Paul Valéry, 1982. Vickers, Brian (ed.), Occult & Scientific Mentalities in the Renaissance. Cambridge: Cambridge University Press, 1984. External links Digges, Thomas Thomas Digges, Gentleman and Mathematician John Dee, Thomas Digges and the identity of the mathematician Digges's Mactutor biography Digges, Thomas (1546–1595), History of Parliament 1540s births 1595 deaths 16th-century Calvinist and Reformed Christians 16th-century English mathematicians Alumni of Queens' College, Cambridge John Dee 16th-century English astronomers English MPs 1572–1583 English MPs 1584–1585 Copernican Revolution People from Dover District
Thomas Digges
[ "Astronomy" ]
1,272
[ "Copernican Revolution", "History of astronomy" ]
628,094
https://en.wikipedia.org/wiki/Screen%20filter
A screen filter is a type of water purification device that uses a rigid or flexible screen to separate sand and other fine particles out of water for irrigation or industrial applications. These are generally not recommended for filtering out organic matter such as algae, since these types of contaminants can be extruded into spaghetti-like strings through the filter if enough pressure drop occurs across the filter surface. Typical screen materials include stainless steel (mesh), polypropylene, nylon and polyester. Use Self-cleaning screen filters incorporate an automatic backwash cycle to overcome these limitations. Backwash cycles are far more frequent when compared to a media filter with similar capacity, and each backwash requires far less water to perform. Their ability to quickly remove contaminants from water before they leach their nutrients makes such filters popular choices for recirculating aquaculture systems. They have also become popular in closed loop industrial systems such as cooling tower, heat exchanger, and other equipment protection applications. Similar devices with larger openings designed only to keep out large objects are called strainers. Stainless-steel strainers are used in industrial, municipal, and irrigation applications, and can be designed for very high flow rates. When paired with a controller and flush valve, a strainer can be fully automated. Suspended particles collect on the inside of the screen, and the flush valve opens to expel the buildup. This eliminates the need for manual cleaning of the strainer element. See also Traveling screen References Water treatment Filters Irrigation Water technology
Screen filter
[ "Chemistry", "Engineering", "Environmental_science" ]
324
[ "Water treatment", "Chemical equipment", "Filters", "Water pollution", "Filtration", "Environmental engineering", "Water technology" ]
628,111
https://en.wikipedia.org/wiki/Hydrocyclone
Hydrocyclones are a type of cyclonic separator that separates product phases mainly on the basis of differences in specific gravity, with aqueous solutions as the primary feed fluid. As opposed to dry or dust cyclones, which separate solids from gases, hydrocyclones separate solids or different phase fluids from the bulk fluid. A hydrocyclone comprises a cylindrical feed section with a tangential inlet, an overflow section with a vortex finder, and a conical section ending in an apex. A cyclone has no moving parts. Working principle Product is fed into the hydrocyclone tangentially under a certain pressure. This creates a centrifugal movement, pushing the heavier phase outward and downward alongside the wall of the conical part. The decreasing diameter in the conical part increases the speed and so enhances the separation. Finally, the concentrated solids are discharged through the apex. The vortex finder in the overflow part creates a fast rotating upward spiral movement of the fluid in the centre of the conically shaped housing. The fluid is discharged through the overflow outlet. Cyclone parameters The following parameters are decisive for good cyclone operation: the design the specific weight difference between the two product phases the shape of the solids the speed of the feed the density of the medium the counter pressure at the overflow and apex Areas of application The main areas of application for hydrocyclones are: Mineral processing industry Hydrocyclones are frequently utilized in the metallurgical and mineral processing industry for the classification of fine particles and dewatering of slurries. In coal washery units, when a suspension of magnetite in water is used as the medium, the cyclone is called a dense media cyclone (DMC). DMCs are frequently used in coal washeries for washing coal. Starch industry Hydrocyclones are commonplace in the potato starch, cassava starch, wheat starch and corn starch industry for the concentration and refining of starch slurries. The potato processing industry Hydrocyclones are used for the separation of starch from cutting water in the French fries and potato crisp and potato flakes industry. Sand separation and classification Hydrocyclones are used for sand separation and classification, and as separators of sand from water or sludge Oil-water separation: Separation of oil and water in, among other things, the offshore industry Dewatering: concentration of slurry and dewatering of sludge for disposal Microplastic separation: Removal of microplastics from wastewater References Filters Solid-gas separation Waste treatment technology 
Hydrocyclone
[ "Chemistry", "Engineering" ]
515
[ "Separation processes by phases", "Solid-gas separation", "Water treatment", "Chemical equipment", "Filters", "Filtration", "Environmental engineering", "Waste treatment technology" ]
628,113
https://en.wikipedia.org/wiki/Labrador%20tea
Labrador tea is a common name for three closely related plant species in the genus Rhododendron as well as a herbal tea made from their leaves. All three species are primarily wetland plants in the heath family. Labrador tea has long been a favorite beverage among the Dene and Inuit peoples. Description All three species used to make Labrador tea are low, slow-growing shrubs with evergreen leaves: Rhododendron tomentosum (northern Labrador tea, previously Ledum palustre), Rhododendron groenlandicum (bog Labrador tea, previously Ledum groenlandicum or Ledum latifolium) and Rhododendron neoglandulosum (western Labrador tea, or trapper's tea, previously Ledum glandulosum or Ledum columbianum). The leaves are smooth on top with often wrinkled edges, and fuzzy white to red-brown underneath, and point straight to the sides or downward. R. tomentosum, R. groenlandicum, and R. neoglandulosum can be found in wetlands and peat bogs. Uses The Athabaskans and other indigenous peoples brew the leaves as a beverage. The Pomo, Kashaya, Tolowa and Yurok of Northern California boil the leaves of western Labrador tea similarly, to make a medicinal herbal tea to help with coughs and colds. Botanical extracts from the leaves have been used to create natural skin care products by companies in Quebec and Newfoundland and Labrador. Others use Labrador tea to spice meat by boiling the leaves and branches in water and then soaking the meat in the decoction. During the eighteenth century, German brewers used R. tomentosum while brewing beer to make it more intoxicating, but the practice became forbidden because it was thought to lead to increased aggression. Toxicology There is insufficient data to demonstrate that Labrador tea is safe to consume, as toxicity varies across species and localities. Excessive consumption is not recommended, as it can cause diuresis, vomiting, dizziness, and drowsiness. Large doses can lead to cramps, convulsions, paralysis, and, in rare cases, death. Toxicity occurs due to the terpenoid ledol, which is found in all Labrador tea species. R. groenlandicum has the lowest toxicity due to lower levels of ledol. Moderately narcotic grayanotoxins are also present, but few lethal human cases of poisoning solely due to grayanotoxins have been documented. However, lethal poisonings have been documented in livestock. Harvesting Tea leaves are collected from multiple plants in the spring. Labrador tea is slow-growing, so only a single new leaf is collected from a plant every other year, to avoid damaging individual plants. See also Labrador tea doll References Herbal teas Inuit cuisine Plant common names Plants used in traditional Native American medicine Plants used in Native American cuisine
Labrador tea
[ "Biology" ]
595
[ "Plant common names", "Common names of organisms", "Plants" ]
628,183
https://en.wikipedia.org/wiki/Goldstone%20boson
In particle and condensed matter physics, Goldstone bosons or Nambu–Goldstone bosons (NGBs) are bosons that appear necessarily in models exhibiting spontaneous breakdown of continuous symmetries. They were discovered by Yoichiro Nambu in particle physics within the context of the BCS superconductivity mechanism, and subsequently elucidated by Jeffrey Goldstone, and systematically generalized in the context of quantum field theory. In condensed matter physics such bosons are quasiparticles and are known as Anderson–Bogoliubov modes. These spinless bosons correspond to the spontaneously broken internal symmetry generators, and are characterized by the quantum numbers of these. They transform nonlinearly (shift) under the action of these generators, and can thus be excited out of the asymmetric vacuum by these generators. Thus, they can be thought of as the excitations of the field in the broken symmetry directions in group space—and are massless if the spontaneously broken symmetry is not also broken explicitly. If, instead, the symmetry is not exact, i.e. if it is explicitly broken as well as spontaneously broken, then the Nambu–Goldstone bosons are not massless, though they typically remain relatively light; they are then called pseudo-Goldstone bosons or pseudo–Nambu–Goldstone bosons (abbreviated PNGBs). Goldstone's theorem Goldstone's theorem examines a generic continuous symmetry which is spontaneously broken; i.e., its currents are conserved, but the ground state is not invariant under the action of the corresponding charges. Then, necessarily, new massless (or light, if the symmetry is not exact) scalar particles appear in the spectrum of possible excitations. There is one scalar particle—called a Nambu–Goldstone boson—for each generator of the symmetry that is broken, i.e., that does not preserve the ground state. The Nambu–Goldstone mode is a long-wavelength fluctuation of the corresponding order parameter. By virtue of their special properties in coupling to the vacuum of the respective symmetry-broken theory, vanishing momentum ("soft") Goldstone bosons involved in field-theoretic amplitudes make such amplitudes vanish ("Adler zeros"). Examples Natural In fluids, the phonon is longitudinal and it is the Goldstone boson of the spontaneously broken Galilean symmetry. In solids, the situation is more complicated; the Goldstone bosons are the longitudinal and transverse phonons and they happen to be the Goldstone bosons of spontaneously broken Galilean, translational, and rotational symmetry with no simple one-to-one correspondence between the Goldstone modes and the broken symmetries. In magnets, the original rotational symmetry (present in the absence of an external magnetic field) is spontaneously broken such that the magnetization points in a specific direction. The Goldstone bosons then are the magnons, i.e., spin waves in which the local magnetization direction oscillates. The pions are the pseudo-Goldstone bosons that result from the spontaneous breakdown of the chiral-flavor symmetries of QCD effected by quark condensation due to the strong interaction. These symmetries are further explicitly broken by the masses of the quarks so that the pions are not massless, but their mass is significantly smaller than typical hadron masses. The longitudinal polarization components of the W and Z bosons correspond to the Goldstone bosons of the spontaneously broken part of the electroweak symmetry SU(2)⊗U(1), which, however, are not observable. 
Because this symmetry is gauged, the three would-be Goldstone bosons are absorbed by the three gauge bosons corresponding to the three broken generators; this gives these three gauge bosons a mass and the associated necessary third polarization degree of freedom. This is described in the Standard Model through the Higgs mechanism. An analogous phenomenon occurs in superconductivity, which served as the original source of inspiration for Nambu, namely, the photon develops a dynamical mass (expressed as magnetic flux exclusion from a superconductor), cf. the Ginzburg–Landau theory. Primordial fluctuations during inflation can be viewed as Goldstone bosons arising due to the spontaneous symmetry breaking of time translation symmetry of a de Sitter universe. These fluctuations in the inflaton scalar field subsequently seed cosmic structure formation. Ricciardi and Umezawa proposed in 1967 a general theory (quantum brain) about the possible brain mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons. This theory was subsequently extended in 1995 by Giuseppe Vitiello taking into account that the brain is an "open" system (the dissipative quantum model of the brain). Applications of spontaneous symmetry breaking and of Goldstone's theorem to biological systems, in general, have been published by E. Del Giudice, S. Doglia, M. Milani, and G. Vitiello, and by E. Del Giudice, G. Preparata and G. Vitiello. Mari Jibu and Kunio Yasue and Giuseppe Vitiello, based on these findings, discussed the implications for consciousness. Theory Consider a complex scalar field , with the constraint that , a constant. One way to impose a constraint of this sort is by including a potential interaction term in its Lagrangian density, and taking the limit as . This is called the "Abelian nonlinear σ-model". The constraint, and the action, below, are invariant under a U(1) phase transformation, . The field can be redefined to give a real scalar field (i.e., a spin-zero particle) without any constraint by where is the Nambu–Goldstone boson (actually is) and the U(1) symmetry transformation effects a shift on , namely but does not preserve the ground state (i.e. the above infinitesimal transformation does not annihilate it—the hallmark of invariance), as evident in the charge of the current below. Thus, the vacuum is degenerate and noninvariant under the action of the spontaneously broken symmetry. The corresponding Lagrangian density is given by and thus Note that the constant term in the Lagrangian density has no physical significance, and the other term in it is simply the kinetic term for a massless scalar. The symmetry-induced conserved U(1) current is The charge, Q, resulting from this current shifts and the ground state to a new, degenerate, ground state. Thus, a vacuum with will shift to a different vacuum with . The current connects the original vacuum with the Nambu–Goldstone boson state, . In general, in a theory with several scalar fields, , the Nambu–Goldstone mode is massless, and parameterises the curve of possible (degenerate) vacuum states. Its hallmark under the broken symmetry transformation is nonvanishing vacuum expectation , an order parameter, for vanishing , at some ground state |0〉 chosen at the minimum of the potential, . In principle the vacuum should be the minimum of the effective potential which takes into account quantum effects, however it is equal to the classical potential to first approximation. 
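In standard textbook notation (the symbols φ, v and θ below are the conventional choices, assumed here rather than fixed by the passage), the Abelian example just described can be sketched as follows:

```latex
% Constraint on the complex scalar field, imposed via a potential
% \lambda (\phi^{*}\phi - v^{2})^{2} in the limit \lambda \to \infty:
\phi^{*}\phi = v^{2}, \qquad v = \text{constant}.
% Field redefinition; the U(1) phase transformation acts as a shift on \theta:
\phi = v\, e^{i\theta}, \qquad \phi \to e^{i\epsilon}\phi \;\Longleftrightarrow\; \theta \to \theta + \epsilon .
% On the constraint surface the Lagrangian density is that of a free massless scalar
% (any mass term m^{2}\phi^{*}\phi reduces to an irrelevant constant):
\mathcal{L} = \partial^{\mu}\phi^{*}\,\partial_{\mu}\phi = v^{2}\,\partial^{\mu}\theta\,\partial_{\mu}\theta .
% Conserved U(1) Noether current (up to sign and normalization conventions):
J_{\mu} = i\left(\phi^{*}\partial_{\mu}\phi - \phi\,\partial_{\mu}\phi^{*}\right) = -2 v^{2}\,\partial_{\mu}\theta .
```

Because θ enters only through derivatives it is massless, and the charge built from this current generates the shift of θ, mapping one degenerate vacuum into another; this is the content of Goldstone's theorem in this example.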
Symmetry dictates that all variations of the potential with respect to the fields in all symmetry directions vanish. The vacuum value of the first order variation in any direction vanishes as just seen; while the vacuum value of the second order variation must also vanish, as follows. Vanishing vacuum values of field symmetry transformation increments add no new information. By contrast, however, nonvanishing vacuum expectations of transformation increments, , specify the relevant (Goldstone) null eigenvectors of the mass matrix, and hence the corresponding zero-mass eigenvalues. Goldstone's argument The principle behind Goldstone's argument is that the ground state is not unique. Normally, by current conservation, the charge operator for any symmetry current is time-independent, Acting with the charge operator on the vacuum either annihilates the vacuum, if that is symmetric; else, if not, as is the case in spontaneous symmetry breaking, it produces a zero-frequency state out of it, through its shift transformation feature illustrated above. Actually, here, the charge itself is ill-defined, cf. the Fabri–Picasso argument below. But its better behaved commutators with fields, that is, the nonvanishing transformation shifts , are, nevertheless, time-invariant, thus generating a in its Fourier transform. (This ensures that, inserting a complete set of intermediate states in a nonvanishing current commutator can lead to vanishing time-evolution only when one or more of these states is massless.) Thus, if the vacuum is not invariant under the symmetry, action of the charge operator produces a state which is different from the vacuum chosen, but which has zero frequency. This is a long-wavelength oscillation of a field which is nearly stationary: there are physical states with zero frequency, , so that the theory cannot have a mass gap. This argument is further clarified by taking the limit carefully. If an approximate charge operator acting in a huge but finite region is applied to the vacuum, a state with approximately vanishing time derivative is produced, Assuming a nonvanishing mass gap , the frequency of any state like the above, which is orthogonal to the vacuum, is at least , Letting become large leads to a contradiction. Consequently 0 = 0. However this argument fails when the symmetry is gauged, because then the symmetry generator is only performing a gauge transformation. A gauge transformed state is the same exact state, so that acting with a symmetry generator does not get one out of the vacuum (see Higgs mechanism). Fabri–Picasso Theorem. does not properly exist in the Hilbert space, unless . The argument requires both the vacuum and the charge to be translationally invariant, , . Consider the correlation function of the charge with itself, so the integrand in the right hand side does not depend on the position. Thus, its value is proportional to the total space volume, — unless the symmetry is unbroken, . Consequently, does not properly exist in the Hilbert space. Infraparticles There is an arguable loophole in the theorem. If one reads the theorem carefully, it only states that there exist non-vacuum states with arbitrarily small energies. Take for example a chiral N = 1 super QCD model with a nonzero squark VEV which is conformal in the IR. The chiral symmetry is a global symmetry which is (partially) spontaneously broken. 
Some of the "Goldstone bosons" associated with this spontaneous symmetry breaking are charged under the unbroken gauge group and hence, these composite bosons have a continuous mass spectrum with arbitrarily small masses but yet there is no Goldstone boson with exactly zero mass. In other words, the Goldstone bosons are infraparticles. Extensions Nonrelativistic theories A version of Goldstone's theorem also applies to nonrelativistic theories. It essentially states that, for each spontaneously broken symmetry, there corresponds some quasiparticle which is typically a boson and has no energy gap. In condensed matter these goldstone bosons are also called gapless modes (i.e. states where the energy dispersion relation is like and is zero for ), the nonrelativistic version of the massless particles (i.e. photons where the dispersion relation is also and zero for ). Note that the energy in the non relativistic condensed matter case is and not as it would be in a relativistic case. However, two different spontaneously broken generators may now give rise to the same Nambu–Goldstone boson. As a first example an antiferromagnet has 2 goldstone bosons, a ferromagnet has 1 goldstone bosons, where in both cases we are breaking symmetry from SO(3) to SO(2), for the antiferromagnet the dispersion is and the expectation value of the ground state is zero, for the ferromagnet instead the dispersion is and the expectation value of the ground state is not zero, i.e. there is a spontaneously broken symmetry for the ground state As a second example, in a superfluid, both the U(1) particle number symmetry and Galilean symmetry are spontaneously broken. However, the phonon is the Goldstone boson for both. Still in regards to symmetry breaking there is also a close analogy between gapless modes in condensed matter and the Higgs boson, e.g. in the paramagnet to ferromagnet phase transition Breaking of spacetime symmetries In contrast to the case of the breaking of internal symmetries, when spacetime symmetries such as Lorentz, conformal, rotational, or translational symmetries are broken, the order parameter need not be a scalar field, but may be a tensor field, and the number of independent massless modes may be fewer than the number of spontaneously broken generators. For a theory with an order parameter that spontaneously breaks a spacetime symmetry, the number of broken generators minus the number non-trivial independent solutions to is the number of Goldstone modes that arise. For internal symmetries, the above equation has no non-trivial solutions, so the usual Goldstone theorem holds. When solutions do exist, this is because the Goldstone modes are linearly dependent among themselves, in that the resulting mode can be expressed as a gradients of another mode. Since the spacetime dependence of the solutions is in the direction of the unbroken generators, when all translation generators are broken, no non-trivial solutions exist and the number of Goldstone modes is once again exactly the number of broken generators. In general, the phonon is effectively the Nambu–Goldstone boson for spontaneously broken translation symmetry. Nambu–Goldstone fermions Spontaneously broken global fermionic symmetries, which occur in some supersymmetric models, lead to Nambu–Goldstone fermions, or goldstinos. These have spin , instead of 0, and carry all quantum numbers of the respective supersymmetry generators broken spontaneously. 
Spontaneous supersymmetry breaking smashes up ("reduces") supermultiplet structures into the characteristic nonlinear realizations of broken supersymmetry, so that goldstinos are superpartners of all particles in the theory, of any spin, and the only superpartners, at that. That is, to say, two non-goldstino particles are connected to only goldstinos through supersymmetry transformations, and not to each other, even if they were so connected before the breaking of supersymmetry. As a result, the masses and spin multiplicities of such particles are then arbitrary. See also Pseudo-Goldstone boson Majoron Higgs mechanism Mermin–Wagner theorem Vacuum expectation value Noether's theorem Notes References Bosons Quantum field theory Mathematical physics Physics theorems Subatomic particles with spin 0
Goldstone boson
[ "Physics", "Mathematics" ]
3,155
[ "Quantum field theory", "Equations of physics", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Bosons", "Subatomic particles", "Mathematical physics", "Matter", "Physics theorems" ]
628,198
https://en.wikipedia.org/wiki/Majoron
In particle physics, majorons (named after Ettore Majorana) are a hypothetical type of Goldstone boson conjectured to mediate the violation of lepton number or B − L associated with neutrino mass in certain high energy collisions such as e⁻ + e⁻ → W⁻ + W⁻ + J, where two electrons collide to form two W bosons and the majoron J. The U(1)B–L symmetry is assumed to be global and spontaneously broken, so that the majoron is not "eaten up" by a gauge boson. Majorons were originally formulated in four dimensions by Yuichi Chikashige, Rabindra Mohapatra and Roberto Peccei to understand neutrino masses by the seesaw mechanism and are being searched for in the neutrino-less double beta decay process. The name majoron was suggested by Graciela Gelmini as a derivative of the last name Majorana with the suffix -on typical of particle names like electron, proton, neutron, etc. There are theoretical extensions of this idea into supersymmetric theories and theories involving extra compactified dimensions. If majorons propagate through the extra spatial dimensions, the detectable number of majoron creation events varies accordingly. Mathematically, majorons may be modeled by allowing them to propagate through a material while all other Standard Model forces are fixed to an orbifold point. Searches Experiments studying double beta decay have set limits on decay modes that emit majorons. NEMO has studied a variety of elements. EXO and KamLAND-Zen have set half-life limits for majoron decays in xenon. See also List of hypothetical particles References Further reading Bosons Hypothetical elementary particles Subatomic particles with spin 0
Majoron
[ "Physics" ]
348
[ "Matter", "Unsolved problems in physics", "Bosons", "Particle physics", "Particle physics stubs", "Hypothetical elementary particles", "Physics beyond the Standard Model", "Subatomic particles" ]
628,391
https://en.wikipedia.org/wiki/Encrypting%20File%20System
The Encrypting File System (EFS) on Microsoft Windows is a feature introduced in version 3.0 of NTFS that provides filesystem-level encryption. The technology enables files to be transparently encrypted to protect confidential data from attackers with physical access to the computer. EFS is available in all versions of Windows except the home versions (see Supported operating systems below) from Windows 2000 onwards. By default, no files are encrypted, but encryption can be enabled by users on a per-file, per-directory, or per-drive basis. Some EFS settings can also be mandated via Group Policy in Windows domain environments. Cryptographic file system implementations for other operating systems are available, but the Microsoft EFS is not compatible with any of them. See also the list of cryptographic file systems. Basic ideas When an operating system is running on a system without file encryption, access to files normally goes through OS-controlled user authentication and access control lists. However, if an attacker gains physical access to the computer, this barrier can be easily circumvented. One way, for example, would be to remove the disk and put it in another computer with an OS installed that can read the filesystem; another, would be to simply reboot the computer from a boot CD containing an OS that is suitable for accessing the local filesystem. The most widely accepted solution to this is to store the files encrypted on the physical media (disks, USB pen drives, tapes, CDs and so on). In the Microsoft Windows family of operating systems EFS enables this measure, although on NTFS drives only, and does so using a combination of public key cryptography and symmetric key cryptography to make decrypting the files extremely difficult without the correct key. However, the cryptography keys for EFS are in practice protected by the user account password, and are therefore susceptible to most password attacks. In other words, the encryption of a file is only as strong as the password to unlock the decryption key. Operation EFS works by encrypting a file with a bulk symmetric key, also known as the File Encryption Key, or FEK. It uses a symmetric encryption algorithm because it takes less time to encrypt and decrypt large amounts of data than if an asymmetric key cipher is used. The symmetric encryption algorithm used will vary depending on the version and configuration of the operating system; see Algorithms used by Windows version below. The FEK (the symmetric key that is used to encrypt the file) is then encrypted with a public key that is associated with the user who encrypted the file, and this encrypted FEK is stored in the $EFS alternative data stream of the encrypted file. To decrypt the file, the EFS component driver uses the private key that matches the EFS digital certificate (used to encrypt the file) to decrypt the symmetric key that is stored in the $EFS stream. The EFS component driver then uses the symmetric key to decrypt the file. Because the encryption & decryption operations are performed at a layer below NTFS, it is transparent to the user and all their applications. Folders whose contents are to be encrypted by the file system are marked with an encryption attribute. The EFS component driver treats this encryption attribute in a way that is analogous to the inheritance of file permissions in NTFS: if a folder is marked for encryption, then by default all files and subfolders that are created under the folder are also encrypted. 
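To make the hybrid scheme above concrete, here is a minimal Python sketch of the same pattern (bulk data encrypted with a symmetric key, the key then wrapped with a user's RSA public key), using the third-party cryptography package. This is only a conceptual analogue under assumed key names, not EFS itself: EFS relies on Windows certificate stores, its own algorithms, and the $EFS alternate data stream.

```python
# Conceptual sketch of a hybrid "FEK" scheme, loosely analogous to EFS.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# User key pair (EFS would use the key pair behind an EFS certificate instead).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. Encrypt the file contents with a fresh symmetric key (the "FEK").
fek = Fernet.generate_key()
ciphertext = Fernet(fek).encrypt(b"confidential file contents")

# 2. Wrap the FEK with the user's public key; EFS stores the wrapped key
#    alongside the file (in the $EFS alternate data stream).
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_fek = public_key.encrypt(fek, oaep)

# 3. To read the file: unwrap the FEK with the private key, then decrypt the data.
recovered_fek = private_key.decrypt(wrapped_fek, oaep)
assert Fernet(recovered_fek).decrypt(ciphertext) == b"confidential file contents"
```

Wrapping only the small FEK asymmetrically keeps the per-file cost close to that of the symmetric cipher, which is why EFS, like most file-encryption systems, uses this hybrid design; as noted above, the overall strength still reduces to whatever protects the private key, typically the user's logon credentials.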
When encrypted files are moved within an NTFS volume, the files remain encrypted. However, there are a number of occasions in which the file could be decrypted without the user explicitly asking Windows to do so. Files and folders are decrypted before being copied to a volume formatted with another file system, like FAT32. Finally, when encrypted files are copied over the network using the SMB/CIFS protocol, the files are decrypted before they are sent over the network. The most significant way of preventing the decryption-on-copy is using backup applications that are aware of the "Raw" APIs. Backup applications that have implemented these Raw APIs will simply copy the encrypted file stream and the $EFS alternative data stream as a single file. In other words, the files are "copied" (e.g. into the backup file) in encrypted form, and are not decrypted during backup. Starting with Windows Vista, a user's private key can be stored on a smart card; Data Recovery Agent (DRA) keys can also be stored on a smart card. Security Vulnerabilities Two significant security vulnerabilities existed in Windows 2000 EFS, and have been variously targeted since. In Windows 2000, the local administrator is the default Data Recovery Agent, capable of decrypting all files encrypted with EFS by any local user. EFS in Windows 2000 cannot function without a recovery agent, so there is always someone who can decrypt encrypted files of the users. Any non-domain-joined Windows 2000 computer will be susceptible to unauthorized EFS decryption by anyone who can take over the local Administrator account, which is trivial given many tools available freely on the Internet. In Windows XP and later, there is no default local Data Recovery Agent and no requirement to have one. Setting SYSKEY to mode 2 or 3 (syskey typed in during bootup or stored on a floppy disk) will mitigate the risk of unauthorized decryption through the local Administrator account. This is because the local user's password hashes, stored in the SAM file, are encrypted with the Syskey, and the Syskey value is not available to an offline attacker who does not possess the Syskey passphrase/floppy. Accessing private key via password reset In Windows 2000, the user's RSA private key is not only stored in a truly encrypted form, but there is also a backup of the user's RSA private key that is more weakly protected. If an attacker gains physical access to the Windows 2000 computer and resets a local user account's password, the attacker can log in as that user (or recovery agent) and gain access to the RSA private key which can decrypt all files. This is because the backup of the user's RSA private key is encrypted with an LSA secret, which is accessible to any attacker who can elevate their login to LocalSystem (again, trivial given numerous tools on the Internet). In Windows XP and beyond, the user's RSA private key is backed up using an offline public key whose matching private key is stored in one of two places: the password reset disk (if Windows XP is not a member of a domain) or in the Active Directory (if Windows XP is a member of a domain). This means that an attacker who can authenticate to Windows XP as LocalSystem still does not have access to a decryption key stored on the PC's hard drive. 
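EFS can also be invoked programmatically through the Win32 EncryptFile and DecryptFile functions exported by Advapi32. The following is a minimal Python ctypes sketch (Windows-only, NTFS volume assumed; the path is purely illustrative):

```python
# Minimal, illustrative use of the Win32 EFS API via ctypes (Windows only).
# EncryptFileW/DecryptFileW are exported by Advapi32 and work per file or folder.
import ctypes

advapi32 = ctypes.windll.advapi32  # only available on Windows

path = r"C:\Users\alice\Documents\secret.txt"  # hypothetical NTFS path

if not advapi32.EncryptFileW(path):  # returns 0 (FALSE) on failure
    raise ctypes.WinError()

# The owner (or a recovery agent) can later read the file transparently,
# or explicitly remove the encryption attribute:
# advapi32.DecryptFileW(path, 0)    # second argument is reserved and must be 0
```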
In Windows 2000, XP or later, the user's RSA private key is encrypted using a hash of the user's NTLM password hash plus the user name – use of a salted hash makes it extremely difficult to reverse the process and recover the private key without knowing the user's passphrase. Also, again, setting Syskey to mode 2 or 3 (Syskey typed in during bootup or stored on a floppy disk) will mitigate this attack, since the local user's password hash will be stored encrypted in the SAM file. Other issues Once a user is logged on successfully, access to his own EFS encrypted data requires no additional authentication, decryption happens transparently. Thus, any compromise of the user's password automatically leads to access to that data. Windows can store versions of user account passphrases with reversible encryption, though this is no longer default behaviour; it can also be configured to store (and will by default on the original version of Windows XP and lower) Lan Manager hashes of the local user account passphrases, which can be attacked and broken easily. It also stores local user account passphrases as NTLM hashes, which can be fairly easily attacked using "rainbow tables" if the passwords are weak (Windows Vista and later versions don't allow weak passwords by default). To mitigate the threat of trivial brute-force attacks on local passphrases, older versions of Windows need to be configured (using the Security Settings portion of Group Policy) to never store LM hashes, and of course, to not enable Autologon (which stores plaintext passphrases in the registry). Further, using local user account passphrases over 14 characters long prevents Windows from storing an LM hash in the SAM – and has the added benefit of making brute-force attacks against the NTLM hash harder. When encrypting files with EFS – when converting plaintext files to encrypted files – the plaintext files are not wiped, but simply deleted (i.e. data blocks flagged as "not in use" in the filesystem). This means that, unless they for example happen to be stored on an SSD with TRIM support, they can be easily recovered unless they are overwritten. To fully mitigate known, non-challenging technical attacks against EFS, encryption should be configured at the folder level (so that all temporary files like Word document backups which are created in these directories are also encrypted). When encrypting individual files, they should be copied to an encrypted folder or encrypted "in place", followed by securely wiping the disk volume. The Windows Cipher utility can be used (with the /W option) to wipe free space including that which still contains deleted plaintext files; various third-party utilities may work as well. Anyone who can gain Administrators access can overwrite, override or change the Data Recovery Agent configuration. This is a very serious issue, since an attacker can for example hack the Administrator account (using third-party tools), set whatever DRA certificate they want as the Data Recovery Agent and wait. This is sometimes referred to as a two-stage attack, which is a significantly different scenario than the risk due to a lost or stolen PC, but which highlights the risk due to malicious insiders. When the user encrypts files after the first stage of such an attack, the FEKs are automatically encrypted with the designated DRA's public key. The attacker only needs to access the computer once more as Administrator to gain full access to all those subsequently EFS-encrypted files. 
Even using Syskey mode 2 or 3 does not protect against this attack, because the attacker could back up the encrypted files offline, restore them elsewhere and use the DRA's private key to decrypt the files. If such a malicious insider can gain physical access to the computer, all security features are to be considered irrelevant, because they could also install rootkits, software or even hardware keyloggers etc. on the computer – which is potentially much more interesting and effective than overwriting DRA policy. Recovery Files encrypted with EFS can only be decrypted by using the RSA private key(s) matching the previously used public key(s). The stored copy of the user's private key is ultimately protected by the user's logon password. Accessing encrypted files from outside Windows with other operating systems (Linux, for example) is not possible—not least of which because there is currently no third party EFS component driver. Further, using special tools to reset the user's login password will render it impossible to decrypt the user's private key and thus useless for gaining access to the user's encrypted files. The significance of this is occasionally lost on users, resulting in data loss if a user forgets his or her password, or fails to back up the encryption key. This led to coining of the term "delayed recycle bin", to describe the seeming inevitability of data loss if an inexperienced user encrypts his or her files. If EFS is configured to use keys issued by a Public Key Infrastructure and the PKI is configured to enable Key Archival and Recovery, encrypted files can be recovered by recovering the private key first. Keys user password (or smart card private key): used to generate a decryption key to decrypt the user's DPAPI Master Key DPAPI Master Key: used to decrypt the user's RSA private key(s) RSA private key: used to decrypt each file's FEK File Encryption Key (FEK): used to decrypt/encrypt each file's data (in the primary NTFS stream) SYSKEY: used to encrypt the cached domain verifier and the password hashes stored in the SAM Supported operating systems Windows Windows 2000 Professional, Server, Advanced Server and Datacenter editions Windows XP Professional, also in Tablet PC Edition, Media Center Edition and x64 Edition Windows Server 2003 and Windows Server 2003 R2, in both x86 and x64 editions Windows Vista Business, Enterprise and Ultimate editions Windows 7 Professional, Enterprise and Ultimate editions Windows Server 2008 and Windows Server 2008 R2 Windows 8 and 8.1 Pro and Enterprise editions Windows Server 2012 and Windows Server 2012 R2 Windows 10 Pro, Enterprise, and Education editions. Windows 11 Pro, Enterprise, and Education editions. Windows Server 2016 Windows Server 2019 Other operating systems No other operating systems or file systems have native support for EFS. 
New features available by Windows version Windows XP Encryption of the Client-Side Cache (Offline Files database) Protection of DPAPI Master Key backup using domain-wide public key Autoenrollment of user certificates (including EFS certificates) Multiple-user (shared) access to encrypted files (on a file-by-file basis) and revocation checking on certificates used when sharing encrypted files Encrypted files can be shown in an alternative color (green by default) No requirement for mandatory Recovery Agent Warning when files may be getting silently decrypted when moving to an unsupported file system Password reset disk EFS over WebDAV and remote encryption for servers delegated in Active Directory Windows XP SP1 Support for and default use of AES-256 symmetric encryption algorithm for all EFS-encrypted files Windows XP SP2 + KB 912761 Prevent enrollment of self-signed EFS certificates Windows Server 2003 Digital Identity Management Service Enforcement of RSAKeyLength setting for enforcing a minimum key length when enrolling self-signed EFS certificates Windows Vista and Windows Server 2008 Per-user encryption of Client-Side Cache (Offline Files) Support for storing (user or DRA) RSA private keys on a PC/SC smart card EFS Re-Key Wizard EFS Key backup prompts Support for deriving DPAPI Master Key from PC/SC smart card Support for encryption of pagefile.sys Protection of EFS-related secrets using BitLocker (Enterprise or Ultimate edition of Windows Vista) Group Policy controls to enforce Encryption of Documents folder Offline files encryption Indexing of encrypted files Requiring smart card for EFS Creating a caching-capable user key from smart card Displaying a key backup notification when a user key is created or changed Specifying the certificate template used for enrolling EFS certificates automatically Windows Server 2008 EFS self-signed certificates enrolled on the Windows Server 2008 server will default to 2048-bit RSA key length All EFS templates (user and data recovery agent certificates) default to 2048-bit RSA key length Windows 7 and Windows Server 2008 R2 Elliptic-curve cryptographic algorithms (ECC). Windows 7 supports a mixed mode operation of ECC and RSA algorithms for backward compatibility EFS self-signed certificates, when using ECC, will use 256-bit key by default. EFS can be configured to use 1K/2k/4k/8k/16k-bit keys when using self-signed RSA certificates, or 256/384/521-bit keys when using ECC certificates. Windows 10 version 1607 and Windows Server 2016 Add EFS support on FAT and exFAT. Algorithms used by Windows version Windows EFS supports a range of symmetric encryption algorithms, depending on the version of Windows in use when the files are encrypted: See also BitLocker Data Protection API Disk encryption Disk encryption software eCryptfs EncFS Filesystem-level encryption Hardware-based full disk encryption References Further reading Special-purpose file systems Cryptographic software Windows disk file systems
Encrypting File System
[ "Mathematics" ]
3,570
[ "Cryptographic software", "Mathematical software" ]
628,466
https://en.wikipedia.org/wiki/Outer%20automorphism%20group
In mathematics, the outer automorphism group of a group, G, is the quotient, Aut(G) / Inn(G), where Aut(G) is the automorphism group of G and Inn(G) is the subgroup consisting of inner automorphisms. The outer automorphism group is usually denoted Out(G). If Out(G) is trivial and G has a trivial center, then G is said to be complete. An automorphism of a group that is not inner is called an outer automorphism. The cosets of Inn(G) with respect to outer automorphisms are then the elements of Out(G); this is an instance of the fact that quotients of groups are not, in general, (isomorphic to) subgroups. If the inner automorphism group is trivial (when a group is abelian), the automorphism group and outer automorphism group are naturally identified; that is, the outer automorphism group does act on the group. For example, for the alternating group, An, the outer automorphism group is usually the group of order 2, with exceptions noted below. Considering An as a subgroup of the symmetric group, Sn, conjugation by any odd permutation is an outer automorphism of An or more precisely "represents the class of the (non-trivial) outer automorphism of An", but the outer automorphism does not correspond to conjugation by any particular odd element, and all conjugations by odd elements are equivalent up to conjugation by an even element. Structure The Schreier conjecture asserts that Out(G) is always a solvable group when G is a finite simple group. This result is now known to be true as a corollary of the classification of finite simple groups, although no simpler proof is known. As dual of the center The outer automorphism group is dual to the center in the following sense: conjugation by an element of G is an automorphism, yielding a map G → Aut(G). The kernel of the conjugation map is the center, while the cokernel is the outer automorphism group (and the image is the inner automorphism group). This can be summarized by the exact sequence 1 → Z(G) → G → Aut(G) → Out(G) → 1. Applications The outer automorphism group of a group acts on conjugacy classes, and accordingly on the character table. See details at character table: outer automorphisms. Topology of surfaces The outer automorphism group is important in the topology of surfaces because there is a connection provided by the Dehn–Nielsen theorem: the extended mapping class group of the surface is the outer automorphism group of its fundamental group. In finite groups For the outer automorphism groups of all finite simple groups see the list of finite simple groups. Sporadic simple groups and alternating groups (other than the alternating group A6; see below) all have outer automorphism groups of order 1 or 2. The outer automorphism group of a finite simple group of Lie type is an extension of a group of "diagonal automorphisms" (cyclic except for Dn(q) with n even, when it has order 4), a group of "field automorphisms" (always cyclic), and a group of "graph automorphisms" (of order 1 or 2 except for D4(q), when it is the symmetric group on 3 points). These extensions are not always semidirect products, as the case of the alternating group A6 shows; a precise criterion for this to happen was given in 2003. In symmetric and alternating groups The outer automorphism group of a finite simple group in some infinite family of finite simple groups can almost always be given by a uniform formula that works for all elements of the family. There is just one exception to this: the alternating group A6 has outer automorphism group of order 4, rather than 2 as do the other simple alternating groups (given by conjugation by an odd permutation). 
Equivalently the symmetric group S6 is the only symmetric group with a non-trivial outer automorphism group. Note that, in the case of A6, the sequence does not split. A similar result holds for any , odd. In reductive algebraic groups Let now G be a connected reductive group over an algebraically closed field. Then any two Borel subgroups are conjugate by an inner automorphism, so to study outer automorphisms it suffices to consider automorphisms that fix a given Borel subgroup. Associated to the Borel subgroup is a set of simple roots, and the outer automorphism may permute them, while preserving the structure of the associated Dynkin diagram. In this way one may identify the automorphism group of the Dynkin diagram of G with a subgroup of Out(G). D4 has a very symmetric Dynkin diagram, which yields a large outer automorphism group of Spin(8), namely S3; this is called triality. In complex and real simple Lie algebras The preceding interpretation of outer automorphisms as symmetries of a Dynkin diagram follows from the general fact, that for a complex or real simple Lie algebra g, the automorphism group Aut(g) is a semidirect product of Inn(g) and Out(g); i.e., the short exact sequence 1 → Inn(g) → Aut(g) → Out(g) → 1 splits. In the complex simple case, this is a classical result, whereas for real simple Lie algebras, this fact was proven as recently as 2010. Word play The term outer automorphism lends itself to word play: the term outermorphism is sometimes used for outer automorphism, and a particular geometry on which Out(Fn) acts is called outer space. See also Mapping class group Out(Fn) Outer space References External links ATLAS of Finite Group Representations-V3, contains a lot of information on various classes of finite groups (in particular sporadic simple groups), including the order of Out(G) for each group listed. Group theory Group automorphisms
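As a concrete illustration of the quotient definition and of the exact sequence above, consider the quaternion group Q8; this worked example is not taken from the article text, but the computation is standard:

```latex
% Worked example: the outer automorphism group of the quaternion group Q_8.
\[
\begin{aligned}
Z(Q_8) &= \{\pm 1\},\\
\operatorname{Inn}(Q_8) &\cong Q_8 / Z(Q_8) \cong \mathbb{Z}/2 \times \mathbb{Z}/2 \quad (\text{order } 4),\\
\operatorname{Aut}(Q_8) &\cong S_4 \quad (\text{order } 24),\\
\operatorname{Out}(Q_8) &= \operatorname{Aut}(Q_8)/\operatorname{Inn}(Q_8) \cong S_3 \quad (\text{order } 24/4 = 6).
\end{aligned}
\]
```

For an abelian group the inner automorphisms vanish, so Out(G) simply coincides with Aut(G), as noted in the article.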
Outer automorphism group
[ "Mathematics" ]
1,126
[ "Functions and mappings", "Mathematical objects", "Group theory", "Fields of abstract algebra", "Mathematical relations", "Group automorphisms" ]
628,467
https://en.wikipedia.org/wiki/Space%20burial
Space burial is the launching of human remains into space. Missions may go into orbit around the Earth or to extraterrestrial bodies such as the Moon, or farther into space. Remains are sealed until the spacecraft burns up upon re-entry into the Earth's atmosphere or they reach their extraterrestrial destinations. Suborbital flights briefly transport them into space then return to Earth where they can be recovered. Small samples of remains are usually launched to minimize the cost of launching mass into space, thereby making such services more affordable. History and typology The concept of launching human remains into space using conventional rockets was proposed by the science fiction author Neil R. Jones in the novella "The Jameson Satellite", which was published in the pulp magazine Amazing Stories in 1931. It was later proposed as a commercial service in the 1965 movie, The Loved One, and by Richard DeGroot in a The Seattle Times newspaper article on April 3, 1977. Since 1997, the private company Celestis has conducted numerous space burials flying as secondary payloads. Maiden flights The first space burial occurred in 1992 when the NASA (mission STS-52) carried a sample of Gene Roddenberry's cremated remains into space and returned them to Earth. The first private space burial, Celestis' Earthview 01: The Founders Flight, was launched on April 21, 1997. An aircraft departing from the Canary Islands carried a Pegasus rocket containing samples of the remains of 24 people to an altitude of above the Atlantic Ocean. The rocket then carried the remains into an elliptical orbit with an apogee of and a perigee of , orbiting the Earth once every 96 minutes until re-entry on May 20, 2002, northeast of Australia. Famous people on this flight included Roddenberry and Timothy Leary. Suborbital flights Short flights that cross the boundary of space without attempting to reach orbital velocity are a cost-effective method of space burial. The remains do not burn up and are either recovered or lost. Moon burial The first moon burial was that of Eugene Merle Shoemaker, a portion of whose cremated remains were flown to the Moon by NASA. Shoemaker's former colleague Carolyn Porco, a University of Arizona professor, proposed and produced the tribute of having Shoemaker's ashes launched aboard the NASA's Lunar Prospector spacecraft. Ten days after Shoemaker's passing, Porco had the go-ahead from NASA administrators and delivered the ashes to the Lunar Prospector Mission Director Scott Hubbard at the NASA Ames Research Center. The ashes were accompanied by a piece of brass foil inscribed with an image of Comet Hale–Bopp, an image of a Meteor Crater in northern Arizona, and a passage from William Shakespeare's Romeo and Juliet. The Lunar Prospector spacecraft was launched on January 6, 1998, and impacted the south polar region of the Moon on July 31, 1999. Missions to the Moon are proposed by both Elysium Space and Celestis as part of a mission by Astrobotic Technology of Pittsburgh. The first mission in January 2024 failed to reach the Moon due to a failure of the spacecraft and instead reentered Earth's atmosphere shortly after. Pet burial In 2014, Celestis launched Celestis Pets, a pet memorial spaceflight service for animal cremated remains. Prior to then, Bismarck, a Monroe, Washington police dog may have flown on a 2012 memorial spaceflight. 
When this news broke, Celestis' president said that if dog ashes were on the rocket, the person who supplied the cremated remains likely violated the contract they signed with Celestis. Dedicated spacecraft On May 17, 2017, Elysium Space announced the world's first memorial flight involving a dedicated spacecraft. The CubeSat was placed as a secondary payload on a SpaceX Falcon 9 rocket as part of a dedicated rideshare mission called SSO-A planned by Spaceflight. The launch took place from Vandenberg Air Force Base in California on December 3, 2018. The launch was successful, however, industry sources have noted that the Elysium Star spacecraft remained attached to the deployer due to a failure to procure proper licensing. Space burial businesses Space burial businesses generally refer to their service offering as "Memorial Spaceflight". Spaceflight history Orbital Moon Deep space Suborbital Notable individuals buried in space Launched into Earth orbit Gene Roddenberry (1921–1991), creator of Star Trek. Gerard K. O'Neill (1927–1992), space physicist. Krafft Ehricke (1917–1984), rocket scientist. Timothy Leary (1920–1996), American writer, psychologist, psychedelic drug advocate and Harvard University professor. James Doohan (1920– 2005), actor best known for his portrayal of Scotty in the television and film series Star Trek. Celestis also launched him into space in 2007 and in 2008. Leroy Gordon "Gordo" Cooper, Jr. (1927–2004), American astronaut. He was one of the original Mercury Seven pilots in the Project Mercury program, the first crewed space effort by the United States. Launched into outer space Nichelle Nichols (1932-2022), American actress best known for her role as Nyota Uhura in the original Star Trek was launched on the maiden flight of the Vulcan Centaur. Eugene Merle Shoemaker (1928–1997), astronomer and co-discoverer of Comet Shoemaker–Levy 9. Clyde Tombaugh (1906–1997), American astronomer and discoverer of Pluto in 1930. A small sample of Tombaugh's ashes are aboard New Horizons, the first spacecraft to attempt to pass by and photograph Pluto. This is the first sample of human cremated remains which will escape the solar system. Planned space burials Leiji Matsumoto (1938–2023), Japanese creator of numerous anime and manga series including Galaxy Express 999, Space Battleship Yamato and Space Pirate Captain Harlock, announced his intention to have a symbolic portion of his cremated remains to be launched into space on a future Elysium Space mission. Majel Barrett (1932–2008), American actress who played Christine Chapel in the original Star Trek series; wife of Gene Roddenberry. A symbolic portion of both her cremated remains and Roddenberry's cremated remains will be launched into space on a future Celestis mission. William R. Pogue (1930–2014), American astronaut. Luise Clayborn Kaish (1925–2013), American sculptor and painter. Ethics References External links The Real Elysium – Send Your Loved One Into Space for $2k, PandoDaily, 2013 Have A Space Burial As Elysium Sends Your Ashes Into Orbit, TechCrunch, 2013 Ash Scattering: Non-Traditional Ways To Be Memorialized, Huffington Post, 2012 The Ultimate One-Way Ticket, Wired Magazine, 2006 Death Is a Long, Strange Trip, Wired Magazine, 2006 Death customs Burial Funeral transport Science fiction themes 1992 introductions
Space burial
[ "Astronomy" ]
1,415
[ "Space applications", "Outer space" ]
628,485
https://en.wikipedia.org/wiki/Television%20set
A television set or television receiver (more commonly called TV, TV set, television, telly, or tele) is an electronic device for viewing and hearing television broadcasts, or as a computer monitor. It combines a tuner, display, and loudspeakers. Introduced in the late 1920s in mechanical form, television sets became a popular consumer product after World War II in electronic form, using cathode-ray tube (CRT) technology. The addition of color to broadcast television after 1953 further increased the popularity of television sets in the 1960s, and an outdoor antenna became a common feature of suburban homes. The ubiquitous television set became the display device for the first recorded media for consumer use in the 1970s, such as Betamax, VHS; these were later succeeded by DVD. It has been used as a display device since the first generation of home computers (e.g. Timex Sinclair 1000) and dedicated video game consoles (e.g., Atari) in the 1980s. By the early 2010s, flat-panel television incorporating liquid-crystal display (LCD) technology, especially LED-backlit LCD technology, largely replaced CRT and other display technologies. Modern flat-panel TVs are typically capable of high-definition display (720p, 1080i, 1080p, 4K, 8K) and can also play content from a USB device. In the late 2010s, most flat-panel TVs began offering 4K and 8K resolutions. History Early television Mechanical televisions were commercially sold from 1928 to 1934 in the United Kingdom, France, the United States, and the Soviet Union. The earliest commercially made televisions were radios with the addition of a television device consisting of a neon tube behind a mechanically spinning disk with a spiral of apertures that produced a red postage-stamp size image, enlarged to twice that size by a magnifying glass. The Baird "Televisor" (sold in 1930–1933 in the UK) is considered the first mass-produced television, selling about a thousand units. Karl Ferdinand Braun was the first to conceive the use of a CRT as a display device in 1897. The "Braun tube" became the foundation of 20th century TV. In 1926, Kenjiro Takayanagi demonstrated the first TV system that employed a cathode-ray tube (CRT) display, at Hamamatsu Industrial High School in Japan. This was the first working example of a fully electronic television receiver. His research toward creating a production model was halted by the US after Japan lost World War II. The first commercially made electronic televisions with CRTs were manufactured by Telefunken in Germany in 1934, followed by other makers in France (1936), Britain (1936), and US (1938). The cheapest model with a screen was $445 (). An estimated 19,000 electronic televisions were manufactured in Britain, and about 1,600 in Germany, before World War II. About 7,000–8,000 electronic sets were made in the U.S. before the War Production Board halted manufacture in April 1942, production resuming in August 1945. Television usage in the western world skyrocketed after World War II with the lifting of the manufacturing freeze, war-related technological advances, the drop in television prices caused by mass production, increased leisure time, and additional disposable income. While only 0.5% of U.S. households had a television in 1946, 55.7% had one in 1954, and 90% by 1962. In Britain, there were 15,000 television households in 1947, 1.4 million in 1952, and 15.1 million by 1968. Transistorised television Early electronic television sets were large and bulky, with analog circuits made of vacuum tubes. 
As an example, the RCA CT-100 color TV set used 36 vacuum tubes. Following the invention of the first working transistor at Bell Labs, Sony founder Masaru Ibuka predicted in 1952 that the transition to electronic circuits made of transistors would lead to smaller and more portable television sets. The first fully transistorized, portable solid-state television set was the Sony TV8-301, developed in 1959 and released in 1960. By the 1970s, television manufacturers utilized this push for miniaturization to create small, console-styled sets which their salesmen could easily transport, pushing demand for television sets out into rural areas. However, the first fully transistorized color TV set, the HMV Colourmaster Model 2700, was released in 1967 by the British Radio Corporation. This began the transformation of television viewership from a communal viewing experience to a solitary viewing experience. By 1960, Sony had sold over 4million portable television sets worldwide. By the late 1960s and early 1970s, color television had come into wide use. In Britain, BBC1, BBC2 and ITV were regularly broadcasting in colour by 1969. Late model CRT TVs used highly integrated electronics such as a Jungle chip which performs the functions of many transistors. LCD television Paul K. Weimer at RCA developed the thin-film transistor (TFT) in 1962, later the idea of a TFT-based liquid-crystal display (LCD) was conceived by Bernard Lechner of RCA Laboratories in 1968. Lechner, F. J. Marlowe, E. O. Nester and J. Tults demonstrated the concept in 1968 with a dynamic scattering LCD that used standard discrete MOSFETs. In 1973, T. Peter Brody, J. A. Asars and G. D. Dixon at Westinghouse Research Laboratories demonstrated the first thin-film-transistor liquid-crystal display (TFT LCD). Brody and Fang-Chen Luo demonstrated the first flat active-matrix liquid-crystal display (AM LCD) in 1974. By 1982, pocket LCD TVs based on AM LCD technology were developed in Japan. The Epson ET-10 (Epson Elf) was the first color LCD pocket TV, released in 1984. In 1988, a Sharp research team led by engineer T. Nagayasu demonstrated a full-color LCD display, which convinced the electronics industry that LCD would eventually replace the CRT as the standard television display technology. The first wall-mountable TV was introduced by Sharp Corporation in 1992. During the first decade of the 21st century, CRT "picture tube" display technology was almost entirely supplanted worldwide by flat-panel displays: first plasma displays around 1997, then LCDs. By the early 2010s, LCD TVs, which increasingly used LED-backlit LCDs, accounted for the overwhelming majority of television sets being manufactured. In 2014, Curved OLED TVs were released to the market, which were intended to offer improved image quality but this effect was only visible at a certain position away from the TV. Rollable OLED TVs were introduced in 2020, which allow the display panel of the TV to be hidden. 2023 saw the release of wireless TVs which connect to other devices solely through a transmitter box with an antenna that transmits information wirelessly to the TV. Demos of transparent TVs have also been made. There are TVs that are offered to users for free, but are paid for by showing ads to users and collecting user data. TV sizes Cambridge's Clive Sinclair created a mini TV in 1967 that could be held in the palm of a hand and was the world's smallest television at the time, though it never took off commercially because the design was complex. 
In 2019, Samsung launched the largest television to date at . The average size of TVs has grown over time. In 2024, the sales of large-screen televisions significantly increased. Between January and September, approximately 38 million televisions with a screen size of or larger were sold globally. This surge in popularity can be attributed to several factors, including technological advancements and decreasing prices. The availability of larger screen sizes at more affordable prices has driven consumer demand. For example, Samsung, a leading electronics manufacturer, introduced its first television in 2019 with a price tag of $99,000. In 2024, the company will offer four models starting at $4,000. This trend is reflected in the overall market, with the average price of a television exceeding , declining from $6,662 in 2023 to $3,113 in 2024. As technology advances, even larger screen sizes, such as , are becoming increasingly accessible to consumers. Display Television sets may employ one of several available display technologies. As of mid-2019, LCDs overwhelmingly predominate in new merchandise, but OLED displays are claiming an increasing market share as they become more affordable and DLP technology continues to offer some advantages in projection systems. The production of plasma and CRT displays has been completely discontinued. There are four primary competing TV technologies: CRT LCD (multiple variations of LCD screens are called QLED, quantum dot, LED, LCD TN, LCD IPS, LCD PLS, LCD VA, etc.) OLED Plasma CRT The cathode-ray tube (CRT) is a vacuum tube containing a so-called electron gun (or three for a color television) and a fluorescent screen where the television image is displayed. The electron gun accelerates electrons in a beam which is deflected in both the vertical and horizontal directions using varying electric or (usually, in television sets) magnetic fields, in order to scan a raster image onto the fluorescent screen. The CRT requires an evacuated glass envelope, which is rather deep (well over half of the screen size), fairly heavy, and breakable. As a matter of radiation safety, both the face (panel) and back (funnel) were made of thick lead glass in order to reduce human exposure to harmful ionizing radiation (in the form of x-rays) produced when electrons accelerated using a high voltage () strike the screen. By the early 1970s, most color TVs replaced leaded glass in the face panel with vitrified strontium oxide glass, which also blocked x-ray emissions but allowed better color visibility. This also eliminated the need for cadmium phosphors in earlier color televisions. Leaded glass, which is less expensive, continued to be used in the funnel glass, which is not visible to the consumer. In Television Sets (or most computer monitors that used CRT's), the entire screen area is scanned repetitively (completing a full frame 25 or 30 times a second) in a fixed pattern called a raster. The Image information is received in real-time from a video signal which controls the electric current supplying the electron gun, or in color televisions each of the three electron guns whose beams land on phosphors of the three primary colors (red, green, and blue). Except in the very early days of television, magnetic deflection has been used to scan the image onto the face of the CRT; this involves a varying current applied to both the vertical and horizontal deflection coils placed around the neck of the tube just beyond the electron gun(s). 
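The raster described above fixes the horizontal deflection rate: the circuitry must sweep one line for every line of every frame. A rough worked example of that arithmetic follows; the 525-line/~30 Hz and 625-line/25 Hz figures are the standard NTSC and PAL broadcast values, quoted here for illustration rather than taken from the text above.

```python
# Rough line-rate arithmetic for a CRT raster: lines per frame x frames per second.
def horizontal_scan_rate(lines_per_frame: int, frames_per_second: float) -> float:
    return lines_per_frame * frames_per_second

for name, lines, fps in [("NTSC", 525, 30000 / 1001), ("PAL", 625, 25.0)]:
    print(f"{name}: {horizontal_scan_rate(lines, fps):,.0f} horizontal sweeps per second")
# NTSC: ~15,734 sweeps/s; PAL: 15,625 sweeps/s -- the familiar ~15.7 kHz line frequency.
```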
DLP Digital light processing (DLP) is a type of video projector technology that uses a digital micromirror device. Some DLPs have a TV tuner, which makes them a type of TV display. It was originally developed in 1987 by Larry Hornbeck of Texas Instruments. While the DLP imaging device was invented by Texas Instruments, the first DLP based projector was introduced by Digital Projection Ltd in 1997. Digital Projection and Texas Instruments were both awarded Emmy Awards in 1998 for the DLP projector technology. DLP is used in a variety of display applications from traditional static displays to interactive displays and also non-traditional embedded applications including medical, security, and industrial uses. DLP technology is used in DLP front projectors (standalone projection units for classrooms and business primarily), DLP rear projection television sets, and digital signs. It is also used in about 85% of digital cinema projection, and in additive manufacturing as a power source in some SLA 3D printers to cure resins into solid 3D objects. Rear projection Rear-projection televisions (RPTVs) became very popular in the early days of television, when the ability to practically produce tubes with a large display size did not exist. In 1936, for a tube capable of being mounted horizontally in the television cabinet, would have been regarded as the largest convenient size that could be made owing to its required length, due to the low deflection angles of CRTs produced in the era, which meant that CRTs with large front sizes would have also needed to be very deep, which caused such CRTs to be installed at an angle to reduce the cabinet depth of the TV set. tubes and TV sets were available, but the tubes were so long (deep) that they were mounted vertically and viewed via a mirror in the top of the TV set cabinet which was usually mounted under a hinged lid, reducing considerably the depth of the set but making it taller. These mirror lid televisions were large pieces of furniture. As a solution, Philips introduced a television set in 1937 that relied on back projecting an image from a tube onto a screen. This required the tube to be driven very hard (at unusually high voltages and currents, see ) to produce an extremely bright image on its fluorescent screen. Further, Philips decided to use a green phosphor on the tube face as it was brighter than the white phosphors of the day. In fact these early tubes were not up to the job and by November of that year Philips decided that it was cheaper to buy the sets back than to provide replacement tubes under warranty every couple of weeks or so. Substantial improvements were very quickly made to these small tubes and a more satisfactory tube design was available the following year helped by Philips's decision to use a smaller screen size of . In 1950 a more efficient tube with vastly improved technology and more efficient white phosphor, along with smaller and less demanding screen sizes, was able to provide an acceptable image, though the life of the tubes was still shorter than contemporary direct view tubes. As CRT technology improved during the 1950s, producing larger and larger screen sizes and later on, (more or less) rectangular tubes, the rear projection system was obsolete before the end of the decade. However, in the early to mid 2000s RPTV systems made a comeback as a cheaper alternative to contemporary LCD and Plasma TVs. 
They were larger and lighter than contemporary CRT TVs and had a flat screen just like LCD and Plasma, but unlike LCD and Plasma, RPTVs were often dimmer, had lower contrast ratios and viewing angles, image quality was affected by room lighting and suffered when compared with direct view CRTs, and were still bulky like CRTs. These TVs worked by having a DLP, LCoS or LCD projector at the bottom of the unit, and using a mirror to project the image onto a screen. The screen may be a Fresnel lens to increase brightness at the cost of viewing angles. Some early units used CRT projectors and were heavy, weighing up to 500 pounds. Most RPTVs used Ultra-high-performance lamps as their light source, which required periodic replacement partly because they dimmed with use but mainly because the operating bulb glass became weaker with ageing to the point where the bulb could eventually shatter often damaging the projection system. Those that used CRTs and lasers did not require replacement. Plasma A plasma display panel (PDP) is a type of flat-panel display common to large TV displays or larger. They are called "plasma" displays because the technology utilizes small cells containing electrically charged ionized gases, or what are in essence chambers more commonly known as fluorescent lamps. Around 2014, television manufacturers were largely phasing out plasma TVs, because a plasma TV became higher cost and more difficult to make in 4k compared to LED or LCD. In 1997, Philips introduced at CES and CeBIT the first large () commercially available flat-panel TV, using Fujitsu plasma displays. LCD Liquid-crystal-display televisions (LCD TV) are television sets that use liquid-crystal displays to produce images. LCD televisions are much thinner and lighter than CRTs of similar display size and are available in much larger sizes (e.g., diagonal). When manufacturing costs fell, this combination of features made LCDs practical for television receivers. In 2007, LCD televisions surpassed sales of CRT-based televisions globally for the first time, and their sales figures relative to other technologies accelerated. LCD TVs quickly displaced the only major competitors in the large-screen market, the plasma display panel and rear-projection television. In the mid-2010s LCDs became, by far, the most widely produced and sold television display type. LCDs also have disadvantages. Other technologies address these weaknesses, including OLEDs, FED and SED. LCDs can have quantum dots and mini-LED backlights to enhance image quality. OLED An OLED (organic light-emitting diode) is a light-emitting diode (LED) in which the emissive electroluminescent layer is a film of organic compound which emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes. Generally, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens. It is also used for computer monitors, portable systems such as mobile phones, handheld game consoles and PDAs. There are two main families of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell or LEC, which has a slightly different mode of operation. OLED displays can use either passive-matrix (PMOLED) or active-matrix addressing schemes. 
Active-matrix OLEDs (AMOLED) require a thin-film transistor backplane to switch each individual pixel on or off, but allow for higher resolution and larger display sizes. An OLED display works without a backlight. Thus, it can display deep black levels and can be thinner and lighter than a liquid crystal display (LCD). In low ambient light conditions such as a dark room, an OLED screen can achieve a higher contrast ratio than an LCD, whether the LCD uses cold cathode fluorescent lamps or LED backlight. Television types While most televisions are designed for consumers in the household, there are several markets that demand variations including hospitality, healthcare, and other commercial settings. Hospitality television Televisions made for the hospitality industry are part of an establishment's internal television system designed to be used by its guests. Therefore, settings menus are hidden and locked by a password. Other common software features include volume limiting, customizable power-on splash image, and channel hiding. These TVs are typically controlled by a set-back box using one of the data ports on the rear of the TV. The set back box may offer channel lists, pay per view, video on demand, and casting from a smart phone or tablet. Hospitality spaces are insecure with respect to content piracy, so many content providers require the use of Digital rights management. Hospitality TVs decrypt the industry standard Pro:Idiom when no set back box is used. While H.264 is not part of the ATSC 1.0 standard in North America, TV content in hospitality can include H.264 encoded video, so hospitality TVs include H.264 decoding. Managing dozens or hundreds of TVs can be time consuming, so hospitality TVs can be cloned by storing settings on a USB drive and restoring those settings quickly. Additionally, server-based and cloud-based management systems can monitor and configure an entire fleet of TVs. Healthcare television Healthcare televisions include the provisions of hospitality TVs with additional features for usability and safety. They are designed for use in a healthcare setting in which the user may have limited mobility and audio/visual impairment. A key feature is the pillow speaker connection. Pillow speakers combine nurse call functions, TV remote control and a speaker for audio. In multiple occupancy rooms where several TVs are used in close proximity, the televisions can be programmed to respond to a remote control with unique codes so that each remote only controls one TV. Smaller TVs, also called bedside infotainment systems, have a full function keypad below the screen. This allows direct interaction without the use of a pillow speaker or remote. These TVs typically have antimicrobial surfaces and can withstand daily cleaning using disinfectants. In the US, the UL safety standard for televisions, UL 62368-1, contains a special section (annex DVB) which outlines additional safety requirements for televisions used in healthcare. Outdoor television Outdoor television sets are designed for outdoor use and are usually found in the outdoor sections of bars, sports field, or other community facilities. Most outdoor televisions use high-definition television technology. Their body is more robust. The screens are designed to remain clearly visible even in sunny outdoor lighting. The screens also have anti-reflective coatings to prevent glare. They are weather-resistant and often also have anti-theft brackets. 
Outdoor TV models can also be connected with BD players and PVRs for greater functionality. Replacing In the United States, the average consumer replaces their television every 6.9 years, but research suggests that due to advanced software and apps, the replacement cycle may be shortening. Recycling and disposal Due to recent changes in electronic waste legislation, economical and environmentally friendly television disposal has been made increasingly more available in the form of television recycling. Challenges with recycling television sets include proper HAZMAT disposal, landfill pollution, and illegal international trade. Major manufacturers Global 2016 years statistics for LCD TV. See also 3D television Active antenna Color killer Color television Digital video recorder Digital television transition Handheld television Home cinema Large-screen television technology Mirror TV Multiplier Smart TV TV aerial plug Viera Cast References External links Set Set
Television set
[ "Technology" ]
4,508
[ "Information and communications technology", "Television technology" ]
628,515
https://en.wikipedia.org/wiki/Sergei%20Novikov%20%28mathematician%29
Sergei Petrovich Novikov (Russian: Серге́й Петро́вич Но́виков ; 20 March 19386 June 2024) was a Soviet and Russian mathematician, noted for work in both algebraic topology and soliton theory. He became the first Soviet mathematician to receive the Fields Medal in 1970. Biography Novikov was born on 20 March 1938 in Gorky, Soviet Union (now Nizhny Novgorod, Russia). He grew up in a family of talented mathematicians. His father was Pyotr Sergeyevich Novikov, who gave a negative solution to the word problem for groups. His mother, Lyudmila Vsevolodovna Keldysh, and maternal uncle, Mstislav Vsevolodovich Keldysh, were also important mathematicians. Novikov entered Moscow State University in 1955 and graduated in 1960. In 1964, he received the Moscow Mathematical Society Award for young mathematicians and defended a dissertation for the Candidate of Science in Physics and Mathematics degree (equivalent to the PhD) under Mikhail Postnikov at Moscow State University. In 1965, he defended a dissertation for the Doctor of Science in Physics and Mathematics degree there. Novikov died on 6 June 2024, at the age of 86. Career In 1966, Novikov became a corresponding member of the Academy of Sciences of the Soviet Union. In 1971, he became head of the Mathematics Division of the Landau Institute for Theoretical Physics of the USSR Academy of Sciences. In 1983, Novikov was also appointed the head of the Department of Higher Geometry and Topology at Moscow State University. He became President of the Moscow Mathematical Society in 1985 and remained in that role until 1996, when he moved to the University of Maryland College of Computer, Mathematical, and Natural Sciences at the University of Maryland, College Park. He continued to maintain research appointments at the Landau Institute for Theoretical Physics, Moscow State University, and the Department of Geometry and Topology at the Steklov Mathematical Institute after his move to Maryland. Research Novikov's early work was in cobordism theory, in relative isolation. Among other advances he showed how the Adams spectral sequence, a powerful tool for proceeding from homology theory to the calculation of homotopy groups, could be adapted to the new (at that time) cohomology theory typified by cobordism and K-theory. This required the development of the idea of cohomology operations in the general setting, since the basis of the spectral sequence is the initial data of Ext functors taken with respect to a ring of such operations, generalising the Steenrod algebra. The resulting Adams–Novikov spectral sequence is now a basic tool in stable homotopy theory. Novikov also carried out important research in geometric topology, being one of the pioneers with William Browder, Dennis Sullivan, and C. T. C. Wall of the surgery theory method for classifying high-dimensional manifolds. He proved the topological invariance of the rational Pontryagin classes, and posed the Novikov conjecture. From about 1971, he moved to work in the field of isospectral flows, with connections to the theory of theta functions. Novikov's conjecture about the Riemann–Schottky problem (characterizing principally polarized abelian varieties that are the Jacobian of some algebraic curve) stated, essentially, that this was the case if and only if the corresponding theta function provided a solution to the Kadomtsev–Petviashvili equation of soliton theory. 
This was proved by Takahiro Shiota (1986), following earlier work by Enrico Arbarello and Corrado de Concini (1984), and by Motohico Mulase (1984). Awards and honours In 1967, Novikov received the Lenin Prize. In 1970, Novikov became the first Soviet mathematician to be awarded the Fields Medal. He was not allowed to travel to the International Congress of Mathematicians in Nice to accept his medal by the Soviet government due to his support for people who had been arrested and sent to mental institutions for speaking out against the regime, but he received it in 1971 when the International Mathematical Union met in Moscow. In 2005, he was awarded the Wolf Prize for his contributions to algebraic topology, differential topology and to mathematical physics. He is one of just eleven mathematicians who received both the Fields Medal and the Wolf Prize. In 2020, he received the Lomonosov Gold Medal of the Russian Academy of Sciences. In 1981, he was elected a full member of the USSR Academy of Sciences (Russian Academy of Sciences since 1991). He was elected to the London Mathematical Society (honorary member, 1987), Serbian Academy of Sciences and Arts (honorary member, 1988), Accademia dei Lincei (foreign member, 1991), Academia Europaea (member, 1993), National Academy of Sciences (foreign associate, 1994), Pontifical Academy of Sciences (member, 1996), (fellow, 2003), and Montenegrin Academy of Sciences and Arts (honorary member, 2011). He received honorary doctorates from the University of Athens (1988) and University of Tel Aviv (1999). Writings with Dubrovin and Fomenko: Modern geometry- methods and applications, Vol.1-3, Springer, Graduate Texts in Mathematics (originally 1984, 1988, 1990, V.1 The geometry of surfaces and transformation groups, V.2 The geometry and topology of manifolds, V.3 Introduction to homology theory) Topics in Topology and mathematical physics, AMS (American Mathematical Society) 1995 Integrable systems – selected papers, Cambridge University Press 1981 (London Math. Society Lecture notes) with V. I. Arnold as editor and co-author: Dynamical systems, 1994, Encyclopedia of mathematical sciences, Springer Topology I: general survey, V. 12 of Topology Series of Encyclopedia of mathematical sciences, Springer 1996; 2013 edition Solitons and geometry, Cambridge 1994 as editor, with Buchstaber: Solitons, geometry and topology: on the crossroads, AMS, 1997 with Dubrovin and Krichever: Topological and Algebraic Geometry Methods in contemporary mathematical physics V.2, Cambridge , My generation in mathematics, Russian Mathematical Surveys V.49, 1994, p. 
1 See also Novikov–Shubin invariant Novikov ring Notes References External links Homepage at the University of Maryland, College Park Homepage at the Steklov Mathematical Institute Biography (in Russian) at the Moscow State University 1938 births 2024 deaths Fields Medalists 20th-century Russian mathematicians 21st-century Russian mathematicians Foreign associates of the National Academy of Sciences Full Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Members of the Serbian Academy of Sciences and Arts Foreign members of the Serbian Academy of Sciences and Arts Moscow State University alumni Academic staff of Moscow State University Soviet mathematicians Topologists University of Maryland, College Park faculty Wolf Prize in Mathematics laureates Recipients of the Lenin Prize Mathematical physicists Russian scientists Members of Academia Europaea Members of the United States National Academy of Sciences Members of the Lincean Academy Members of the Pontifical Academy of Sciences Members of the Montenegrin Academy of Sciences and Arts
Sergei Novikov (mathematician)
[ "Mathematics" ]
1,453
[ "Topologists", "Topology" ]
628,536
https://en.wikipedia.org/wiki/Bandwidth-limited%20pulse
A bandwidth-limited pulse (also known as Fourier-transform-limited pulse, or more commonly, transform-limited pulse) is a pulse of a wave that has the minimum possible duration for a given spectral bandwidth. Bandwidth-limited pulses have a constant phase across all frequencies making up the pulse. Optical pulses of this type can be generated by mode-locked lasers. Any waveform can be disassembled into its spectral components by Fourier analysis or Fourier transformation. The length of a pulse is thereby determined by its spectral components: not just their relative intensities, but also their relative positions (spectral phase). For different pulse shapes, the minimum duration-bandwidth product is different. The duration-bandwidth product is minimal for zero phase-modulation. For example, sech² (squared hyperbolic secant) pulses have a minimum duration-bandwidth product of 0.315 while Gaussian pulses have a minimum value of 0.441. A bandwidth-limited pulse can only be kept together if the dispersion of the medium the wave is travelling through is zero; otherwise dispersion management is needed to revert the effects of unwanted spectral phase changes. For example, when an ultrashort pulse passes through a block of glass, the glass medium broadens the pulse due to group velocity dispersion. Keeping pulses bandwidth-limited is necessary to compress information in time or to achieve high field densities, as with ultrashort pulses in mode-locked lasers. Further reading Optics Nonlinear optics Laser science
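The quoted time-bandwidth products translate directly into a minimum pulse duration once the spectral width is known. A small sketch of that arithmetic follows; the 800 nm centre wavelength and 40 nm bandwidth are example values chosen for illustration, not figures from the text.

```python
# Minimum (transform-limited) pulse duration from optical bandwidth.
# delta_t_min = K / delta_nu, with K = 0.441 for Gaussian pulses and 0.315 for sech^2 pulses.
C = 299_792_458.0  # speed of light, m/s

def bandwidth_hz(center_wavelength_m: float, bandwidth_m: float) -> float:
    """Convert a spectral width given in wavelength to a width in optical frequency."""
    return C * bandwidth_m / center_wavelength_m ** 2

def transform_limited_duration(center_wavelength_m: float, bandwidth_m: float, tbp: float = 0.441) -> float:
    return tbp / bandwidth_hz(center_wavelength_m, bandwidth_m)

# Example: a Gaussian pulse centred at 800 nm with 40 nm of bandwidth
dt = transform_limited_duration(800e-9, 40e-9)            # ~2.35e-14 s
print(f"Transform-limited duration: {dt * 1e15:.1f} fs")  # ~23.5 fs
```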
Bandwidth-limited pulse
[ "Physics", "Chemistry" ]
303
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
628,552
https://en.wikipedia.org/wiki/British%20Construction%20Industry%20Awards
The British Construction Industry Awards (BCI Awards or BCIA) were launched by New Civil Engineer magazine and Thomas Telford Ltd in 1998, at the time both owned by the Institution of Civil Engineers. The awards seek to recognise outstanding achievement in the construction of buildings, taking account of a wide range of factors including architectural and engineering design, but also consideration of the construction process, delivery to time and budget, and client satisfaction. In 2012, Network Rail chief executive Sir David Higgins was the chair of the British Construction Industry Awards judging panel which celebrated its 25th anniversary of rewarding excellence in UK construction delivery. In June 2017 owners of New Civil Engineer, Ascential, sold the title to Metropolis International who now operate the BCIAs in association with the Institution of Civil Engineers. The awards are typically held in October each year. In 2020, the British Construction Industry Awards will be held on the evening of Wednesday 28 October at the JW Marriott Grosvenor House Hotel on Park Lane in London. Award winners 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2005 2004 2003 2002 See also List of engineering awards References External links BCI Awards New Civil Engineer Institution of Civil Engineers 1998 establishments in the United Kingdom Architecture awards Architecture in the United Kingdom Awards established in 1998 Construction Industry British science and technology awards Civil engineering awards Construction and civil engineering companies of the United Kingdom Institution of Civil Engineers
British Construction Industry Awards
[ "Engineering" ]
274
[ "Civil engineering awards", "Civil engineering", "Structural engineering" ]
628,583
https://en.wikipedia.org/wiki/Optical%20coherence%20tomography
Optical coherence tomography (OCT) is an imaging technique that uses interferometry with short-coherence-length light to obtain micrometer-level depth resolution and uses transverse scanning of the light beam to form two- and three-dimensional images from light reflected from within biological tissue or other scattering media. Short-coherence-length light can be obtained using a superluminescent diode (SLD) with a broad spectral bandwidth or a broadly tunable laser with narrow linewidth. The first demonstration of OCT imaging (in vitro) was published by a team from MIT and Harvard Medical School in a 1991 article in the journal Science. The article introduced the term "OCT" to credit its derivation from optical coherence-domain reflectometry, in which the axial resolution is based on temporal coherence. The first demonstrations of in vivo OCT imaging quickly followed. The first US patents on OCT by the MIT/Harvard group described a time-domain OCT (TD-OCT) system. These patents were licensed by Zeiss and formed the basis of the first generations of OCT products until 2006. Tanno et al. obtained a patent on optical heterodyne tomography (similar to TD-OCT) in Japan in the same year. In the decade preceding the invention of OCT, interferometry with short-coherence-length light had been investigated for a variety of applications. The potential to use interferometry for imaging was proposed, and measurement of retinal elevation profile and thickness had been demonstrated. The initial commercial clinical OCT systems were based on point-scanning TD-OCT technology, which primarily produced cross-sectional images due to the speed limitation (tens to thousands of axial scans per second). Fourier-domain OCT became available clinically 2006, enabling much greater image acquisition rate (tens of thousands to hundreds of thousands axial scans per second) without sacrificing signal strength. The higher speed allowed for three-dimensional imaging, which can be visualized in both en face and cross-sectional views. Novel contrasts such as angiography, elastography, and optoretinography also became possible by detecting signal change over time. Over the past three decades, the speed of commercial clinical OCT systems has increased more than 1000-fold, doubling every three years and rivaling Moore's law of computer chip performance. Development of parallel image acquisition approaches such as line-field and full-field technology may allow the performance improvement trend to continue. OCT is most widely used in ophthalmology, in which it has transformed the diagnosis and monitoring of retinal diseases, optic nerve diseases, and corneal diseases. It has greatly improved the management of the top three causes of blindness – macular degeneration, diabetic retinopathy, and glaucoma – thereby preventing vision loss in many patients. By 2016 OCT was estimated to be used in more than 30 million imaging procedures per year worldwide. OCT angioscopy is used in the intravascular evaluation of coronary artery plaques and to guide stent placement. Beyond ophthalmology and cardiology, applications are also developing in other medical specialties such as dermatology, gastroenterology (endoscopy), neurology, oncology, and dentistry. 
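The speed-growth comparison in the paragraph above is simple compounding arithmetic, which a one-line check makes explicit (three decades and a three-year doubling period are the figures quoted there):

```python
# "Doubling every three years" over roughly three decades gives about a 1000-fold speed-up.
years, doubling_period = 30, 3
print(2 ** (years / doubling_period))  # 1024.0 -- consistent with "more than 1000-fold"
```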
Introduction Interferometric reflectometry of biological tissue, especially of the human eye, using short-coherence-length light (also referred to as partially coherent, low-coherence, broadband, broad-spectrum, or white light) has been investigated in parallel by multiple groups worldwide since the 1980s. Borrowing ideas from ultrasound imaging and merging time-of-flight detection with optical interferometry to detect optical delays in the pico- and femtosecond range, as known from the autocorrelator of the 1960s, the technique's development was and is tightly associated with the availability of novel electronic, mechanical and photonic capabilities. Stemming from single-lateral-point low-coherence interferometry, the addition of a wide range of technologies enabled key milestones in this computational imaging technique. High-speed axial and lateral scanners, ultra-broad-spectrum or ultra-fast spectrally tunable lasers and other high-brightness radiation sources, and increasingly sensitive detectors, such as high-resolution, high-speed cameras and fast A/D converters, which picked up from and drove ideas in the rapidly developing photonics field, together with the increasing availability of computing power, were essential for its birth and success. In 1991, David Huang, then a student in James Fujimoto's laboratory at the Massachusetts Institute of Technology, working with Eric Swanson at the MIT Lincoln Laboratory and colleagues at the Harvard Medical School, successfully demonstrated imaging of biological tissue and called the new imaging modality "optical coherence tomography". Since then, OCT with axial resolution at the micrometer scale and below, and with cross-sectional imaging capability, has become a prominent biomedical imaging technique that has continually improved in technical performance and range of applications. The improvement in image acquisition rate is particularly spectacular, from the original 0.8 Hz axial scan repetition rate to current commercial clinical OCT systems operating at several hundred kHz and laboratory prototypes at multiple MHz. The range of applications has expanded from ophthalmology to cardiology and other medical specialties. For their roles in the invention of OCT, Fujimoto, Huang, and Swanson received the 2023 Lasker-DeBakey Clinical Medical Research Award and the National Medal of Technology and Innovation. These developments have been reviewed in articles written for the general scientific and medical readership. It is particularly suited to ophthalmic applications and other tissue imaging requiring micrometer resolution and millimeter penetration depth. OCT has also been used for various art conservation projects, where it is used to analyze different layers in a painting. OCT has interesting advantages over other medical imaging systems. Medical ultrasonography, magnetic resonance imaging (MRI), confocal microscopy, and OCT are differently suited to morphological tissue imaging: while the first two have whole-body but low-resolution imaging capability (typically a fraction of a millimeter), the third can provide images with resolutions well below 1 micrometer (i.e. sub-cellular), between 0 and 100 micrometers in depth, and the fourth can probe as deep as 500 micrometers, but with a lower (i.e. architectural) resolution (around 10 micrometers laterally and a few micrometers in depth in ophthalmology, for instance, and 20 micrometers laterally in endoscopy). OCT is based on low-coherence interferometry. 
In conventional interferometry with long coherence length (i.e., laser interferometry), interference of light occurs over a distance of meters. In OCT, this interference is shortened to a distance of micrometers, owing to the use of broad-bandwidth light sources (i.e., sources that emit light over a broad range of frequencies). Light with broad bandwidths can be generated by using superluminescent diodes or lasers with extremely short pulses (femtosecond lasers). White light is an example of a broadband source with lower power. Light in an OCT system is broken into two arms – a sample arm (containing the item of interest) and a reference arm (usually a mirror). The combination of reflected light from the sample arm and reference light from the reference arm gives rise to an interference pattern, but only if light from both arms has traveled the "same" optical distance ("same" meaning a difference of less than a coherence length). By scanning the mirror in the reference arm, a reflectivity profile of the sample can be obtained (this is time domain OCT). Areas of the sample that reflect back a lot of light will create greater interference than areas that don't. Any light that is outside the short coherence length will not interfere. This reflectivity profile, called an A-scan, contains information about the spatial dimensions and location of structures within the item of interest. A cross-sectional tomogram (B-scan) may be achieved by laterally combining a series of these axial depth scans (A-scans). En face imaging at an acquired depth is possible depending on the imaging engine used. Layperson's explanation Optical coherence tomography (OCT) is a technique for obtaining sub-surface images of translucent or opaque materials at a resolution equivalent to a low-power microscope. It is effectively "optical ultrasound", imaging reflections from within tissue to provide cross-sectional images. OCT has attracted interest among the medical community because it provides tissue morphology imagery at much higher resolution (less than 10 μm axially and less than 20 μm laterally) than other imaging modalities such as MRI or ultrasound. The key benefits of OCT are: live sub-surface images at near-microscopic resolution; instant, direct imaging of tissue morphology; no preparation of the sample or subject and no contact; and no ionizing radiation. OCT delivers high resolution because it is based on light, rather than sound or radio frequency. An optical beam is directed at the tissue, and the small portion of this light that reflects directly back from sub-surface features is collected. Note that most light scatters off at large angles. In conventional imaging, this diffusely scattered light contributes background that obscures an image. However, in OCT, a technique called interferometry is used to record the optical path length of received photons, allowing rejection of most photons that scatter multiple times before detection. Thus OCT can build up clear 3D images of thick samples by rejecting background signal while collecting light directly reflected from surfaces of interest. Within the range of noninvasive three-dimensional imaging techniques that have been introduced to the medical research community, OCT as an echo technique is similar to ultrasound imaging. Other medical imaging techniques such as computerized axial tomography, magnetic resonance imaging, or positron emission tomography do not use the echo-location principle. 
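The A-scan acquisition described above can be mimicked numerically: sweep the reference path length and record the detector signal, which only shows fringes when the reference delay matches a reflector to within the coherence length. The following is a minimal sketch with two hypothetical reflectors and arbitrary illustrative parameter values, not a model of any particular instrument; it assumes NumPy is available.

```python
# Simulated time-domain OCT A-scan: interference fringes appear only where the
# reference path length matches a sample reflector to within the coherence length.
import numpy as np

wavelength = 0.8          # centre wavelength, micrometres (illustrative value)
coherence_length = 5.0    # source coherence length, micrometres (illustrative value)
reflectors = [(50.0, 1.0), (120.0, 0.4)]    # (depth in um, relative reflectivity)

z_ref = np.linspace(0.0, 200.0, 20001)      # reference mirror positions (depth scan)
signal = np.zeros_like(z_ref)
for depth, reflectivity in reflectors:
    envelope = np.exp(-((z_ref - depth) / coherence_length) ** 2)  # coherence gate
    fringes = np.cos(4.0 * np.pi * (z_ref - depth) / wavelength)   # round-trip phase
    signal += reflectivity * envelope * fringes

a_scan = np.abs(signal)                      # crude envelope detection
print(f"strongest reflector near {z_ref[np.argmax(a_scan)]:.1f} um")  # ~50.0 um
```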
The technique is limited to imaging 1 to 2 mm below the surface in biological tissue, because at greater depths the proportion of light that escapes without scattering is too small to be detected. No special preparation of a biological specimen is required, and images can be obtained "non-contact" or through a transparent window or membrane. The laser output from the instruments used is low (eye-safe near-infrared or visible light) and no damage to the sample is therefore likely. Theory The principle of OCT is white light, or low coherence, interferometry. The optical setup typically consists of an interferometer (Fig. 1, typically Michelson type) with a low coherence, broad bandwidth light source. Light is split into and recombined from reference and sample arms, respectively. Time domain In time domain OCT the path length of the reference arm is varied in time (the reference mirror is translated longitudinally). A property of low coherence interferometry is that interference, i.e. the series of dark and bright fringes, is only achieved when the path difference lies within the coherence length of the light source. This interference is called autocorrelation in a symmetric interferometer (both arms have the same reflectivity), or cross-correlation in the common case. The envelope of this modulation changes as path length difference is varied, where the peak of the envelope corresponds to path length matching. The interference of two partially coherent light beams can be expressed in terms of the source intensity, $I_S$, as $I = k_1 I_S + k_2 I_S + 2\sqrt{k_1 k_2}\, I_S \,\mathrm{Re}[\gamma(\tau)]$ (1), where $k_1 + k_2 < 1$ represents the interferometer beam splitting ratio, and $\gamma(\tau)$ is called the complex degree of coherence, i.e. the interference envelope and carrier dependent on the reference arm scan or time delay $\tau$, and whose recovery is of interest in OCT. Due to the coherence gating effect of OCT the complex degree of coherence is represented as a Gaussian function expressed as $\gamma(\tau) = \exp\!\left[-\left(\frac{\pi \Delta\nu \tau}{2\sqrt{\ln 2}}\right)^{2}\right]\exp(-j 2\pi \nu_0 \tau)$ (2), where $\Delta\nu$ represents the spectral width of the source in the optical frequency domain, and $\nu_0$ is the centre optical frequency of the source. In equation (2), the Gaussian envelope is amplitude modulated by an optical carrier. The peak of this envelope represents the location of the microstructure of the sample under test, with an amplitude dependent on the reflectivity of the surface. The optical carrier is due to the Doppler effect resulting from scanning one arm of the interferometer, and the frequency of this modulation is controlled by the speed of scanning. Therefore, translating one arm of the interferometer serves two functions: depth scanning and a Doppler-shifted optical carrier are both accomplished by pathlength variation. In OCT, the Doppler-shifted optical carrier has a frequency expressed as $f_{\mathrm{Dopp}} = \frac{2 \nu_0 v_s}{c}$ (3), where $\nu_0$ is the central optical frequency of the source, $v_s$ is the scanning velocity of the pathlength variation, and $c$ is the speed of light. The axial and lateral resolutions of OCT are decoupled from one another; the former being equivalent to the coherence length of the light source and the latter being a function of the optics. The axial resolution of OCT is defined as $l_c = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda} \approx 0.44\,\frac{\lambda_0^{2}}{\Delta\lambda}$ (4), where $\lambda_0$ and $\Delta\lambda$ are respectively the central wavelength and the spectral width of the light source. Fourier domain Fourier-domain (or frequency-domain) OCT (FD-OCT) has speed and signal-to-noise ratio (SNR) advantages over time-domain OCT (TD-OCT) and has become the standard in the industry since 2006.
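A minimal numerical sketch of the Fourier-domain principle follows; it is illustrative only, and the reflector depth, spectral band and sample count are assumed values rather than figures from the article. A spectral interferogram recorded evenly in wavelength is resampled onto a uniform wavenumber grid and Fourier transformed, and the magnitude of the transform peaks at the reflector depth. The resampling step anticipates the spectrometer-nonlinearity issue discussed in the spectral-domain paragraphs that follow.

```python
# Illustrative sketch only: Fourier-domain OCT A-scan reconstruction.
# Assumed values: single reflector at 300 um, detector sampling 800-880 nm over 2048 pixels.
import numpy as np

z0 = 300e-6                                         # reflector depth (m), assumed
wavelengths = np.linspace(800e-9, 880e-9, 2048)     # detector pixels, even in wavelength
k = 2 * np.pi / wavelengths                         # wavenumbers (unevenly spaced in k)

# Spectral interferogram: DC term plus a cosine whose fringe frequency encodes depth.
spectrum = 1.0 + 0.5 * np.cos(2 * k * z0)

# Resample onto a uniform k grid before the FFT (np.interp needs increasing abscissae,
# and k decreases with wavelength, hence the reversal).
k_uniform = np.linspace(k.min(), k.max(), k.size)
spectrum_uniform = np.interp(k_uniform, k[::-1], spectrum[::-1])

a_scan = np.abs(np.fft.rfft(spectrum_uniform - spectrum_uniform.mean()))
dk = k_uniform[1] - k_uniform[0]
depth_axis = np.arange(a_scan.size) * np.pi / (k.size * dk)   # FFT bin-to-depth conversion
print("reconstructed reflector depth: %.1f um (true: %.1f um)"
      % (depth_axis[np.argmax(a_scan)] * 1e6, z0 * 1e6))
```

Because the whole depth profile is encoded in a single spectrum, no mechanical depth scan is needed, which is the origin of the speed advantage of FD-OCT over TD-OCT.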
The idea of using frequency modulation and coherent detection to obtain ranging information was already demonstrated in optical frequency domain reflectometry and laser radar in the 1980s, though the distance resolution and range were much larger than in OCT. There are two types of FD-OCT – swept-source OCT (SS-OCT) and spectral-domain OCT (SD-OCT) – both of which acquire spectral interferograms which are then Fourier transformed to obtain an axial scan of reflectance amplitude versus depth. In SS-OCT, the spectral interferogram is acquired sequentially by tuning the wavelength of a laser light source. SD-OCT acquires the spectral interferogram simultaneously with a spectrometer. An implementation of SS-OCT was described by the MIT group as early as 1994. A group based at the University of Vienna described measurement of intraocular distance using both tunable-laser and spectrometer-based interferometry as early as 1995. SD-OCT imaging was first demonstrated both in vitro and in vivo by a collaboration between the Vienna group and a group based at the Nicolaus Copernicus University in a series of articles between 2000 and 2002. The SNR advantage of FD-OCT over TD-OCT was first demonstrated in eye imaging and further analyzed by multiple groups of researchers in 2003. Spectral-domain OCT Spectral-domain OCT (spatially encoded frequency domain OCT) extracts spectral information by distributing different optical frequencies onto a detector stripe (line-array CCD or CMOS) via a dispersive element (see Fig. 4). In this way the information of a full depth scan can be acquired within a single exposure. However, the large signal-to-noise advantage of FD-OCT is reduced due to the lower dynamic range of stripe detectors with respect to single photosensitive diodes, resulting in an SNR advantage of ~10 dB at much higher speeds. This is less of an issue when working at 1300 nm, however, since dynamic range is not a serious limitation in this wavelength range. The drawbacks of this technology are a strong fall-off of the SNR, proportional to the distance from the zero delay, and a sinc-type reduction of the depth-dependent sensitivity because of the limited detection linewidth (one pixel detects a quasi-rectangular portion of the optical frequency range instead of a single frequency; the Fourier transform leads to the sinc(z) behavior). Additionally, the dispersive elements in the spectroscopic detector usually do not distribute the light equally spaced in frequency on the detector, but mostly have an inverse dependence. Therefore, the signal has to be resampled before processing; the resampling cannot compensate for the difference in local (pixelwise) bandwidth, which results in further reduction of signal quality. However, the fall-off is not a serious problem with the development of new-generation CCD or photodiode arrays with a larger number of pixels. Synthetic array heterodyne detection offers another approach to this problem without the need for high dispersion. Swept-source OCT Swept-source OCT (time-encoded frequency domain OCT) tries to combine some of the advantages of standard TD and spectral-domain OCT. Here the spectral components are not encoded by spatial separation, but they are encoded in time. The spectrum is either filtered or generated in single successive frequency steps and reconstructed before Fourier transformation. By using a frequency-scanning light source (i.e. a frequency-scanning laser), the optical setup (see Fig.
3) becomes simpler than spectral-domain OCT, but the problem of scanning is essentially translated from the TD-OCT reference arm into the swept-source OCT light source. Here the advantage lies in the proven high-SNR detection technology, while swept laser sources achieve very small instantaneous bandwidths (linewidths) at very high frequencies (20–200 kHz). Drawbacks are the nonlinearities in the wavelength (especially at high scanning frequencies), the broadening of the linewidth at high frequencies and a high sensitivity to movements of the scanning geometry or the sample (below the range of nanometers within successive frequency steps). Scanning schemes Focusing the light beam to a point on the surface of the sample under test and recombining the reflected light with the reference will yield an interferogram with sample information corresponding to a single A-scan (Z axis only). Scanning of the sample can be accomplished by either scanning the light on the sample or by moving the sample under test. A linear scan will yield a two-dimensional data set corresponding to a cross-sectional image (X-Z axes scan), whereas an area scan achieves a three-dimensional data set corresponding to a volumetric image (X-Y-Z axes scan). Single point Systems based on single-point, confocal, or flying-spot time-domain OCT must scan the sample in two lateral dimensions and reconstruct a three-dimensional image using depth information obtained by coherence-gating through an axially scanning reference arm (Fig. 2). Two-dimensional lateral scanning has been electromechanically implemented by moving the sample using a translation stage, and by using a novel micro-electro-mechanical system (MEMS) scanner. Line-field OCT Line-field confocal optical coherence tomography (LC-OCT) is an imaging technique based on the principle of time-domain OCT with line illumination using a broadband laser and line detection using a line-scan camera. LC-OCT produces B-scans in real time from multiple A-scans acquired in parallel. En face as well as three-dimensional images can also be obtained by scanning the illumination line laterally. The focus is continuously adjusted during the scan of the sample depth, using a high numerical aperture (NA) microscope objective to image with high lateral resolution. By using a supercontinuum laser as a light source, a quasi-isotropic spatial resolution of ~1 μm is achieved at a central wavelength of ~800 nm. On the other hand, line illumination and detection, combined with the use of a high-NA microscope objective, produce a confocal gate that prevents most scattered light that does not contribute to the signal from being detected by the camera. This confocal gate, which is absent in the full-field OCT technique, gives LC-OCT an advantage in terms of detection sensitivity and penetration in highly scattering media such as skin tissues. So far this technique has been used mainly for skin imaging in the fields of dermatology and cosmetology. Full-field OCT An imaging approach to temporal OCT was developed by Claude Boccara's team in 1998, acquiring the images without beam scanning. In this technique called full-field OCT (FF-OCT), unlike other OCT techniques that acquire cross-sections of the sample, the images are here "en face", i.e. like images of classical microscopy: orthogonal to the light beam of illumination.
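Full-field OCT typically recovers this en-face tomographic signal by phase-shifting interferometry, as described in the next paragraph. The following minimal sketch uses an assumed, generic four-step algorithm with synthetic data; it is not the reconstruction of any particular instrument, but shows how a per-pixel fringe amplitude can be extracted from four camera frames taken at phase shifts of 0, π/2, π and 3π/2.

```python
# Illustrative sketch only: four-step phase-shifting reconstruction as commonly used
# in full-field OCT. The data below are synthetic; real systems differ in detail.
import numpy as np

def enface_amplitude(i0, i1, i2, i3):
    """Per-pixel fringe amplitude from four frames phase-shifted by 0, pi/2, pi, 3*pi/2."""
    return 0.5 * np.sqrt((i0 - i2) ** 2 + (i1 - i3) ** 2)

# Synthetic demonstration: a 64x64 field with known fringe amplitude and random phase.
rng = np.random.default_rng(0)
amplitude = rng.uniform(0.0, 1.0, (64, 64))        # "sample" coherent reflectivity
phase = rng.uniform(0.0, 2 * np.pi, (64, 64))      # unknown interferometric phase
background = 10.0                                   # incoherent background light
frames = [background + amplitude * np.cos(phase + s)
          for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

recovered = enface_amplitude(*frames)
print("max reconstruction error:", float(np.max(np.abs(recovered - amplitude))))
```

The quadrature combination cancels both the incoherent background and the unknown phase, leaving only the coherently backscattered amplitude, which is what forms the en-face tomographic image.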
In more detail, interferometric images are created by a Michelson interferometer where the path length difference is varied by a fast electric component (usually a piezo mirror in the reference arm). These images acquired by a CCD camera are combined in post-processing (or online) by the phase shift interferometry method, where usually 2 or 4 images per modulation period are acquired, depending on the algorithm used. More recently, approaches that allow rapid single-shot imaging were developed to simultaneously capture multiple phase-shifted images required for reconstruction, using a single camera. Single-shot time-domain optical coherence microscopy (OCM) is limited only by the camera frame rate and available illumination. The "en face" tomographic images are thus produced by wide-field illumination, ensured by the Linnik configuration of the Michelson interferometer where a microscope objective is used in both arms. Furthermore, while the temporal coherence of the source must remain low as in classical OCT (i.e. a broad spectrum), the spatial coherence must also be low to avoid parasitic interference (i.e. a source with a large size). Selected applications Optical coherence tomography is an established medical imaging technique and is used across several medical specialties, including ophthalmology and cardiology, and is widely used in basic science research applications. Ophthalmology Ocular (or ophthalmic) OCT is used heavily by ophthalmologists and optometrists to obtain high-resolution images of the retina and anterior segment. Owing to its capability to show cross-sections of tissue layers with micrometer resolution, OCT provides a straightforward method of assessing cellular organization, photoreceptor integrity, and axonal thickness in glaucoma, macular degeneration, diabetic macular edema, multiple sclerosis, optic neuritis, and other eye diseases or systemic pathologies which have ocular signs. Additionally, ophthalmologists leverage OCT to assess the vascular health of the retina via a technique called OCT angiography (OCTA). In ophthalmological surgery, especially retinal surgery, an OCT system can be mounted on the operating microscope. Such a system is called intraoperative OCT (iOCT) and provides support during the surgery with clinical benefits. Polarization-sensitive OCT (PS-OCT) was recently applied in the human retina to determine optical polarization properties of vessel walls near the optic nerve. Retinal imaging with PS-OCT demonstrated how the thickness and birefringence of blood vessel wall tissue of healthy subjects could be quantified, in vivo. PS-OCT was subsequently applied to patients with diabetes and age-matched healthy subjects, and showed an almost 100% increase in vessel wall birefringence due to diabetes, without a significant change in vessel wall thickness. In patients with hypertension, however, the retinal vessel wall thickness increased by 60% while the vessel wall birefringence dropped by 20%, on average. The large differences measured in healthy subjects and patients suggest that retinal measurements with PS-OCT could be used as a screening tool for hypertension and diabetes. OCT can also be used to measure the thickness of the retinal nerve fiber layer (RNFL). Cardiology In cardiology, OCT is used to image coronary arteries to visualize vessel wall lumen morphology and microstructure at a resolution ~10 times higher than other existing modalities such as intravascular ultrasound and X-ray angiography (intracoronary optical coherence tomography).
For this type of application, fiber-optic catheters of 1 mm diameter or smaller are used to access the artery lumen through semi-invasive interventions such as percutaneous coronary interventions. The first demonstration of endoscopic OCT was reported in 1997 by researchers in Fujimoto's laboratory at the Massachusetts Institute of Technology. The first TD-OCT imaging catheter and system was commercialized by LightLab Imaging, Inc., a company based in Massachusetts, in 2006. The first FD-OCT imaging study was reported by Massachusetts General Hospital in 2008. Intracoronary FD-OCT was first introduced in the market in 2009 by LightLab Imaging, Inc., followed by Terumo Corporation in 2012 and by Gentuity LLC in 2020. The higher acquisition speed of FD-OCT enabled the widespread adoption of this imaging technology for coronary artery imaging. It is estimated that over 100,000 FD-OCT coronary imaging cases are performed yearly, and that the market is increasing by approximately 20% every year. Other developments of intracoronary OCT included the combination with other optical imaging modalities for multi-modality imaging. Intravascular OCT has been combined with near-infrared fluorescence molecular imaging (NIRF) to enhance its capability to detect molecular/functional and tissue morphological information simultaneously. In a similar way, combination with near-infrared spectroscopy (NIRS) has been implemented. Neurovascular Endoscopic/intravascular OCT has been further developed for use in neurovascular applications, including imaging for guiding endovascular treatment of ischemic stroke and brain aneurysms. Initial clinical investigations with existing coronary OCT catheters have been limited to proximal intracranial anatomy of patients with limited tortuosity, as coronary OCT technology was not designed for the tortuous cerebrovasculature encountered in the brain. However, despite these limitations, they showed the potential of OCT for the imaging of neurovascular disease. An intravascular OCT imaging catheter design tailored for use in tortuous neurovascular anatomy was proposed in 2020, and a first-in-human study using endovascular neuro OCT (nOCT) was reported in 2024. Oncology Endoscopic OCT has been applied to the detection and diagnosis of cancer and precancerous lesions, such as Barrett's esophagus and esophageal dysplasia. Dermatology The first use of OCT in dermatology dates back to 1997. Since then, OCT has been applied to the diagnosis of various skin lesions including carcinomas. However, the diagnosis of melanoma using conventional OCT is difficult, especially due to insufficient imaging resolution. Emerging high-resolution OCT techniques such as LC-OCT have the potential to improve the clinical diagnostic process, allowing for the early detection of malignant skin tumors – including melanoma – and a reduction in the number of surgical excisions of benign lesions. Other promising areas of application include the imaging of lesions where excisions are hazardous or impossible and the guidance of surgical interventions through identification of tumor margins. Dentistry Researchers at Tokyo Medical and Dental University were able to detect enamel white spot lesions around and beneath orthodontic brackets using swept-source OCT. Research applications Researchers have used OCT to produce detailed images of mouse brains, through a "window" made of zirconia that has been modified to be transparent and implanted in the skull.
Optical coherence tomography is also applicable and increasingly used in industrial applications, such as nondestructive testing (NDT), material thickness measurements (in particular of thin silicon wafers and compound semiconductor wafers), surface roughness characterization, surface and cross-section imaging, and volume loss measurements. OCT systems with feedback can be used to control manufacturing processes. With high-speed data acquisition and sub-micron resolution, OCT is adaptable to perform both inline and off-line inspection. Due to the high volume of pills produced, an interesting field of application is the pharmaceutical industry, where OCT is used to control the coating of tablets. Fiber-based OCT systems are particularly adaptable to industrial environments. These can access and scan interiors of hard-to-reach spaces, and are able to operate in hostile environments, whether radioactive, cryogenic, or very hot. Novel optical biomedical diagnostic and imaging technologies are currently being developed to solve problems in biology and medicine. As of 2014, attempts have been made to use optical coherence tomography to identify root canals in teeth, specifically canals in the maxillary molar; however, no difference from the current method of using a dental operating microscope has been shown. Research conducted in 2015 was successful in utilizing a smartphone as an OCT platform, although much work remains to be done before such a platform would be commercially viable. Photonic integrated circuits may be a promising option for miniaturized OCT. Similarly to electronic integrated circuits, silicon-based fabrication techniques can be used to produce miniaturized photonic systems. The first in vivo human retinal imaging with such a system has been reported recently. In 3D microfabrication, OCT enables non-destructive testing and real-time inspection during additive manufacturing. Its high-resolution imaging detects defects, characterizes material properties and ensures the integrity of internal geometries without damaging the part. See also Angle-resolved low-coherence interferometry Ballistic photon Confocal microscopy Dual-axis optical coherence tomography Interferometry Intracoronary optical coherence tomography Leica Microsystems Medical imaging Novacam Technologies Optical heterodyne detection Optical projection tomography Spectroscopic optical coherence tomography Terahertz tomography Tomography References Eye procedures Laser medicine Medical equipment Optical imaging
Optical coherence tomography
[ "Biology" ]
6,085
[ "Medical equipment", "Medical technology" ]
628,664
https://en.wikipedia.org/wiki/C8H10N4O2
The molecular formula C8H10N4O2 may refer to: Enprofylline, a xanthine derivative used in the treatment of asthma. Caffeine, the world's most widely consumed psychoactive drug, present in coffee, chocolate, black and green tea, energy drinks, and more.
C8H10N4O2
[ "Chemistry" ]
59
[ "Isomerism", "Set index articles on molecular formulas" ]
628,748
https://en.wikipedia.org/wiki/RenewableUK
RenewableUK, formerly known as the British Wind Energy Association (BWEA), is the trade association for the wind power, wave power and tidal power industries in the United Kingdom. History A number of universities active in wind energy in the 1970s met under the umbrella of the ITDG Wind Panel (Intermediate Technology Development Group). The BWEA was formed from the ITDG Wind Panel along with other interested parties and representatives from industry, to promote wind power in the United Kingdom. The inaugural meeting of the BWEA took place on 17 November 1978 at the Rutherford Laboratory with Peter Musgrove of Reading University as chairman. In 2004 the British Wind Energy Association expanded its remit to include wave and tidal energy, and to use the Association's experience to guide these technologies along the same path to commercialisation. In December 2009, to reflect this expansion in the industries it represented, members resolved to adopt the new name of RenewableUK. RenewableUK Cymru The members of RenewableUK are represented in Wales by RenewableUK Cymru, the Cardiff-based branch of the organisation. The remit in Wales expanded in February 2014 to include all renewable energy technologies and energy storage. Criticisms Advertising Standards Authority ruling In 2008 the then BWEA was found by the Advertising Standards Authority (ASA) to have overstated the carbon dioxide emissions displaced by wind generation. The BWEA and its members had claimed a value of 860 grams per kilowatt hour but the ASA found this was exaggerated by 100% and ordered the BWEA to reduce the figure to 430 grams. See also Renewable energy in the United Kingdom Energy use and conservation in the United Kingdom Wind power in Scotland European Wind Energy Association Notes External links RenewableUK website Conservation in the United Kingdom Business organisations based in the United Kingdom Wind power in the United Kingdom Renewable energy organizations
RenewableUK
[ "Engineering" ]
370
[ "Renewable energy organizations", "Energy organizations" ]
628,768
https://en.wikipedia.org/wiki/Peppered%20moth%20evolution
The evolution of the peppered moth is an evolutionary instance of directional colour change in the moth population as a consequence of air pollution during the Industrial Revolution. The frequency of dark-coloured moths increased at that time, an example of industrial melanism. Later, when pollution was reduced in response to clean air legislation, the light-coloured form again predominated. Industrial melanism in the peppered moth was an early test of Charles Darwin's natural selection in action, and it remains a classic example in the teaching of evolution. In 1978, Sewall Wright described it as "the clearest case in which a conspicuous evolutionary process has actually been observed." The dark-coloured or melanic form of the peppered moth (var. carbonaria) was rare, though a specimen had been collected by 1811. After field collection in 1848 from Manchester, an industrial city in England, the frequency of the variety was found to have increased drastically. By the end of the 19th century it almost completely outnumbered the original light-coloured type (var. typica), with a record of 98% in 1895. The evolutionary importance of the moth was only speculated upon during Darwin's lifetime. It was 14 years after Darwin's death, in 1896, that J. W. Tutt presented it as a case of natural selection. Because of this, the idea spread widely, and more people came to believe in Darwin's theory. Bernard Kettlewell was the first to investigate the evolutionary mechanism behind peppered moth adaptation, between 1953 and 1956. He found that a light-coloured body was an effective camouflage in a clean environment, such as in rural Dorset, while the dark colour was beneficial in a polluted environment like industrial Birmingham. This selective survival was due to birds, which easily caught dark moths on clean trees and white moths on trees darkened with soot. The story, supported by Kettlewell's experiment, became the canonical example of Darwinian evolution and evidence for natural selection used in standard textbooks. However, failure to replicate the experiment and Theodore David Sargent's criticism of Kettlewell's methods in the late 1960s led to general skepticism. When Judith Hooper's Of Moths and Men was published in 2002, Kettlewell's story was more sternly attacked, and accused of fraud. The criticism became a major argument for creationists. Michael Majerus was the principal defender. His seven-year experiment beginning in 2001, the most elaborate of its kind in population biology, the results of which were published posthumously in 2012, vindicated Kettlewell's work in great detail. This restored peppered moth evolution as "the most direct evidence", and "one of the clearest and most easily understood examples of Darwinian evolution in action". Origin and evolution Before the Industrial Revolution, the black form of the peppered moth was rare. The first black specimen (of unknown origin) was collected before 1811, and kept in the University of Oxford. The first live specimen was caught by R. S. Edleston in Manchester, England in 1848, but he reported this only 16 years later in 1864, in The Entomologist. Edleston notes that by 1864 it was the more common type of moth in his garden in Manchester. The light-bodied moths were able to blend in with the light-coloured lichens and tree bark, and the less common black moths were more likely to be eaten by birds. 
As a result of the common light-coloured lichens and English trees, therefore, the light-coloured moths were much more effective at hiding from predators, and the frequency of the dark allele was very low, at about 0.01%. During the early decades of the Industrial Revolution in England, the countryside between London and Manchester became blanketed with soot from the new coal-burning factories. Many of the light-bodied lichens died from sulphur dioxide emissions, and the trees became darkened. This led to an increase in bird predation on light-coloured moths, as they no longer blended in as well in their polluted ecosystem: indeed, their bodies now dramatically contrasted with the colour of the bark. Dark-coloured moths, on the other hand, were camouflaged very well by the blackened trees. The population of dark-coloured moths rapidly increased. By the mid-19th century, the number of dark-coloured moths had risen noticeably, and by 1895, the percentage of dark-coloured moths in Manchester was reported at 98%, a dramatic change (of almost 100%) from the original frequency. This effect of industrialization on body colour led to the coining of the term "industrial melanism". The implication that industrial melanism could be evidence supporting Charles Darwin's theory of natural selection was noticed during his lifetime. Albert Brydges Farn (1841–1921), a British entomologist, wrote to Darwin on 18 November 1878 to discuss his observation of colour variations in the Annulet moth (then Gnophos obscurata, now Charissa obscurata). He noted the existence of dark moths in peat in the New Forest, brown moths on clay and red soil in Herefordshire, and white moths on chalk cliffs in Lewes, and suggested that this variation was an example of "survival of the fittest". He told Darwin that he had found dark moths on a chalk slope where the foliage had been blackened by smoke from lime kilns, and he had also heard that white moths had become less common at Lewes after lime kilns had been in operation for a few years. Darwin does not seem to have responded to this information, possibly because he thought natural selection would be a much slower process. A scientific explanation of moth coloration was only published in 1896, 14 years after Darwin's death, when J. W. Tutt explicitly linked peppered moth melanism to natural selection. Rise and fall of phenotype frequency Melanism has been observed in both European and North American peppered moth populations. Information about the rise in frequency is scarce. Much more is known about the subsequent fall in phenotype frequency, as it has been measured by lepidopterists using moth traps. Steward compiled data for the first recordings of the peppered moth by locality, and deduced that the carbonaria morph was the result of a single mutation that subsequently spread. By 1895, it had reached a reported frequency of 98% in Manchester. From around 1962 to the present, the phenotype frequency of carbonaria has steadily fallen in line with cleaner air around industrial cities. Its decline has been measured more accurately than its rise, through more rigorous scientific studies. Notably, Kettlewell conducted a national survey in 1956, Bruce Grant conducted a similar one in early 1996, and L.M. Cook in 2003. Similar results were found in North America. Melanic forms have not been found in Japan. It is believed that this is because peppered moths in Japan do not inhabit industrialised regions.
Genetics Tutt was the first to propose the "differential bird predation hypothesis" in 1896, as a mechanism of natural selection. The melanic morphs were better camouflaged against the bark of trees without foliose lichen, whereas the typica morphs were better camouflaged against trees with lichens. As a result, birds would find and eat those morphs that were not camouflaged with increased frequency. In 1924, J.B.S. Haldane calculated, using a simple general selection model, the selective advantage necessary for the recorded natural evolution of peppered moths, based on the assumption that in 1848 the frequency of dark-coloured moths was 2%, and by 1895 it was 95%. The dark-coloured, or melanic, form would have had to be 50% more fit than the typical, light-coloured form. Even taking into consideration possible errors in the model, this reasonably excluded the stochastic process of genetic drift, because the changes were too fast. Haldane's statistical analysis of selection for the melanic variant in peppered moths became a well known part of his effort to demonstrate that mathematical models that combined natural selection with Mendelian genetics could explain evolution – an effort that played a key role in the foundation of the discipline of population genetics, and the beginnings of the modern synthesis of evolutionary theory with genetics. The peppered moth Biston betularia is also a model of parallel evolution in the incidence of melanism in the British form (f. carbonaria) and the American form (f. swettaria) as they are indistinguishable in appearance. Genetic analysis indicates that both phenotypes are inherited as autosomal dominants. Cross hybridizations indicate that the phenotypes are produced by alleles at a single locus. The gene for carbonaria in B. betularia was thought to be in a region of chromosome 17. It was later concluded that the gene could not be in that region, because none of the genes in the chromosome coded for either wing pattern or melanisation. The region that was used to find it was the first intron of the orthologue of the cortex gene in Drosophila. Through elimination of candidates within the region based on rarity, a 21,925 base pair insert remained. The insert, labelled carb-TE, is a class II transposable element that has an approximately 9-kb non-repetitive sequence tandemly repeated two and one third times. There are 6 base pairs of inverted repeats and duplicated 4 base pairs at the target site not present in typica moths. Carb-TE has higher expression during the stage of rapid wing disc morphogenesis. The mechanism of how the gene increases expression, and whether it is the only gene involved, is still not known. Alternative hypotheses Several alternative hypotheses to natural selection as the driving force of evolution were proposed during the 1920s and 1930s. Random mutation, migration, and genetic drift were also seen as major forces of evolution. P. A. Riley proposed an additional selective factor, where heavy metal chelation by melanin would supposedly protect peppered moths against the toxic effects of heavy metals associated with industrialisation. This selective advantage would supplement the major selective mechanism of differential bird predation. Phenotypic induction In 1920, John William Heslop-Harrison rejected Tutt's differential bird predation hypothesis, on the basis that he did not believe that birds ate moths. Instead he proposed that pollutants could cause changes to the soma and germ plasm of the organism. In 1925, K. 
Hasebroek made an early attempt to prove this hypothesis, exposing pupae to pollutant gases, namely hydrogen sulfide (H2S), ammonia (NH3), and "pyredin". He used eight species in his studies, four of which were species of butterfly that did not exhibit melanism. In 1926 and 1928, Heslop-Harrison suggested that the increase of melanic moths in industrialised regions was due to "mutation pressure", not to selection by predators which he regarded as negligible. Salts of lead and manganese were present in the airborne pollutant particles, and he suggested that these caused the mutation of genes for melanin production but of no others. He used Selenia bilunaria and Tephrosia bistortata as material. The larvae were fed with leaves that had incorporated these salts: melanics subsequently appeared. A similar experiment in 1932 by McKenney Hughes failed to replicate these results; the statistician and geneticist Ronald Fisher showed that Heslop-Harrison's controls were inadequate, and that Hughes's findings made the 6% mutation rate required by Heslop-Harrison "improbable". Kettlewell's experiment The first important experiments on the peppered moth were carried out by Bernard Kettlewell at Oxford University, under the supervision of E. B. Ford, who helped him gain a grant from the Nuffield Foundation to perform the experiments. In 1953, Kettlewell started a preliminary experiment in which moths were released into a large (18m × 6m) aviary, where they were fed on by great tits (Parus major). His main experiment, at Christopher Cadbury Wetland Reserve in Birmingham, England, involved marking, releasing, and recapturing marked moths. He found that in this polluted woodland typica moths were preferentially preyed upon. He thus showed that the melanic phenotype was important to the survival of peppered moths in such a habitat. Kettlewell repeated the experiment in 1955 in unpolluted woodlands in Dorset, and again in the polluted woods in Birmingham. In 1956 he repeated the experiments and found similar results; in Birmingham, birds ate most of the white moths (75%), whereas in Dorset, most of the dark moths (86%) were eaten. Criticisms Theodore David Sargent performed experiments between 1965 and 1969, from which he concluded that it was not possible to reproduce Kettlewell's results, and said that birds showed no preference for moths on either black or white tree trunks. He suggested that Kettlewell had trained the birds to pick moths on tree trunks to obtain the desired results. Two chapters in Michael Majerus's 1998 book Melanism: Evolution in Action critiqued the research in Kettlewell's The Evolution of Melanism, discussed studies which raised questions about Kettlewell's original experimental methods, and called for further research. Reviewing the book, Jerry Coyne noted these points, and concluded that "for the time being we must discard Biston as a well-understood example of natural selection in action, although it is clearly a case of evolution. There are many studies more appropriate for use in the classroom." Judith Hooper's book Of Moths and Men (2002) severely criticised Kettlewell's experiment. Hooper argued that Kettlewell's field notes could not be found and suggested that his experiment was fraudulent, on the basis of Sargent's criticisms alleging that the photographs of the moths were taken of dead moths placed on a log. She said that E. B. Ford was a "Darwinian zealot", and claimed that he exploited the scientifically naive Kettlewell to obtain the desired experimental results. 
The book's reception led to demands that the peppered moth evolution story be deleted from textbooks. Scientists have examined the allegations made by Hooper, and found them to be without merit. Phillip E. Johnson, a co-founder of the creationist intelligent design movement, said that the moths "do not sit on tree trunks", that "moths had to be glued to the trunks" for pictures, and that the experiments were "fraudulent" and a "scam." The intelligent design advocate Jonathan Wells wrote an essay on the subject, a shortened version of which appeared in the 24 May 1999 issue of The Scientist, claiming that "The fact that peppered moths do not normally rest on tree trunks invalidates Kettlewell's experiments". Wells further wrote in his 2000 book Icons of Evolution that "What the textbooks don't explain, however, is that biologists have known since the 1980s that the classical story has some serious flaws. The most serious is that peppered moths in the wild don't even rest on tree trunks. The textbook photographs, it turns out, have been staged." However, peppered moths do rest on tree trunks on occasion, and Nick Matzke states that there is little difference between the 'staged' photos and 'unstaged' ones. Majerus's experiment From 2001 to 2007, Majerus carried out experiments in Cambridge to resolve the various criticisms of Kettlewell's experiment. During his experiment, he noted the natural resting positions of peppered moths. Of the 135 moths examined, over half were on tree branches, mostly on the lower half of the branch; 37% were on tree trunks, mostly on the north side; and only 12.6% were resting on or under twigs. Following correspondence with Hooper, he added an experiment to find if bats, not birds, could be the main predators. He observed a number of species of bird actually preying on the moths, and found that differential bird predation was a major factor responsible for the decline in carbonaria frequency compared to typica. He described his results as a complete vindication of the natural selection theory of peppered moth evolution, and said "If the rise and fall of the peppered moth is one of the most visually impacting and easily understood examples of Darwinian evolution in action, it should be taught. It provides after all the proof of evolution." Majerus died before he could complete the writing up of his experiments, so the work was carried on by Cook, Grant, Saccheri, and James Mallet, and published on 8 February 2012 as "Selective bird predation on the peppered moth: the last experiment of Michael Majerus." The experiment became the largest ever in the study of industrial melanism, involving 4,864 individuals in a six-year investigation, and it confirmed that melanism in moths is a genuine example of natural selection involving camouflage and predation. Their concluding remark runs: "These data provide the most direct evidence yet to implicate camouflage and bird predation as the overriding explanation for the rise and fall of melanism in moths." Coyne said he was "delighted to agree with this conclusion [of Majerus's experiment], which answers my previous criticisms about the Biston story." See also Polymorphism Scottish red deer Notes References Further reading External links Bruce Grant has written several papers on melanism in the peppered moth which are listed on his home page. Online lecture: "The rise and fall of the melanic Peppered Moth" presented by Laurence Cook. The Peppered Moth: Decline of a Darwinian Disciple. 
This is the transcript of Michael Majerus' lecture delivered to the British Humanist Association on Darwin Day 2004. The Peppered Moth: The Proof of Darwinian Evolution. This is the transcript of Majerus' lecture given at the European Society for Evolutionary Biology meeting on 23 August 2007. The accompanying PowerPoint presentation is also available. On 19 June 2009, Telegraph.co.uk published an article on this evolutionary phenomenon and implored UK readers to visit the Moths Count website and record their observations of local moths, in an effort to help increase the available data for researchers. Biology experiments Evolution of insects Evolution Selection
Peppered moth evolution
[ "Biology" ]
3,796
[ "Evolutionary processes", "Selection" ]
628,811
https://en.wikipedia.org/wiki/Chlorella
Chlorella is a genus of about thirteen species of single-celled or colonial green algae of the division Chlorophyta. The cells are spherical in shape, about 2 to 10 μm in diameter, and are without flagella. Their chloroplasts contain the green photosynthetic pigments chlorophyll-a and -b. In ideal conditions cells of Chlorella multiply rapidly, requiring only carbon dioxide, water, sunlight, and a small amount of minerals to reproduce. The name Chlorella is taken from the Greek χλώρος, chlōros/ khlōros, meaning green, and the Latin diminutive suffix -ella, meaning small. German biochemist and cell physiologist Otto Heinrich Warburg, awarded with the Nobel Prize in Physiology or Medicine in 1931 for his research on cell respiration, also studied photosynthesis in Chlorella. In 1961, Melvin Calvin of the University of California received the Nobel Prize in Chemistry for his research on the pathways of carbon dioxide assimilation in plants using Chlorella. Chlorella has been considered as a source of food and energy because its photosynthetic efficiency can reach 8%, which exceeds that of other highly efficient crops such as sugar cane. Description Chlorella consists of small, rounded cells which are spherical, subspherical, or ellipsoidal, and may be surrounded by a layer of mucilage. The cells contain a single chloroplast which is parietal (lying against the inner side of the cell membrane), with a single pyrenoid that is surrounded by grains of starch. Reproduction occurs by the formation of autospores; zoospores or gametes are not known to be produced in Chlorella. Autospores are released by a tear in the cell wall. The daughter cell may remain attached to the parent cell wall, thereby forming colonies of cells. Taxonomy Chlorella was first described by Martinus Beijerinck in 1890. Since then, over a hundred taxa have been described within the genus. However, biochemical and genomic data has revealed that many of these species were not closely related to each other, even being placed in a separate class Chlorophyceae. In other words, the "green ball" form of Chlorella appears to be a product of convergent evolution and not a natural taxon. Identifying Chlorella-like algae based on morphological features alone is generally not possible. Some strains of "Chlorella" used for food are incorrectly identified, or correspond to genera that were classified out of true Chlorella. For example, Heterochlorella luteoviridis is typically known as Chlorella luteoviridis which is no longer considered a valid name. As a food source When first harvested, Chlorella was suggested as an inexpensive protein supplement to the human diet. According to the American Cancer Society, "available scientific studies do not support its effectiveness for preventing or treating cancer or any other disease in humans". Under certain growing conditions, Chlorella yields oils that are high in polyunsaturated fats—Chlorella minutissima has yielded eicosapentaenoic acid at 39.9% of total lipids. History Following global fears of an uncontrollable human population boom during the late 1940s and the early 1950s, Chlorella was seen as a new and promising primary food source and as a possible solution to the then-current world hunger crisis. Many people during this time thought hunger would be an overwhelming problem and saw Chlorella as a way to end this crisis by providing large amounts of high-quality food for a relatively low cost. 
Many institutions began to research the algae, including the Carnegie Institution, the Rockefeller Foundation, the NIH, UC Berkeley, the Atomic Energy Commission, and Stanford University. Following World War II, many Europeans were starving, and many Malthusians attributed this not only to the war, but also to the inability of the world to produce enough food to support the increasing population. According to a 1946 FAO report, the world would need to produce 25 to 35% more food in 1960 than in 1939 to keep up with the increasing population, while health improvements would require a 90 to 100% increase. Because meat was costly and energy-intensive to produce, protein shortages were also an issue. Increasing cultivated area alone would go only so far in providing adequate nutrition to the population. The USDA calculated that, to feed the U.S. population by 1975, it would have to add 200 million acres (800,000 km2) of land, but only 45 million were available. One way to combat national food shortages was to increase the land available for farmers, yet the American frontier and farm land had long since been extinguished in trade for expansion and urban life. Hopes rested solely on new agricultural techniques and technologies. Because of these circumstances, an alternative solution was needed. To cope with the upcoming postwar population boom in the United States and elsewhere, researchers decided to tap into the unexploited sea resources. Initial testing by the Stanford Research Institute showed Chlorella (when growing in warm, sunny, shallow conditions) could convert 20% of solar energy into a plant that, when dried, contains 50% protein. In addition, Chlorella contains fat and vitamins. The plant's photosynthetic efficiency allows it to yield more protein per unit area than any plant—one scientist predicted 10,000 tons of protein a year could be produced with just 20 workers staffing a 1000-acre (4-km2) Chlorella farm. The pilot research performed at Stanford and elsewhere led to immense press from journalists and newspapers, yet did not lead to large-scale algae production. Chlorella seemed like a viable option because of the technological advances in agriculture at the time and the widespread acclaim it got from experts and scientists who studied it. Algae researchers had even hoped to add a neutralized Chlorella powder to conventional food products, as a way to fortify them with vitamins and minerals. When the preliminary laboratory results were published, the scientific community at first backed the possibilities of Chlorella. Science News Letter praised the optimistic results in an article entitled "Algae to Feed the Starving". John Burlew, the editor of the Carnegie Institution of Washington book Algal Culture-from Laboratory to Pilot Plant, stated, "the algae culture may fill a very real need", which Science News Letter turned into "future populations of the world will be kept from starving by the production of improved or educated algae related to the green scum on ponds". The cover of the magazine also featured Arthur D. Little's Cambridge laboratory, which was a supposed future food factory. A few years later, the magazine published an article entitled "Tomorrow's Dinner", which stated, "There is no doubt in the mind of scientists that the farms of the future will actually be factories." Science Digest also reported, "common pond scum would soon become the world's most important agricultural crop." 
However, in the decades since those claims were made, algae have not been cultivated on that large of a scale. Current status Since the growing world food problem of the 1940s was solved by better crop efficiency and other advances in traditional agriculture, Chlorella has not seen the kind of public and scientific interest that it had in the 1940s. Chlorella has only a niche market for companies promoting it as a dietary supplement. Production difficulties The experimental research was carried out in laboratories, rather than in the field, and scientists discovered that Chlorella would be much more difficult to produce than previously thought. To be practical, the algae grown would have to be placed either in artificial light or in shade to produce at its maximum photosynthetic efficiency. In addition, for the Chlorella to be as productive as the world would require, it would have to be grown in carbonated water, which would have added millions to the production cost. A sophisticated process, and additional cost, was required to harvest the crop and for Chlorella to be a viable food source, its cell walls would have to be pulverized. The plant could reach its nutritional potential only in highly modified artificial situations. Another problem was developing sufficiently palatable food products from Chlorella. Although the production of Chlorella looked promising and involved creative technology, it has not to date been cultivated on the scale some had predicted. It has not been sold on the scale of Spirulina, soybean products, or whole grains. Costs have remained high, and Chlorella has for the most part been sold as a health food, for cosmetics, or as animal feed. After a decade of experimentation, studies showed that following exposure to sunlight, Chlorella captured just 2.5% of the solar energy, not much better than conventional crops. Chlorella, too, was found by scientists in the 1960s to be impossible for humans and other animals to digest in its natural state due to the tough cell walls encapsulating the nutrients, which presented further problems for its use in American food production. Use in carbon dioxide reduction and oxygen production In 1965, the Russian CELSS experiment BIOS-3 determined that 8 m2 of exposed Chlorella could remove carbon dioxide and replace oxygen within the sealed environment for a single human. The algae were grown in vats underneath artificial light. Dietary supplement Chlorella is consumed as a dietary supplement. Some manufacturers of Chlorella products have falsely asserted that it has health benefits, including an ability to treat cancer, for which the American Cancer Society stated "available scientific studies do not support its effectiveness for preventing or treating cancer or any other disease in humans". The United States Food and Drug Administration has issued warning letters to supplement companies for falsely advertising health benefits of consuming chlorella products, such as one company in October 2020. There is some support from animal studies of chlorella's ability to detoxify insecticides. Chlorella protothecoides accelerated the detoxification of rats poisoned with chlordecone, a persistent insecticide, decreasing the half-life of the toxin from 40 to 19 days. The ingested algae passed through the gastrointestinal tract unharmed, interrupted the enteric recirculation of the persistent insecticide, and subsequently eliminated the bound chlordecone with the feces. 
Health concerns A 2002 study showed that Chlorella cell walls contain lipopolysaccharides, endotoxins found in Gram-negative bacteria that affect the immune system and may cause inflammation. However, more recent studies have found that the lipopolysaccharides in organisms other than Gram-negative bacteria, for example in cyanobacteria, are considerably different from the lipopolysaccharides in Gram-negative bacteria. See also Calvin cycle List of ineffective cancer treatments Quorn: food made from mycoprotein Soyuz 28, a 1978 space mission which included experiments on Chlorella Spirulina (dietary supplement) Chlorellosis, a disease caused by the infection of Chlorella. References Trebouxiophyceae genera Trebouxiophyceae Edible algae Dietary supplements Algaculture Alternative cancer treatments
Chlorella
[ "Biology" ]
2,311
[ "Edible algae", "Algaculture", "Algae" ]
628,895
https://en.wikipedia.org/wiki/Toms%20River
The Toms River is a freshwater river and estuary in Ocean County, New Jersey, United States. The river rises in the Pine Barrens of northern Ocean County, then flows southeast and east, where it is fed by several tributaries, and flows in a meandering course through wetlands. The river empties into Barnegat Bay—an inlet of the Atlantic Ocean—and the Intracoastal Waterway at Mile 14.6. Geography Much of the headwaters of the Toms River are in the New Jersey Pine Barrens. The lower of the river is a broad tidal estuary that is navigable within the community of Toms River. The river empties into the western side of Barnegat Bay, with mid-channel depths of . At , the Toms River subwatershed is the largest drainage area of any river in the Barnegat Bay watershed. It includes 11 municipalities in Ocean County and portions of southwestern Monmouth County. The lowest sections of the river provide convenient locations for marinas and yacht clubs, and bases for fishing and crabbing. Canoeing and kayaking are also popular on the river, which can be paddled for from Don Connor Boulevard, below County Route 528 to Barnegat Bay. History The Toms River has appeared on maps of the region since the New Netherland colony, although it has not always been named. The earliest-known written reference to it is from 1687. Into the late 1700s, it was most often referred to as Goose Creek or Goose Neck Creek. Post-colonial cartographers switched between Goose Creek—as seen on Thomas Jefferys' 1776 map and Aaron Arrowsmith's 1804 map—and Toms Creek, as in Mathew Carey's 1795 State Map of New Jersey. The cartographers Henry Charles Carey and Isaac Lea attempted to address any confusion by choosing "Goose or Toms Cr." in their 1814 map. In 1822, Carey and Lea co-published another map that entirely removed the name Goose Creek. Subsequent maps would use the name Toms River. Etymology The origin of the name Toms River is unknown but there are several theories. According to historical author Edwin Salter, these are: A noted Native American called either "Indian Tom" or "Thomas Pumha", who lived on the river's north bank, in present-day Island Heights, assisted during the American Revolutionary War. Salter said: Local farmer and ferryman Thomas Luker, who came to the area in the late 1600s. Luker married the daughter of a local Lenape chief in 1695 and they established a homestead on the north bend, near the site of the downtown Toms River Post Office. Captain William Tom, an English civil officer for West Jersey from 1664 to 1674, who, during an exploratory expedition, visited the stream and the surrounding region. The river was named in his honor "because he first brought it to the notice of the whites" and persuaded them to settle there. According to Salter, the evidence to support this origin is inconclusive, but this was his preferred origin. In 1992, during the town's 225th anniversary, the township government and local historians officially recognized Thomas Luker as the Tom in the river's name. During the celebration, a footbridge spanning the river in downtown Huddy Park was named in his honor. Pollution incidents In the mid-to-late 20th century, the Toms River and its surrounding township experienced several contamination incidents that led to the addition of at least two major areas to the United States Environmental Protection Agency's (EPA) list of Superfund sites.
Ciba-Geigy Beginning in the early 1900s, Ciba-Geigy Chemical Corporation established a site in Dover Township—now Toms River Township—where it manufactured pigments and dyes. The manufacturing process created a large amount of sludge and toxic waste, which was initially disposed of in unlined pits located on-site. In the 1960s, the company built a -long pipeline for disposing of nearly two billion gallons of wastewater into the Atlantic Ocean. In 1980, the New Jersey Department of Environmental Protection (NJDEP) issued an order requiring the removal of approximately 15,000 drums from an on-site landfill dump and the initiation of groundwater monitoring throughout the property, which included portions of the Pine Barrens and coastal wetlands. That same year, the United States Environmental Protection Agency (EPA) completed a preliminary assessment under the Potential Hazardous Waste Site Program. In 1983, the EPA placed the site on the Superfund National Priorities List. The EPA has been cleaning up the site since the early 1980s. In September 2000, the agency ordered the excavation and bioremediation of about of contaminated soil. Cleanup of the on-site source areas began in October 2003, with off-site processing and treatment finishing in 2010. According to the department's website, the following milestones have been met so far: The EPA ordered the site to undergo five reviews, each to be performed every five years. The first sitewide review was conducted in September 2003, and the final review is estimated to be completed in June 2025. Reich Farm In August 1971, the Reich family leased a large portion of their farm off Route 9 to independent waste hauler Nicholas Fernicola. The lease was to allow Fernicola to temporarily store used drums on the property, located approximately from an intermittent stream draining into the Toms River. In December 1971, the Reichs discovered nearly 4,500 waste-filled drums from Union Carbide's Bound Brook, New Jersey, plant, the source of which they identified by the labels on many of the drums. The labels also indicated the contents, which included "blend of resin and oil", "tar pitch", and "lab waste solvent". Evidence of the waste being dumped was found on the property, in the form of trenches that had not existed before the land was rented. The full drums were leaching their contents into the soil and the nearby water table. The Reichs sued Fernicola and Union Carbide, and in 1972, the court ordered an end to the dumping and the removal of the drums and contaminated soil. In early 1974, residents commented on an unusual smell and taste of their well water. The NJDEP inspected the site and found the groundwater was heavily contaminated with organic compounds, such as phenol and toluene. The Reich Farm site was included on the EPA's National Priorities List (NPL) in September 1983. After over two decades of remediation and testing, it was removed from the Superfund list in June 2021. The site was ordered to undergo five reviews to be conducted every five years by the EPA. The first sitewide review was performed in September 2003, and the final review was estimated to be completed between September and November 2023. Cancer cluster The Ciba-Geigy and Reich Farm sites resulted in the contamination of an overlapping area of groundwater during a coinciding period. In September 1997, the New Jersey Department of Health (NJDOH), at the request of the Agency for Toxic Substances and Disease Registry, evaluated childhood cancer incidences in Toms River.
The NJDOH reviewed data from the State Cancer Registry (SCR) from 1979 to 1991. According to the summary report the NJDOH released: "The results of the 1995 NJDHSS cancer evaluation indicated that Ocean County as a whole and the Toms River section of Dover ... had an excess of childhood brain and central nervous system (CNS) cancer relative to the entire State". The NJDOH reviewed the entire county but found Toms River, which was then known as Dover Township, was "the only statistically significantly elevated town in the county". As a result of the findings, the NJDOH ordered a case–control study of the area to examine and identify risks factors. The results of this study were made available in January 2003; according to the primary hypothesis, the cancer rates were related to the "environmental exposure pathways" reported over the previous 30 years. The study reported: "No consistent patterns of association were seen between the environmental factors of primary interest and any of the cancer groupings during the postnatal exposure period" and "No consistent patterns of association were seen between the other environmental factors and any of the cancer groupings evaluated". The report acknowledged the findings could be easily biased due to the small sample size, and recommended the continuation of clean-up efforts at the Reich Farm and Ciba-Geigy sites. It was also recommended an additional five-year incidence evaluation be made once the data from 1996 to 2000 was available from the SCR. A 2014 Pulitzer Prize-winning book, Toms River: A Story of Science and Salvation, examined the cancer cluster in detail. Recent public-private coalitions to restore the river and to preserve the wetland areas near its source in the Pinelands, as well as the EPA stage assessments, have resulted in an increase in water quality. Flood events Because the Toms River is tidal with a direct feed into Barnegat Bay and a substantial subwatershed area, it is prone to flooding, particularly at the mouth. The United States Geological Survey (USGS) tracks and reports on significant flood events, and the National Oceanic and Atmospheric Administration (NOAA) tracks daily tide levels. The official USGS flood stage for the river is considered water levels at and above , and major flood events occur at and above . 2010 Nor'easter From March 12–15, 2010, a Nor'easter hit the New Jersey coastline. The Toms River USGS station (01408500) recorded its highest water level to that point since 1929, before which records were not tracked, and a record discharge of per second on March 15; the predicted discharge prior to the storm was only per second. Hurricane Irene On August 28, 2011, Hurricane Irene hit the eastern coast of the US for a second time, making landfall near the Little Egg Inlet, about south of the Toms River's mouth. Irene was the first hurricane to make landfall in New Jersey since 1903. The storm surge that followed, combined with the rainfall from the hurricane and the wet conditions in the weeks prior, led to record USGS gage readings for over 40% of all stations with at least 20 years of data. The highest-recorded flood crest of the Toms River was recorded on August 29, 2011, at . The previous record was , set on September 23, 1938, after the 1938 New England hurricane. The river also saw significant water levels in November 2018 (), October 2005 (), and May 1984 (). 
Tributaries Davenport Branch Ridgeway Branch Union Branch Wrangle Brook See also List of New Jersey rivers List of cancer clusters Toms Canyon impact crater References External links U.S. Geological Survey: NJ stream gaging stations Toms River Township web site Toms River Online TomsRiver.org - Community News, Business Directory and Events Portal The Toms River Times A History of Monmouth and Ocean Counties... Thomas Jefferys' 1776 map Aaron Arrowsmith's 1804 Map of New Jersey Mathew Carey's 1795 Map Carey's 1814 State Map of New Jersey Rivers of Ocean County, New Jersey Rivers in the Pine Barrens (New Jersey) Rivers of New Jersey Tributaries of Barnegat Bay Watersheds of the United States American Revolutionary War sites Superfund sites
Toms River
[ "Technology" ]
2,333
[ "Hazardous waste", "Superfund sites" ]
628,931
https://en.wikipedia.org/wiki/Abraham%20bar%20Hiyya
Abraham bar Ḥiyya ha-Nasi (; – 1136 or 1145), also known as Abraham Savasorda, Abraham Albargeloni, and Abraham Judaeus, was a Catalan Jewish mathematician, astronomer and philosopher who resided in Barcelona, then in the County of Barcelona. Bar Ḥiyya was active in translating the works of Islamic science into Latin and was likely the earliest to introduce algebra from the Muslim world into Christian Europe. He also wrote several original works on mathematics, astronomy, Jewish philosophy, chronology, and surveying. His most influential work is his Ḥibbur ha-Meshiḥah ve-ha-Tishboret, translated in 1145 into Latin as Liber embadorum. A Hebrew treatise on practical geometry and algebra, the book contains the first known complete solution of the quadratic equation , and influenced the work of Fibonacci. Biography Abraham bar Ḥiyya was the great-grandson of Hezekiah ben David, the last Gaon of the Talmudic academies in Babylonia. Bar Ḥiyya occupied a high position in the royal court, serving as minister of police, and bore the title of governor (). Scholars assume that Bar Hiyya would have obtained this title in the court of Banu Hud of Zaragoza-Lleida; there is even a record of a Jewish Savasorda there in the beginning of the 12th century. In his travelogues, Benjamin of Tudela mentions bar Ḥiyya living in Barcelona in the 1160s. According to Adolph Drechsler, bar Ḥiyya was a pupil of Moshe ha-Darshan and teacher of Abraham ibn Ezra. He was held in high consideration by the ruler he served on account of his astronomical knowledge and had disputes with learned Catholic priests, to whom he demonstrated the accuracy of the Jewish calendar. Abraham bar Hiyya is said to have been a great astronomer and wrote some works on astronomy and geography. One talks about the form of the earth, the elements, and the structure of the spheres. Other works included papers on astrology, trigonometry, and music. Some scholars think that the Magister Abraham who dictated De Astrolabio (probably at Toulouse) to Rudolf of Bruges (a work that the latter finished in 1143) was identical with Abraham bar Ḥiyya. Although the title "Sephardi" is always appended to his name, Barcelona was at the time no longer under Muslim rule, and therefore not part of Sepharad. Abraham Albargeloni (i.e., from Barcelona) thus belonged to the community of the Jews of Catalonia. Catalonia joined Provence in 1112 and Aragon in 1137, and thus the County of Barcelona became the capital of a new Catalan-Aragonese confederation called the Crown of Aragon. The kings of Aragon extended their domains to Occitania in what is now southern France. Abraham Albargeloni spent some time in Narbonne, where he composed some works for the Hachmei Provence, in which he complained about the Provençal community's ignorance of mathematics. Work Abraham bar Ḥiyya was one of the most important figures in the scientific movement which made the Jews of Provence, the Jews of Catalonia, Spain, and Italy the intermediaries between Arabic science and the Christian world, in both his original works and his translations. Bar Ḥiyya's Yesode ha-Tebunah u-Migdal ha-Emunah (), usually referred to as the Encyclopedia, was the first European attempt to synthesize Greek and Arabic mathematics. Likely written in the first quarter of the 12th century, the book is said to elaborate on the interdependence of number theory, mathematical operations, business arithmetic, geometry, optics, and music. 
The book draws from a number of Greek sources then available in Arabic, as well as the works of al-Khwarizmi and Al-Karaji. Only a few short fragments of this work have been preserved. Bar Ḥiyya's most notable work is his Ḥibbur ha-Meshiḥah ve-ha-Tishboret (), probably intended to be a part of the preceding work. This is the celebrated geometry translated in 1145 by Plato of Tivoli, under the title Liber embadorum a Savasordo in hebraico compositus. Fibonacci made the Latin translation of the Ḥibbūr the basis of his Practica Geometriae, following it even to the sameness of some of the examples. Bar Ḥiyya also wrote two religious works in the field of Judaism and the Tanach: Hegyon ha-Nefesh ("Contemplation of the Soul") on repentance, and Megillat ha-Megalleh ("Scroll of the Revealer") on the redemption of the Jewish people. The latter was partly translated into Latin in the 14th century under the title Liber de redemptione Israhel. Even these religious works contain scientific and philosophical speculation. His Megillat ha-Megalleh was also astrological in nature, and drew a horoscope of favourable and unfavourable days. Bar Ḥiyya forecasted that the Messiah would appear in AM 5118 (1358 CE). Abraham bar Ḥiyya wrote all his works in Hebrew, not in Judaeo-Arabic of the earlier Jewish scientific literature, which made him a pioneer in the use of the Hebrew language for scientific purposes. Other notable works "Form of the Earth" (), an astronomical work on the formation of the heavens and the earth, which was to have been followed by a second part on the course of the stars. A portion was translated into Latin by Sebastian Münster and Erasmus Oswald Schreckenfuchs. It appears also that complete translations into Latin and French were made. The Bodleian Library contains a copy with a commentary, apparently by Ḥayyim Lisker. "Calculation of the Courses of the Stars" (), the sequel to the preceding work, which is found sometimes in manuscripts with the notes of Abraham ibn Ezra. "Tables" or "Tables of the Prince" (, Luḥot ha-Nasi), astronomical tables, called also the "Tables of Al-Battani" and the "Jerusalem Tables". Several manuscripts of this work contain notes by Abraham ibn Ezra. "Book of Intercalation" (). This work was published in 1851, in London, by Filipowski. It is the oldest-known Hebrew work treating of the calculation of the Hebrew calendar. "Meditation of the Soul" (), an ethical work upon a rationalistic religious basis. It was published in 1860 by Freimann, with a biography of the author (by the editor), a list of his works, and learned introduction by Rapoport. "Scroll of the Revealer" (), a controversial work in defense of the theory that the Messiah would appear in the year AM 5118 (AD 1358). Its fifth and last chapter, the largest part of the work, may be read as an independent treatise providing an astrological explanation of Jewish and universal history based on an analysis of the periodical conjunctions of Saturn and Jupiter. An apologetic epistle addressed to Judah ben Barzilai al-Barzeloni. Translations Abraham bar Ḥiyya co-operated with a number of scholars in the translation of scientific works from Arabic into Latin, most notably Plato of Tivoli with their translation of Ptolemy's Tetrabiblos in 1138 at Barcelona. There remains doubt as to the particulars: a number of Jewish translators named Abraham existed during the 12th century, and it is not always possible to identify the one in question. 
Known translations of bar Ḥiyya include: De Horarum Electionibus, the well-known treatise of Ali ben Aḥmad al-Imrani. Capitula Centiloquium, astrological aphorisms. A commentary of Aḥmad ibn Yusuf on the Centiloquium, attributed to Ptolemy. De Astrolabio of Rudolph de Bruges. Liber Augmenti et Diminutionis, a treatise on mathematics. In the preface to Ẓurat ha-Areẓ, bar Ḥiyya modestly states that, because none of the scientific works such as those which exist in Arabic were accessible to his brethren in France, he felt called upon to compose books which, though containing no research of his own, would help to popularize knowledge among Hebrew readers. His Hebrew terminology, therefore, occasionally lacks the clearness and precision of later writers and translators. Philosophy Bar Ḥiyya was a pioneer in the field of philosophy: as shown by Guttmann in refutation of David Kaufmann's assumption that the Hegyon ha-Nefesh was originally written in Arabic, Abraham bar Ḥiyya had to wrestle with the difficulties of a language not yet adapted to philosophic terminology. Whether composed especially for the Ten Days of Repentance, as Rapoport and Rosin think, or not, the object of the work was a practical, rather than a theoretical, one. It was to be a homily in four chapters on repentance based on the Hafṭarot of the Day of Atonement and Shabbat Shuvah. In it, he exhorts the reader to lead a life of purity and devotion. At the same time he does not hesitate to borrow ideas from non-Jewish philosophers, and he pays homage to the ancient Greek philosophers who, without knowledge of the Torah, arrived at certain fundamental truths regarding the beginning of things, though in an imperfect way, because both the end and the divine source of wisdom remained hidden to them. In his opinion the non-Jew may attain to as high a degree of godliness as the Jew. Matter and Form Abraham bar Ḥiyya's philosophical system is neoplatonic like that of Solomon ibn Gabirol and of the author of Torot ha-Nefesh "Reflections on the Soul" as Plotinus stated: He agrees with Plato that the soul in this world of flesh is imprisoned, while the animal soul craves for worldly pleasures, and experiences pain in foregoing them. Still, only the sensual man requires corrections of the flesh to liberate the soul from its bondage; the truly pious need not, or rather should not, undergo fasting or other forms of asceticism except such as the law has prescribed. But, precisely as man has been set apart among his fellow creatures as God's servant, so Israel is separate from the nations, the same three terms (bara, yaṣar, and asah) being used by the prophet for Israel's creation as for that of man in Genesis. Three Classes of Pious Men Like Baḥya ibn Paquda, Abraham bar Ḥiyya distinguishes three classes of pious men: such as lead a life altogether apart from worldly pursuits and devoted only to God ("these are but few in number and may in their sovereignty over the world be regarded as one individuality"). such as take part in the world's affairs, but are, as regards their conduct, ruled only by the divine laws and statutes without concerning themselves with the rest of men (these form the "holy congregation" or the "faithful city") such as lead righteous lives, but take care also that the wrong done outside of their sphere is punished and the good of all the people promoted (these form the "kingdom of justice" or the "righteous nation"). 
In accordance with these three classes of servants of God, he finds the laws of the Torah to be divided into three groups: The Decalogue, containing the fundamental laws with especial reference to the God-devoted man who, like Moses, lives solely in the service of God (the singular being used because only Moses or the one who emulates him is addressed). The first of the Ten Commandments, which he considers merely as an introductory word, accentuates the divine origin and the eternal goal of the Law; the other nine present the various laws in relation to God, to domestic life, and to society at large. Each of these three classes again refers either to the heart or sentiment, to the speech or to the action of man. The group of laws contained in the second, third, and fourth books of Moses, intended for the people of Israel during their wandering in the desert or during the Exile, to render them a holy congregation relying solely upon the special protection of God without resorting to warfare. The Deuteronomic legislation intended for the people living in an agricultural state and forming a "kingdom of justice." However, in the time of the Messianic redemption, when the evil spirit shall have vanished altogether, when the sensual man shall have become a spiritual one, and the passions that created hatred and strife shall have given way to love of man and to faithful obedience to the will of God, no other laws than those given to the God-devoted one in the Decalogue—the law written upon the heart of man—will be necessary. Men, imbued solely with love for their fellows, free from sin, will rise to the standard of the God-devoted man, and, like him, share in the eternal bliss of God. Guttmann has shown that Naḥmanides read and used the Hegyon ha-Nefesh, though occasionally differing from it; but while Saadia Gaon is elsewhere quoted by Abraham bar Ḥiyya, he never refers to him in Hegyon. Characteristic of the age is the fact that while Abraham bar Ḥiyya contended against every superstition, against the superstitions of the tequfoth, against prayers for the dead, and similar practises, he was, nevertheless, like Ibn Ezra, a firm believer in astrology. In his Megillat ha-Megalleh he calculated from Scripture the exact time for the advent of the Messiah to be the year of the world 5118. He wrote also a work on redemption, from which Isaac Abravanel appropriated many ideas. It is in defense of Judaism against Christian arguments, and also discusses Muhammed "the Insane", announcing the downfall of Islam, according to astrological calculation, for the year 4946 A.M. Mathematics Bar Ḥiyya's Ḥibbur ha-meshīḥah ve-ha-tishboret contains the first appearance of quadratic equations in the West. Bar Ḥiyya proved by the method of indivisibles the following equation for any circle: , where is the surface area, is the circumference length and is radius. The same proof appears in the commentary of the Tosafists (12th century) on the Babylonian Talmud. See also Golden age of Jewish culture in Spain Notes References Footnotes External links (PDF version) PDF biography 1070s births 1136 deaths 11th-century astrologers 12th-century astrologers 11th-century mathematicians 12th-century mathematicians 12th-century Catalan rabbis Arabic–Hebrew translators Philosophers from Catalonia Scientists from Catalonia Medieval Catalan astronomers Medieval Jewish astrologers Medieval Jewish astronomers Medieval Jewish scientists Medieval Spanish mathematicians Rabbis from Barcelona Jewish astronomers
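The circle relation cited in the Mathematics section above was lost in extraction; in modern notation the intended statement is the classical one that a circle's area is half the product of its circumference and its radius. The display below is a reconstruction of that formula, not a quotation from the Ḥibbur:

```latex
% Reconstructed formula: A is the area, C the circumference, R the radius.
A = \frac{C \cdot R}{2}
% Since C = 2\pi R, this is equivalent to the familiar A = \pi R^2.
```

The "method of indivisibles" mentioned in the text is usually described as slicing the disc into concentric circumferences and unrolling them into a triangle of base C and height R, whose area is then CR/2.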
Abraham bar Hiyya
[ "Astronomy" ]
3,089
[ "Astronomers", "Jewish astronomers" ]
628,948
https://en.wikipedia.org/wiki/Signal%20strength%20and%20readability%20report
A signal strength and readability report is a standardized format for reporting the strength of the radio signal and the readability (quality) of the radiotelephone (voice) or radiotelegraph (Morse code) signal transmitted by another station as received at the reporting station's location and by their radio station equipment. These report formats are usually designed for only one communications mode or the other, although a few are used for both telegraph and voice communications. All but one of these signal report formats involve the transmission of numbers. History As the earliest radio communication used Morse code, all radio signal reporting formats until about the 1920s were for radiotelegraph, and the early voice radio signal report formats were based on the telegraph report formats. Timeline of signal report formats The first signal report format code may have been QJS. The U.S. Navy used R and K signals starting in 1929. The QSK code was one of the twelve Q Codes listed in the 1912 International Radiotelegraph Convention Regulations, but may have been in use earlier. The QSA code was included in the Madrid Convention (Appendix 10, General Regulations) sometime prior to 1936. The Amateur radio R-S-T system signal report format currently in use was first developed in 1934. As early as 1943, the U.S and UK military published the first guidance that included the modern "Weak but readable", "Strong but distorted", and "Loud and clear" phrases. By 1951, the CCEB had published ACP 125(A) (a.k.a. SGM-1O82-51), which formalized the 1943 "Loud and clear" format. Radiotelegraph report formats Q-Code signal report formats The QSA code and QRK code are interrelated and complementary signal reporting codes for use in wireless telegraphy (Morse code). They replaced the earlier QSJ code. Currently, the QSA and QRK codes are officially defined in the ITU Radio Regulations 1990, Appendix 13: Miscellaneous Abbreviations and Signals to Be Used in Radiotelegraphy Communications Except in the Maritime Mobile Service. They are also described identically in ACP131(F),: R-S-T system Amateur radio users in the U.S. and Canada have used the R-S-T system since 1934. This system was developed by amateur radio operator Arthur W. Braaten, W2BSR. It reports the readability on a scale of 1 to 5, the signal strength on a scale of 1 to 9, and the tone of the Morse code continuous wave signal on a scale of 1 to 9. During amateur radio contests, where the rate of new contacts is paramount, contest participants often give a perfect signal report of 599 even when the signal is lower quality, because always providing the same signal format enables them to send Morse code with less thought and thus increased speed. SINPO code SINPO is an acronym for Signal, Interference, Noise, Propagation, and Overall, which was developed by the CCIR in 1951 (as C.C.I.R. Recommendation No. 251) for use in radiotelegraphy, and the standard is contained in Recommendation ITU-R Sm.1135, SINPO and SINPFEMO codes. This format is most notably used by the BBC for receiving signal reports on postcards mailed from listeners, even though that same standard specifies that the SINPFEMO code should be used for radiotelephony transmissions. SINPO is the official radiotelegraph signal reporting codes for international civil aviation and ITU-R. 
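To make the R-S-T format described above concrete, the following minimal Python sketch validates a report such as "599" and expands it into its components. The function name, the wording of the readability descriptions, and the error messages are illustrative assumptions for this example, not text from any published standard.

```python
# Illustrative sketch only: interprets an R-S-T report such as "599" (CW) or a
# two-digit R-S report such as "57" (voice). Wording is assumed, not quoted.
READABILITY = {1: "unreadable", 2: "barely readable", 3: "readable with difficulty",
               4: "readable with little difficulty", 5: "perfectly readable"}

def parse_rst(report: str) -> dict:
    if len(report) not in (2, 3) or not report.isdigit():
        raise ValueError("expected 2 digits (R-S, voice) or 3 digits (R-S-T, CW)")
    r, s = int(report[0]), int(report[1])
    if not (1 <= r <= 5 and 1 <= s <= 9):
        raise ValueError("readability must be 1-5 and strength 1-9")
    result = {"readability": r, "readability_text": READABILITY[r], "strength": s}
    if len(report) == 3:
        t = int(report[2])
        if not 1 <= t <= 9:
            raise ValueError("tone must be 1-9")
        result["tone"] = t
    return result

print(parse_rst("599"))  # the 'perfect' report commonly exchanged in contests
print(parse_rst("57"))   # a voice report with no tone digit
```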
Radiotelephony report formats R-S-T system Amateur radio operators use the R-S-T system to describe voice transmissions, dropping the last digit (Tone report) because there is no continuous wave tone to report on. SINPFEMO code An extension of SINPO code, for use in radiotelephony (voice over radio) communications, SINPFEMO is an acronym for Signal, Interference, Noise, Propagation, Frequency of Fading, Depth, Modulation, and Overall. Plain-language radio checks The move to plain-language radio communications means that number-based formats are now considered obsolete, and are replaced by plain language radio checks. These avoid the ambiguity of which number stands for which type of report and whether a 1 is considered good or bad. This format originated with the U.S. military in World War II, and is currently defined by ACP 125(G), published by the Combined Communications Electronics Board. The prowords listed below are for use when initiating and answering queries concerning signal strength and readability. Use in analog vs. digital radio transmission modes In analog radio systems, as receiving stations move away from a radio transmitting site, the signal strength decreases gradually, causing the relative noise level to increase. The signal becomes increasingly difficult to understand until it can no longer be heard as anything other than static. These reporting systems are usable for, but perhaps not completely appropriate for, rating digital signal quality. This is because digital signals have fairly consistent quality as the receiver moves away from the transmitter until reaching a threshold distance. At this threshold point, sometimes called the "digital cliff", the signal quality drops severely and is lost. This difference in reception reduces attempts to ascertain subjective signal quality to simply asking, "Can you hear me now?" or similar. The only possible response is "yes"; otherwise, there is just dead air. This sudden signal drop was also one of the primary arguments of analog proponents against moving to digital systems. However, the "five bars" displayed on many cell phones do directly correlate to the signal strength rating. Informal terminology and slang The phrase "five by five" can be used informally to mean "good signal strength" or "loud and clear". An early example of this phrase was in 1946, in an account of a wartime conversation. The phrase was used in 1954 in the novel The Blackboard Jungle. Another example of its use is from June 1965, by the crew of the Gemini IV spacecraft. This phrase apparently refers to the fact that the format consists of two digits, each ranging from one to five, with five/five being the best signal possible. Some radio users have inappropriately started using the Circuit Merit telephone line quality measurement. This format is unsuitable for radiotelegraph or radiotelephony use because it focuses on voice-to-noise ratios for judging whether a particular telephone line is suitable for commercial (paying customer) use, and does not include separate reports for signal strength and voice quality. See also Mean opinion score Perceptual Evaluation of Speech Quality (PESQ) Perceptual Objective Listening Quality Analysis (POLQA) Procedure word References External links Ham Radio RST Signal Reporting System for CW Operation, by Charlie Bautsch, W5AM itu.int: SM.1135 - Sinpo and sinpfemo codes - ITU English phrases Operating signals Quality control Nondestructive testing
Signal strength and readability report
[ "Materials_science" ]
1,436
[ "Nondestructive testing", "Materials testing" ]
628,991
https://en.wikipedia.org/wiki/Adriaan%20van%20Roomen
Adriaan van Roomen (29 September 1561 – 4 May 1615), also known as Adrianus Romanus, was a mathematician, professor of medicine and medical astrologer from the Duchy of Brabant in the Habsburg Netherlands who was active throughout Central Europe in the late 16th and early 17th centuries. As a mathematician he worked in algebra, trigonometry and geometry, and on the decimal expansion of pi. He solved the Problem of Apollonius using a new method that involved intersecting hyperbolas. He also wrote on the Gregorian calendar reform. Life Van Roomen was born in Leuven, the son of Adriaan Van Roomen and Maria Van Den Daele. He was educated partly in Leuven and partly at the Jesuit College in Cologne, and also attended the University of Cologne, where he began his study of medicine. He also briefly studied medicine at Leuven University. Roomen was professor of mathematics and medicine at Louvain from 1586 to 1592. He met Kepler, and discussed with François Viète two questions about equations and tangencies. He then spent some time in Italy, particularly with Clavius in Rome in 1585. His publication of 1595, Parvum theatrum urbium, contained Latin verse on the cities of Italy (possibly written by Thomas Edwards). In June 1593 Van Roomen became the inaugural professor of medicine at the newly refounded University of Würzburg. He was also appointed physician in ordinary to the court of Rudolf II. From around 1595 to 1603 he produced calendars, almanacs and prognostications published under the patronage of Julius Echter, prince-bishop of Würzburg. At the same time, he served as mathematician of the king of Poland and became famous for the computation of the value of pi to sixteen decimals, surpassing François Viète, who had arrived at ten digits; a sketch of the classical polygon method on which such computations were based follows the list of works below. After being widowed he was ordained to the priesthood in 1604 and on 8 October 1608 was installed as a canon of the collegiate church of St John the Evangelist in Würzburg. His Mathesis Polemica, published in Frankfurt in 1605, explained the military applications of mathematics. In June 1610 he was in Prague, after which he travelled to Poland at the invitation of Jan Zamoyski to give public lectures on mathematics at Zamość in Red Ruthenia. He made the return journey via Hungary, arriving back in Würzburg at the end of 1611. Struggling with health problems, Van Roomen undertook a journey to Spa to take the waters but died en route at Mainz in the arms of his son, who was travelling with him. See also The Adriaan van Roomen affair Zamojski Academy Works Ouranographia sive caeli descriptio (Leuven, Joannes Masius, 1591) Ideae mathematicae pars prima, sive methodus polygonorum (Antwerp, Jan van Keerbergen, 1593) Canon triangulorum rectangulorum, tam sphaericorum quam rectilineorum, methodo brevissima ([Leuven], 1593) Supputatio Ecclesiastica Secundum novam et antiquam Calendarii rationem (Würzburg, 1595) Parvum Theatrum Urbium (Frankfurt, Nicolaus Bassalus, 1595). 
Almanack Wurztburger Bisthumbs, awff das Jar nach Christi unsers Seligmacher Geburt 1596 (Würzburg, Georgius Fleischmann, 1596) In Archimedis circuli dimensionem expositio et analysis (Würzburg, 1597) Newer und Alter Schreib Calender auf das M.D.XCVIII Jar (Würzburg, Georgius Fleischmann, 1598) Newer und Alter Screib Calender Auff das MDXCIX Jar (Würzburg, Georgius Fleischmann, 1599) Almanach Würtzburger Bisthumbs auff das Jar nach der heilsamen Geburt Jesu Christi MDC (Würzburg, Georgius Fleischmann, 1600) Prognosticon Astrologicum oder Teutsche Practica auff das Jar nach der allein selichmachenden Geburt Unsers Heylands Jesu Christi M.DC. (Würzburg, Georgius Fleischmann, 1600) Newer und Alter Schreib Calender auff das M.DCI. Jar (Würzburg, Georgius Fleischmann, 1601) Prognosticon Astrologicum oder Teutsche Practica auff das Jar nach der Glorwürdigen Geburt Jesu Christi M.DCI. (Würzburg, Georgius Fleischmann, 1601) Almanach Wurtzburger Bisthumbs auff dass Jar nach der heilsamen Geburt Jesu Christi M.DC.II. (Würzburg, Georgius Fleischmann, 1602) Newer und Alter Schreib Calender auff das M.DC.II. Jar (Würzburg, Georgius Fleischmann, 1602) Prognosticum astrologicum oder Teutsche Practica auff das M.DC.II. Jar (Würzburg, Georgius Fleischmann, 1602) Universae mathesis idea (Würzburg, Georgius Fleischmann, 1602) Chordarum arcubus circuli primariis (Würzburg, Georgius Fleischmann, 1602) Newer und Alter Schreib Calender auff das M.DC.III. Jar (Würzburg, Georgius Fleischmann, 1603) Prognosticon Astrologicum oder Teutsche Practica auff das Jar ... M.DC.III. (Würzburg, Georgius Fleischmann, 1603) Arithmetica quatuor instrumenta nova Methodo ac forma patente exhibita (Würzburg, 1603) Mathesis Polemica (Frankfurt, 1605) Speculum Astronomicum sive Organum Forma Mappae Expressum (Leuven, Joannes Masius, 1606) Canon triangulorum sphaericorum (Mainz, Joannis Albini, 1609) Pyrotechnia, hoc est, de ignibus festivis, jocosis artificialibus et seriis, variisque eorum structuris libri duo (Frankfurt, Palthenius, 1611) References External links 1561 births 1615 deaths 16th-century Dutch mathematicians 17th-century Dutch mathematicians Pi-related people Scientists from Leuven Catholic clergy scientists Astronomers from the Spanish Netherlands
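The biography above notes that van Roomen computed pi to sixteen decimals, reportedly by the classical Archimedean device of trapping the circle between inscribed and circumscribed regular polygons with a huge number of sides. The Python sketch below is a modern reconstruction of that polygon-doubling idea, not of van Roomen's own calculation; the function name and the choice of Python's decimal module are assumptions made for the illustration.

```python
from decimal import Decimal, getcontext

def polygon_pi(doublings: int = 30, precision: int = 40):
    """Bound pi between the semiperimeters of inscribed and circumscribed regular
    polygons of a unit circle, starting from hexagons and repeatedly doubling the
    number of sides (the classical Archimedean scheme)."""
    getcontext().prec = precision
    a = Decimal(3)                      # semiperimeter of the inscribed hexagon
    b = Decimal(2) * Decimal(3).sqrt()  # semiperimeter of the circumscribed hexagon
    for _ in range(doublings):
        b = (2 * a * b) / (a + b)       # harmonic mean: circumscribed 2n-gon
        a = (a * b).sqrt()              # geometric mean: inscribed 2n-gon
    return a, b                         # lower and upper bounds on pi

low, high = polygon_pi(30)  # hexagon doubled 30 times: 6 * 2**30 sides
print(low)   # lower bound on pi, already correct well past 16 decimal places
print(high)  # upper bound on pi
```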
Adriaan van Roomen
[ "Mathematics" ]
1,407
[ "Pi-related people", "Pi" ]
629,026
https://en.wikipedia.org/wiki/Meridian%20%28astronomy%29
In astronomy, the meridian is the great circle passing through the celestial poles, as well as the zenith and nadir of an observer's location. Consequently, it contains also the north and south points on the horizon, and it is perpendicular to the celestial equator and horizon. Meridians, celestial and geographical, are determined by the pencil of planes passing through the Earth's rotation axis. For a location not on this axis, there is a unique meridian plane in this axial-pencil through that location. The intersection of this plane with Earth's surface defines two geographical meridians (either one east and one west of the prime meridian, or else the prime meridian itself and its anti-meridian), and the intersection of the plane with the celestial sphere is the celestial meridian for that location and time. There are several ways to divide the meridian into semicircles. In one approach, the observer's upper meridian extends from a celestial pole and passes through the zenith to contact the opposite pole, while the lower meridian passes through the nadir to contact both poles at the opposite ends. In another approach known as the horizontal coordinate system, the meridian is divided into the local meridian, the semicircle that contains the observer's zenith and the north and south points of their horizon, and the opposite semicircle, which contains the nadir and the north and south points of their horizon. On any given (sidereal) day/night, a celestial object will appear to drift across, or transit, the observer's upper meridian as Earth rotates, since the meridian is fixed to the local horizon. At culmination, the object contacts the upper meridian and reaches its highest point in the sky. An object's right ascension and the local sidereal time can be used to determine the time of its culmination (see hour angle). The term meridian comes from the Latin meridies, which means both "midday" and "south", as the celestial equator appears to tilt southward from the Northern Hemisphere. See also Meridian (geography) Prime meridian (planets) Prime vertical, the vertical circle perpendicular to a meridian Longitude (planets) References Millar, William (2006). The Amateur Astronomer's Introduction to the Celestial Sphere. Cambridge University Press. Local Horizon and Meridian. (A Modern Almagest - An Updated Version of Ptolemy’s Model of the Solar System by Richard Fitzpatrick, Professor of Physics, The University of Texas at Austin) Astronomical coordinate systems Circles
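The article above points out that an object's right ascension and the local sidereal time determine when it transits the upper meridian. The short Python sketch below makes that relationship explicit (hour angle = local sidereal time minus right ascension, with culmination at hour angle zero); the function names and the sample numbers are assumptions made for the illustration.

```python
# Minimal sketch, assuming right ascension and local sidereal time are given in
# hours (0-24), as is customary. This is not a full ephemeris calculation.
def hour_angle(lst_hours: float, ra_hours: float) -> float:
    """Local hour angle of the object, in hours, normalised to [-12, +12)."""
    return (lst_hours - ra_hours + 12.0) % 24.0 - 12.0

def hours_until_culmination(lst_hours: float, ra_hours: float) -> float:
    """Sidereal hours until the object next crosses the observer's upper meridian."""
    return (-hour_angle(lst_hours, ra_hours)) % 24.0

# Example: an object at right ascension 6.75 h observed when the local sidereal
# time is 4.0 h is still east of the meridian and will transit 2.75 sidereal
# hours later.
print(hour_angle(4.0, 6.75))               # -2.75
print(hours_until_culmination(4.0, 6.75))  # 2.75
```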
Meridian (astronomy)
[ "Astronomy", "Mathematics" ]
504
[ "Astronomical coordinate systems", "Circles", "Pi", "Coordinate systems" ]
629,028
https://en.wikipedia.org/wiki/Meridian%20%28geography%29
In geography and geodesy, a meridian is the locus connecting points of equal longitude, which is the angle (in degrees or other units) east or west of a given prime meridian (currently, the IERS Reference Meridian). In other words, it is a coordinate line for longitudes, a line of longitude. The position of a point along the meridian at a given longitude is given by its latitude, measured in angular degrees north or south of the Equator. On a Mercator projection or on a Gall-Peters projection, each meridian is perpendicular to all circles of latitude. Assuming a spherical Earth, a meridian is a great semicircle on Earth's surface. Adopting instead a spheroidal or ellipsoid model of Earth, the meridian is half of a north-south great ellipse. The length of a meridian is twice the length of an Earth quadrant, equal to on a modern ellipsoid (WGS 84). Pre-Greenwich The first prime meridian was set by Eratosthenes in 200 BC. This prime meridian was used to provide measurement of the earth, but had many problems because of the lack of latitude measurement. Many years later around the 19th century there were still concerns of the prime meridian. Multiple locations for the geographical meridian meant that there was inconsistency, because each country had their own guidelines for where the prime meridian was located. Etymology The term meridian comes from the Latin meridies, meaning "midday"; the subsolar point passes through a given meridian at solar noon, midway between the times of sunrise and sunset on that meridian. Likewise, the Sun crosses the celestial meridian at the same time. The same Latin stem gives rise to the terms a.m. (ante meridiem) and p.m. (post meridiem) used to disambiguate hours of the day when utilizing the 12-hour clock. International Meridian Conference Because of a growing international economy, there was a demand for a set international prime meridian to make it easier for worldwide traveling which would, in turn, enhance international trading across countries. As a result, a Conference was held in 1884, in Washington, D.C. Twenty-six countries were present at the International Meridian Conference to vote on an international prime meridian. Ultimately the outcome was as follows: there would be only one prime meridian, the prime meridian was to cross and pass at Greenwich (which was the 0°), there would be two longitude direction up to 180° (east being plus and west being minus), there will be a universal day, and the day begins at the mean midnight of the initial meridian. Geographic Toward the ending of the 12th century there were two main locations that were acknowledged as the geographic location of the meridian, France and Britain. These two locations often conflicted and a settlement was reached only after there was an International Meridian Conference held, in which Greenwich was recognized as the 0° location. The meridian through Greenwich (inside Greenwich Park), England, called the Prime Meridian, was set at zero degrees of longitude, while other meridians were defined by the angle at the center of the Earth between where it and the prime meridian cross the equator. As there are 360 degrees in a circle, the meridian on the opposite side of the Earth from Greenwich, the antimeridian, forms the other half of a circle with the one through Greenwich, and is at 180° longitude near the International Date Line (with land mass and island deviations for boundary reasons). 
The meridians from Greenwich (0°) west to the antimeridian (180°) define the Western Hemisphere and the meridians from Greenwich (0°) east to the antimeridian (180°) define the Eastern Hemisphere. Most maps show the lines of longitude. The position of the prime meridian has changed a few times throughout history, mainly due to the transit observatory being built next door to the previous one (to maintain the service to shipping). Such changes had no significant practical effect. Historically, the average error in the determination of longitude was much larger than the change in position. The adoption of World Geodetic System 84" (WGS84) as the positioning system has moved the geodetic prime meridian 102.478 metres east of its last astronomic position (measured at Greenwich). The position of the current geodetic prime meridian is not identified at all by any kind of sign or marking at Greenwich (as the older astronomic position was), but can be located using a GPS receiver. Effect of Prime Meridian (Greenwich Time) It was in the best interests of the nations to agree to one standard meridian to benefit their fast growing economy and production. The disorganized system they had before was not sufficient for their increasing mobility. The coach services in England had erratic timing before the GWT. United States and Canada were also improving their railroad system and needed a standard time as well. With a standard meridian, stage coach and trains were able to be more efficient. The argument of which meridian is more scientific was set aside in order to find the most convenient for practical reasons. They were also able to agree that the universal day was going to be the mean solar day. They agreed that the days would begin at midnight and the universal day would not impact the use of local time. A report was submitted to the "Transactions of the Royal Society of Canada", dated 10 May 1894; on the "Unification of the Astronomical, Civil and Nautical Days"; which stated that: civil day- begins at midnight and ends at midnight following, astronomical day- begins at noon of civil day and continue until following noon, and nautical day- concludes at noon of civil day, starting at preceding noon. Magnetic meridian The magnetic meridian is an equivalent imaginary line connecting the magnetic south and north poles and can be taken as the horizontal component of magnetic force lines along the surface of the Earth. Therefore, a compass needle will be parallel to the magnetic meridian. However, a compass needle will not be steady in the magnetic meridian, because of the longitude from east to west being complete geodesic. The angle between the magnetic and the true meridian is the magnetic declination, which is relevant for navigating with a compass. Navigators were able to use the azimuth (the horizontal angle or direction of a compass bearing) of the rising and setting Sun to measure the magnetic variation (difference between magnetic and true north). True meridian The true meridian is the chord that goes from one pole to the other, passing through the observer, and is contrasted with the magnetic meridian, which goes through the magnetic poles and the observer. The true meridian can be found by careful astronomical observations, and the magnetic meridian is simply parallel to the compass needle. The arithmetic difference between the true and magnetic meridian is called the magnetic declination, which is important for the calibration of compasses. Henry D. 
Thoreau distinguished this true meridian from the magnetic meridian in his surveying work, preferring the true meridian because his compass readings varied by a few degrees. Having noted the sight line of the true meridian from his family's house to the depot, he could check the declination of his compass against it before and after surveying throughout the day, and he recorded this variation. Meridian passage The meridian passage is the moment when a celestial object passes the meridian of longitude of the observer. At this point, the celestial object is at its highest point. The two times at which the Sun passes the same altitude, once while rising and once while setting, can be averaged to give the time of meridian passage. Navigators used the Sun's declination and its altitude at local meridian passage to calculate their latitude with the formula: Latitude = (90° – noon altitude + declination) The declinations of major stars are their angles north and south of the celestial equator. The meridian passage will not occur exactly at 12 hours because of the eccentricity of Earth's orbit (see Equation of time). Standard meridian A standard meridian is a meridian used for determining standard time. For instance, the 30th meridian east (UTC+02:00) is the standard meridian for Eastern European Time. Since the adoption of time zones – as opposed to local mean time or solar time – in the late 19th century and early 20th century, most countries have adopted the standard time of one of the 24 meridians closest to their geographical position, as decided by the International Meridian Conference in 1884. A few time zones, however, are offset by an additional 30 or 45 minutes, such as in the Chatham Islands, South Australia and Nepal. Measurement of Earth rotation Instruments for measuring Earth's rotation rely on the ability to measure the longitude and latitude of the Earth. These instruments were also typically affected by local gravity, which paired well with existing technologies such as the magnetic meridian. See also Meridian (astronomy) Meridian arc Prime meridian Meridian Centre Principal meridian Public Land Survey System, United States Dominion Land Survey, Canada References External links The Principal Meridian Project (US) Resources page of the U.S. Department of the Interior, Bureau of Land Management Surveying
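A worked instance of the noon-sight formula quoted above, for the common case of a northern-hemisphere observer with the Sun culminating to the south; the numbers are invented for illustration:

```latex
% Observed altitude of the Sun at meridian passage: 50 degrees.
% Sun's declination on that date: +10 degrees (north).
\text{Latitude} = 90^{\circ} - 50^{\circ} + 10^{\circ} = 50^{\circ}\ \text{N}
```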
Meridian (geography)
[ "Engineering" ]
1,901
[ "Surveying", "Civil engineering" ]
629,186
https://en.wikipedia.org/wiki/Carnosine
Carnosine (beta-alanyl-L-histidine) is a dipeptide molecule, made up of the amino acids beta-alanine and histidine. It is highly concentrated in muscle and brain tissues. Carnosine was discovered by Russian chemist Vladimir Gulevich. Carnosine is naturally produced by the body in the liver from beta-alanine and histidine. Like carnitine, carnosine is composed of the root word carn, meaning "flesh", alluding to its prevalence in meat. There are no plant-based sources of carnosine. Carnosine is readily available as a synthetic nutritional supplement. Biosynthesis Carnosine is synthesized within the body from beta-alanine and histidine. Beta-alanine is a product of pyrimidine catabolism and histidine is an essential amino acid. Since beta-alanine is the limiting substrate, supplementing just beta-alanine effectively increases the intramuscular concentration of carnosine. Physiological effects pH buffer Carnosine has a pKa value of 6.83, making it a good buffer for the pH range of animal muscles. Since beta-alanine is not incorporated into proteins, carnosine can be stored at relatively high concentrations (millimolar). Occurring at 17–25 mmol/kg (dry muscle), carnosine (β-alanyl-L-histidine) is an important intramuscular buffer, constituting 10-20% of the total buffering capacity in type I and II muscle fibres. Anti-oxidant Carnosine has been shown to scavenge reactive oxygen species (ROS) as well as alpha-beta unsaturated aldehydes formed from peroxidation of cell membrane fatty acids during oxidative stress. It also buffers pH in muscle cells, and acts as a neurotransmitter in the brain. It is also a zwitterion, a neutral molecule with a positive and negative end. Antiglycating Carnosine acts as an antiglycating agent, reducing the rate of formation of advanced glycation end-products (substances that can be a factor in the development or worsening of many degenerative diseases, such as diabetes, atherosclerosis, chronic kidney failure, and Alzheimer's disease), and ultimately reducing development of atherosclerotic plaque build-up. Geroprotective Carnosine is considered as a geroprotector. Carnosine can increase the Hayflick limit in human fibroblasts, as well as appearing to reduce the telomere shortening rate. Carnosine may also slow aging through its anti-glycating properties (chronic glycolyating is speculated to accelerate aging). Other Carnosine can chelate divalent metal ions. It has been suggested that binding Ca2+ may displace protons, thereby providing a link between Ca2+ and H+ buffering. However, there is still controversy as to how much Ca2+ is bound to carnosine under physiological conditions. Research has demonstrated a positive association between muscle tissue carnosine concentration and exercise performance. β-Alanine supplementation is thought to increase exercise performance by promoting carnosine production in muscle. Exercise has conversely been found to increase muscle carnosine concentrations, and muscle carnosine content is higher in athletes engaging in anaerobic exercise. See also Acetylcarnosine, a similar molecule used to treat lens cataracts Anserine, another dipeptide antioxidant (found in birds) Carnosine synthase, an enzyme that helps carnosine production Carnosinemia, a disease of excess carnosine due to an enzyme defect/deficiency References Dipeptides Anti-aging substances Dietary supplements
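The claim above that a pKa of 6.83 makes carnosine a good buffer for the pH range of muscle can be made concrete with the Henderson–Hasselbalch equation. The relation itself is standard; the resting-muscle pH of about 7.0 used in the example is an illustrative assumption rather than a figure taken from the cited studies.

```latex
% Henderson–Hasselbalch with carnosine's imidazole pKa of 6.83 at pH 7.0:
\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{base}]}{[\mathrm{acid}]}
\quad\Rightarrow\quad
\frac{[\mathrm{base}]}{[\mathrm{acid}]} = 10^{\,7.0 - 6.83} \approx 1.5
% About 60% of the groups are deprotonated at pH 7.0, so the carnosine pool can
% take up protons released during exercise without a large change in pH.
```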
Carnosine
[ "Chemistry", "Biology" ]
799
[ "Senescence", "Anti-aging substances" ]
629,203
https://en.wikipedia.org/wiki/Stefan%20Mazurkiewicz
Stefan Mazurkiewicz (25 September 1888 – 19 June 1945) was a Polish mathematician who worked in mathematical analysis, topology, and probability. He was a student of Wacław Sierpiński and a member of the Polish Academy of Learning (PAU). His students included Karol Borsuk, Bronisław Knaster, Kazimierz Kuratowski, Stanisław Saks, and Antoni Zygmund. For a time Mazurkiewicz was a professor at the University of Paris; however, he spent most of his career as a professor at the University of Warsaw. The Hahn–Mazurkiewicz theorem, a basic result on curves prompted by the phenomenon of space-filling curves, is named for Mazurkiewicz and Hans Hahn. His 1935 paper Sur l'existence des continus indécomposables is often cited as one of the most elegant pieces of work in point-set topology. During the Polish–Soviet War (1919–21), Mazurkiewicz as early as 1919 broke the most common Russian cipher for the Polish General Staff's cryptological agency. Thanks to this, orders issued by Soviet commander Mikhail Tukhachevsky's staff were known to Polish Army leaders. This contributed substantially, perhaps decisively, to the Polish victory at the critical Battle of Warsaw and possibly to Poland's survival as an independent country. See also Biuro Szyfrów List of Polish mathematicians External links 1888 births 1945 deaths Warsaw School of Mathematics People from Warsaw Governorate Polish cryptographers Topologists Academic staff of the University of Paris Academic staff of the University of Warsaw Mathematical analysts Cipher Bureau (Poland) University of Warsaw alumni
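For readers unfamiliar with the result mentioned above, the usual modern statement of the Hahn–Mazurkiewicz theorem is given below; this is a paraphrase of the standard textbook formulation, not a quotation from Mazurkiewicz's or Hahn's own papers.

```latex
% Hahn–Mazurkiewicz theorem (standard modern formulation): a nonempty Hausdorff
% space X is a continuous image of the unit interval if and only if it is a
% compact, connected, locally connected metric space (a Peano continuum).
\exists\, f\colon [0,1] \to X \ \text{continuous and onto}
\iff
X \ \text{is a compact, connected, locally connected metric space}
```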
Stefan Mazurkiewicz
[ "Mathematics" ]
342
[ "Topologists", "Mathematical analysis", "Topology", "Mathematical analysts" ]
629,216
https://en.wikipedia.org/wiki/Purr
A purr or whirr is a tonal fluttering sound made by some species of felids, including both larger, outdoor cats and the domestic cat (Felis catus), as well as two species of genets. It varies in loudness and tone among species and in the same animal. In smaller and domestic cats it is known as a purr, while in larger felids, such as the cheetah, it is called a whirr. Although true purring is exclusive to felids and viverrids, other animals such as raccoons produce vocalizations that sound similar to true purring. Animals that produce purr-like sounds include mongooses, kangaroos, wallabies, wallaroos, badgers, rabbits and guinea pigs. Animals purr for a variety of reasons, including to express happiness or fear, and as a defense mechanism. It has also been shown that cats purr to manage pain and soothe themselves. Purring is a soft buzzing sound, similar to a rolled 'r' in human speech, with a fundamental frequency of around 25 Hz. This sound occurs with noticeable vibrations on the surface of the body, varies in a rhythmic pattern during breathing and occurs continuously during inhalation and exhalation. The intensity and length of the purr can also vary depending on the level of arousal of the animal. Mechanism The mechanism by which cats purr is an object of speculation, with different hypotheses proposed. An early idea was that purring is a hemodynamic process where sound is produced as the blood runs through the thorax. There is a unique "neural oscillator" in the cat's brain of uncertain significance. Although the mechanism has not yet been fully explained, recent studies have inferred it could be the result of oscillatory mechanisms in the central nervous system.  Studies have also shown that purring can be caused through electrically stimulating the infundibular region of the cat's brain, suggesting central control. Vocal folds/laryngeal muscles One hypothesis, backed by electromyographic studies, is that cats produce the purring noise by using the vocal folds or the muscles of the larynx to alternately dilate and constrict the glottis rapidly, causing air vibrations during inhalation and exhalation. Combined with the steady inhalation and exhalation of air as the cat breathes, a purring noise is produced with strong harmonics. Degree of hyoid ossification No cat can both purr and roar. The subdivision of the Felidae into "purring cats" (Felinae) on one hand and "roaring cats" (Pantherinae) on the other goes back to Owen and was definitively introduced by Pocock, based on whether the hyoid bone of the larynx is incompletely ("roarers") or completely ("purrers") ossified. However, Weissengruber et al. argued that the ability of a cat species to purr is not affected by the anatomy of its hyoid. The "roaring cats" (lion, Panthera leo; tiger, P. tigris; jaguar, P. onca; leopard, P. pardus) have an incompletely ossified hyoid, which, according to this hypothesis, enables them to roar but not to purr. However, the snow leopard (Uncia uncia, or P. uncia), as the fifth felid species with an incompletely ossified hyoid, purrs. All remaining species of the family Felidae ("purring cats") have a completely ossified hyoid, which enables them to purr but not to roar. Based on a technical acoustic definition of roaring, the presence of this vocalization type depends on specific characteristics of the vocal folds and an elongated vocal tract, which is rendered possible by an incompletely ossified hyoid. 
Frequency, amplitude, and respiratory variation Domestic cats purr at a frequency of 20 to 30 vibrations per second. Eklund, Peters & Duthie, comparing purring in a cheetah (Acinonyx jubatus) and a domestic cat (Felis catus) found that the cheetah purred with an average frequency of 20.87 Hz (egressive phases) and 18.32 Hz (ingressive phases), while the much smaller domestic cat purred with an average frequency of 21.98 Hz (egressive phases) and 23.24 Hz (ingressive phases). Schötz & Eklund studied purring in four domestic cats and found that the fundamental frequency varied between 20.94 and 27.21 Hz for egressive phases and between 23.0 and 26.09 Hz for ingressive phases. Schötz & Eklund also observed considerable variation between the four cats as regards relative amplitude, duration and frequency between egressive and ingressive phases, but that this variation occurred within the same general range. In a follow-up study of purring in four adult cheetahs, Eklund, Peters, Weise & Munro found that egressive phases were longer than ingressive phases in all four cheetahs. Likewise, ingressive phases had a lower frequency than egressive phases in all four cheetahs. Mean frequency were between 19.3 Hz and 20.5 Hz in ingressive phases, and between 21.9 Hz and 23.4 Hz in egressive phases. Moreover, the amplitude was louder in the egressive phases in four cheetahs. Eklund & Peters compared purring in adult, subadult and juvenile cheetahs and reported that while there was considerable variation across most of the parameters analyzed (amplitude, phase duration, cycles per phase and fundamental frequency) – mainly attributable to degree of relaxation/agitation in the animals resting or playing– previously reported observations that ingressive phases tend to be lower in frequency were largely confirmed. There were no major differences in these parameters as a function of age. Purpose In domestic cats, many signals that occur when interacting with humans seem to originate from when the animal was dependent on the mother. Cats have been observed to purr for most of their lifespan, starting from when they were young and suckling from their mother. Purring may be a signaling mechanism of reassurance between mother cats and nursing kittens. Post-nursing cats often purr as a sign of contentment when being petted, becoming relaxed or eating. Some purring may be a signal to another animal that the purring cat does not pose a threat. Cats have been shown to have different types of purrs depending on situations. For example, purring appears to be a way for cats to signal their caretakers for food. This purring has a high-frequency component not present in other purrs. These are called solicitation purrs (when the cat is seeking a result) and non-solicitation purrs, and the two are distinguishable to humans. In a study, 50 humans were subjected to playbacks of purrs recorded in solicitation and non-solicitation situations at the same amplitude. Humans regularly judged the solicitation purrs as less pleasant and more urgent than the non-solicitation purrs. This variety of purring seems to be found more frequently in cats in a one-to-one relationship with a caretaker. Similarities have been drawn between an infant's cry and the isolation cry of domestic cats. The high-frequency aspect of the purr may subtly exploit humans' sensitivity to these cries. Using sensory biases in communication between species provides cats with a productive means of improving the care that they receive. 
Cats often purr when distressed or in pain, such as during the three stages of labor. In the first stage, the uterus begins to contract, the cervix relaxes, the water breaks and the cat begins to purr. The female cat (queen) will purr and socialize during the first stage of labor. The purring is thought to be a self-relaxation technique. See also Cat communication Kneading References Further reading Stogdale L, Delack JB. Feline purring. Compendium on Continuing Education for the Practicing Veterinarian 1985; 7: 551–553. Reprinted in: Voith VL, Borchelt PL (eds). Readings in Companion Animal Behavior. Trenton: Veterinary Learning Systems, 1996; 269–270. External links Why do cats purr? Why do Cheshire cats grin? Robert Eklund's Purring Page Animal sounds Cat behavior Ethology
Purr
[ "Biology" ]
1,811
[ "Behavioural sciences", "Ethology", "Behavior", "Animal sounds" ]
629,218
https://en.wikipedia.org/wiki/Sodium%20hexametaphosphate
Sodium hexametaphosphate (SHMP) is a salt of composition . Sodium hexametaphosphate of commerce is typically a mixture of metaphosphates (empirical formula: NaPO3), of which the hexamer is one, and is usually the compound referred to by this name. Such a mixture is more correctly termed sodium polymetaphosphate. They are white solids that dissolve in water. Uses SHMP is used as a sequestrant and has applications within a wide variety of industries, including as a food additive in which it is used under the E number E452i. Sodium carbonate is sometimes added to SHMP to raise the pH to 8.0–8.6, which produces a number of SHMP products used for water softening and detergents. A significant use for sodium hexametaphosphate is as a deflocculant in the production of clay-based ceramic particles. It is also used as a dispersing agent to break down clay and other soil types for soil texture assessment. It is used as an active ingredient in toothpastes as an anti-staining and tartar prevention ingredient. Food additive As a food additive, SHMP is used as an emulsifier. Artificial maple syrup, canned milk, cheese powders and dips, imitation cheese, whipped topping, packaged egg whites, roast beef, fish fillets, fruit jelly, frozen desserts, salad dressing, herring, breakfast cereal, ice cream, beer, and bottled drinks, among other foods, can contain SHMP. Water softener salt SHMP is used in Diamond Crystal brand Bright & Soft Salt Pellets for water softeners in a concentration of 0.03%. It is the only additive other than sodium chloride. Preparation SHMP is prepared by heating monosodium orthophosphate to generate sodium acid pyrophosphate: Subsequently, the pyrophosphate is heated to give the corresponding sodium hexametaphosphate: followed by rapid cooling. Reactions SHMP hydrolyzes in aqueous solution, particularly under acidic conditions and/or heat, to sodium trimetaphosphate and sodium orthophosphate. History Sodium hexametaphosphate is the alkali salt of one of the series of polymetaphosphoric acids (acids formed by the polymerization of phosphate groups). Hexametaphosphoric acid was first made in 1825 by the German chemist Johann Frederich Philipp Engelhart (1797-1853). For his doctoral thesis, Engelhart intended to determine whether iron was responsible for the red color of blood. In order to purify his blood samples, Engelhart had found that he could coagulate the blood serum's albumin (dissolved proteins) by treating the blood with phosphoric acid. This contradicted the findings of the famous Swedish chemist Jöns Jacob Berzelius, who had stated that phosphoric acid did not coagulate water-soluble proteins such as egg white. Berzelius and Engelhart collaborated with the intention of resolving the contradiction; they concluded that Engelhart had produced a new form of phosphoric acid simply by burning phosphorus in air and then dissolving the resulting substance in water. However they did not determine the new acid's composition. That analysis was accomplished in 1833 by the Scottish chemist Thomas Graham, who named the sodium salt of the new acid "metaphosphate of soda". Graham's findings were confirmed by the German chemists Justus von Liebig and Theodor Fleitmann. In 1849 Fleitmann coined the name "hexametaphosphoric acid". By 1956, chromatographic analysis of hydrolysates of Graham's salt (sodium polyphosphate) indicated the presence of cyclic anions containing more than four phosphate groups; these findings were confirmed in 1961. 
In 1963, the German chemists Erich Thilo and Ulrich Schülke succeeded in preparing sodium hexametaphosphate by heating anhydrous sodium trimetaphosphate. Safety Sodium phosphates are recognized to have low acute oral toxicity. SHMP concentrations not exceeding 10,000 mg/L or mg/kg are considered protective levels by the EFSA and US FDA. Extreme concentrations of this salt may cause acute side effects from excessive blood serum concentrations of sodium, such as: “irregular pulse, bradycardia, and hypocalcemia." References External links Occupational Health and Safety Agency for Healthcare in British Columbia Material Safety Data Sheet Sodium compounds Metaphosphates Water treatment Photographic chemicals Glass compositions Twelve-membered rings Cyclic phosphates
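As a quick arithmetic companion to the hexamer composition given at the top of the article, the short sketch below computes the molar mass and phosphorus content of the idealised formula Na6P6O18. It is illustrative only and uses rounded standard atomic masses; commercial SHMP is a polymetaphosphate mixture, so real assay values will differ.

```python
# Molar mass and phosphorus content of the idealised hexamer Na6P6O18
# (rounded atomic masses; an illustrative check, not an assay method).
ATOMIC_MASS = {"Na": 22.99, "P": 30.97, "O": 16.00}

def molar_mass(counts):
    """Sum atomic masses for a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

shmp = {"Na": 6, "P": 6, "O": 18}
m = molar_mass(shmp)                                  # about 611.8 g/mol
p_fraction = ATOMIC_MASS["P"] * 6 / m                 # about 30% phosphorus by mass
p2o5_equiv = (2 * ATOMIC_MASS["P"] + 5 * ATOMIC_MASS["O"]) * 3 / m  # about 70% as P2O5

print(f"molar mass: {m:.1f} g/mol")
print(f"P content: {p_fraction:.1%}, P2O5 equivalent: {p2o5_equiv:.1%}")
```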
Sodium hexametaphosphate
[ "Chemistry", "Engineering", "Environmental_science" ]
966
[ "Glass chemistry", "Water treatment", "Glass compositions", "Water pollution", "Environmental engineering", "Water technology" ]
629,451
https://en.wikipedia.org/wiki/Poste%20restante
(, "waiting mail"), also known as general delivery in North American English, is a service where the post office holds the mail until the recipient calls for it. It is a common destination for mail for people who are visiting a particular location and have no need, or no way, of having mail delivered directly to their place of residence at that time. By country Argentina The Poste Restante service is run by the national postal company, Correo Argentino. Letter or parcels can be addressed with either the recipient's full name or their ID card number: [Recipient's full name or Recipient's ID number] Poste Restante [Post Office's Postal Code] [Office name] [(City)] There is no specified time period for the recipient to pick up the letter. Upon retiring, the recipient must provide proof of identity and pay a charge equivalent to a simple mail service. Australia (Counter Delivery) is a long-established service within Australia run by the national postal service, Australia Post, which allows one's post to be sent to a city-centre holding place. It will be held for up to 1 month and can be collected by providing proof of identity, such as a passport. For example, for the Adelaide GPO (General Post Office, i.e. the main post office in the city of Adelaide) one would address a letter or parcel thus: Recipient's Full Name CARE OF POST OFFICE GPO Adelaide SA 5001 Australia The recipient would then need to go to the Adelaide GPO at 10 Franklin St to collect it when it was due to arrive or shortly afterwards. The approved abbreviation is "CARE PO". For outward bound international mail, the form "POSTE RESTANTE" is recognised. Mail that is not addressed in the approved form may be delayed for hand sorting. Austria In Austria, can be used with a name or an agreed passphrase under which the envelope can be collected, followed by the indication (in capitals). A tax of 1€ is collected from the addressee. Belgium Bpost uses the term (French), (Dutch) or (German) which needs to be indicated on the post together with the name, optionally the first name, full name of the post office, postal code and city. The recipient has one month to collect the post, after which it will be sent back to the sender. A special tax applies valued at the price of one stamp of value 2; this price is €1.74. This tax is usually paid by the recipient but can also be paid by the sender. Name of the recipient Poste Restante Full name of the post office B-1234 City Belgium Bosnia and Herzegovina In Bosnia and Herzegovina, mail is addressed to , which is written after the full name of the recipient as shown in their ID document or passport. Name of recipient POSTE RESTANTE 12345 City Cambodia In Cambodia, mail is addressed to , which is written after the full name of the recipient. Parcels and letters should contain a phone number, but recipients are more likely to get notified by phone for a parcel. Name of recipient POSTE RESTANTE City Postcode ##### Province Name Cambodia Tel +855 ### ### ### Canada Canada Post uses the term general delivery in English, and in French. Travellers may use the service for up to four months, and it is also used in circumstances where other methods of delivery (such as post office boxes) are not available. 
Canada Post guidelines for addressing of letter mail stipulate that general delivery mail should be addressed in the following fashion: MR JOHN JONES GD STN A FREDERICTON NB  E3B 1A1 where GD is the abbreviation for general delivery (alternatively "PR" for in French), and STN A is the station or post office (in this case, Station A). When the mail is to be delivered to a retail postal outlet, then the abbreviation RPO is used, and in French the station is indicated by SUCC for , or retail postal outlet by "CSP" for . In the above example, the French version of the address would use "PR SUCC A" for the second line. Croatia In Croatia, mail is addressed to , which is written after the full name of the recipient (as appears on the identification to be presented ID document or a passport, if abroad). Name of recipient POSTE RESTANTE 12345 City In Croatia letters received to address are kept in post office for 30 calendar days. Czech Republic In the Czech Republic, mail can be addressed to the post office if and the full name of the recipient is included. To retrieve the mail, the recipient has to have a valid ID, passport, or a driving licence. The mail will be retained for only 15 days. Name of the recipient Details (such as – summer camp etc.), or leave blank POSTE RESTANTE 12345 City Denmark Mail is held for 14 days. Finland is offered widely throughout Finland in 425 post office addresses In Finland, letters received to address are kept in post office for 14 days after arrival. Before 1 April 2013, it was possible to use a nickname instead of real name and without providing an ID. In 2019, picking up mail from was to become paid service, but the planned fee was cancelled. France In France, offers the service for letters at an additional cost to be paid by the receiver Letters are kept for 15 days. The service is offered at any post office. The only requirement is to add after the recipient's name and before the full address of the post office. For the post office of Paris Louvre the address would be : Recipient's Name Poste Restante 50 rue du Louvre 75001 PARIS Germany offer the service for all mail items under the (General Delivery) service. Domestic items may make use of a passphrase instead of the recipients name if preferred. As a general rule, the item should be addressed to the (main post office) of the location. Address Format: Recipient Name Kennwort: [passphrase] (optional) Postlagernd Post Office address Post Office postcode/town The phrase (Password) in front of the passphrase requires the consignee to specify the passphrase at time of collection. Items may be collected from the designated post office upon production of valid identification (e.g. Passport) and the passphrase. It is important to use the term in preference to, or in conjunction with, to prevent returned mail. Gibraltar Royal Gibraltar Post Office allows letters and parcels to be received. ID is required for collection. The address format is: Name of recipient Poste Restante General Post Office Gibraltar Greece In Greece you can send small parcels under 2 kg to the Greece post office. Address Format: POSTE RESTANTE Recipient's Name Post office address Post office zip code Town Greece To collect your mail if it is addressed and stamped to you by bring your passport and ID with you. Hong Kong In Hong Kong, Hongkong Post is responsible for handling the mail and mail can be sent to General Post Office only. At the address side, mark the mail with or in bold. 
The address format is like this: Name of recipient Name of post office (optional) Hong Kong The name of recipient must be real and in full. No initial or figure is allowed. Hungary Hungarian post requires () delivery to be addressed to specific post offices. Additionally, the postal service provides a "pick-up" service for web-based businesses where a business can ship to a pick-up point. India India Post offers a Poste Restante service of up to two months at various post offices throughout the country. Mail is addressed thus: Mr./Mrs. SURNAME First name Poste Restante C/O POSTMASTER GPO [for General Post Office] City name State name India Carry a passport or some other form of national identification to pick it up. In India, letters are said to be held for around a month, up to two in larger GPOs, before being returned; so senders should add their return address, or use the recipient's home address as a return destination. Ireland In Ireland, 's "Address Point" service is ″a fixed address for those without a fixed home.″ also offer , as a free service which allows visitors to a town to have their mail addressed to the local Post Office, where it will be held for collection. Italy In Italy, writing together with the full name of the addressee and postal code will activate general delivery at the relevant post office. Letter post is retained for 30 days; parcel post is normally retained for 15 days, with an exceptional period of 1 month. Japan In Japan, the term indicates a mail. The Japan Post Service guidelines for addressing stipulate that general delivery domestic mail should be addressed in the following fashion: 163-8799 (TEL 090-xxxx-xxxx) For international mail, Mr. Taro Yamada Shinjuku-Yuubinkyoku-dome Nishi-Shinjuku, Shinjuku-ku, Tokyo-to 163–8799 JAPAN (TEL 090-xxxx-xxxx) where "163-8799" is the postcode of the post office, "Nishi-Shinjuku, Shinjuku-ku, Tokyo-to" is the address of the post office, "Shinjuku-Yuubinkyoku" is the name of the post office, and "-dome" means "keep it there". A mail will be kept in the post office for 10 days, identification is required for pickup. Lithuania In Lithuania, the words or (used locally in Lithuania) should be made bold. For local mail: Name of recipient (in dative case) Name of post office 12345 City Mob. tel.: 86xxxxxxx and/or (optional mobile number for notifying with SMS message about received registered mail) email@address.com (optional e-mail address for notifying about received registered mail) Luxembourg The Luxembourg Post has discontinued the service and is not possible anymore. Macau CTT (Macau) offers Poste Restante service (Posta Restante) to Macau's visitors or those who do not have a fixed residence in Macau. Upon collection, the addressee needs to pay MOP 6.50 as the administration fee (additional charges may apply). The recipient's address must include the words (in Chinese) or (in Portuguese). The addressee's name must be real name, in conformity to his/her name shown on an identity card or passport. The address format is as follows: Name of addressee POSTA RESTANTE Name of post office or branch office Locality (e.g. Macau, Taipa, etc.) Macau Special Administrative Region Malta MaltaPost offers service from a post office in the capital Valletta for up to three months, after which mail is returned to sender. Prior to using the service, the person must inform MaltaPost via email, and a means of identification needs to be provided when picking up the mail. 
Mail should be addressed as follows: Name of recipient Poste Restante Valletta Boxes 75, Old Bakery Str IL-BELT VALLETTA VLT 1000 MALTA Netherlands PostNL offers service for up to one month, after which the mail will be returned to the original sender (if provided). This service is provided by all Dutch post offices. Mail should be addressed thus: Last name of recipient First name of recipient Poste Restante Name and address of post office The Netherlands New Zealand New Zealand Post offers a service of up to two months at various post offices throughout the country. Mail is addressed thus: Name of Recipient Poste Restante Name of post office Post office's address Town and postcode Norway In Norway, Norway Post provides the service in all post offices. Ordinary mail is returned to sender three weeks after arrival. Parcels and registered letters are returned to sender after two weeks. Valid foreign identification for obtaining mail is limited to passports and European Identity cards. Mail is addressed according to the following format: Name of recipient POSTE RESTANTE Name of post office Post office's address Post office's postcode and place Poland Mail should be addressed according to the following format: Name of recipient Poste Restante 12-345 City 6 where 12-345 represents the postal code of the post office and 6 represents post office number within given city. (In Poland every post office is uniquely identified by city and number, e.g. "Warszawa 1" or "Kraków 35". These numbers are used only when the post office itself is the point of delivery, e.g. mailboxes or ). There is no address of a recipient on the envelope, and if no sender address is provided, mail is considered undeliverable after 14 days. Every office is valid target for delivery and the service is provided with no additional cost. All types of mail, e.g. letters, parcels and money orders, can be sent to . Portugal In Portugal, CTT provides the service () in all post offices. The mail will be kept in the post office until the last day of the following month after it was received. Mail is addressed according to the following format: Name of recipient Posta Restante Loja XXXXXXX (name of the post office) Address of Post Office Postcode and place The recipient has to pay from €0.98 up to €3.01 tax in the act of receiving the mail, depending on the weight. Russia The most widely available Russian mail delivery (state-run) company, Russian Post, provides services. In order to collect a parcel, the individual is required to provide an ID document (usually a passport) that matches the indicated receiver's information. Alternatively, an electronic signature (SMS code) may be used if the package's tracking number is known. The parcel is stored for up to 30 days, depending on the type of mail. As of 2019, the packages delivered by courier may be stored up to 15 days and letters – up to 30. The information on parcels should be as following (as per Russian Post) in order for the parcel to be stored: Name of recipient Address of destination (optional, may be omitted): city, street, building, entry (if any), apartment number Postal code DO VOSTREBOVANIYA / Poste restante The Russian phrase may be used instead of . The phrase is a phonetic transliteration and indicates that the parcel has to be stored in a postal office () until the receiver collects it. The storage time is calculated from the moment when the parcel was received by the office. Parcels with such marks are not delivered to the mail boxes. 
Russian Post advises outgoing international packages be marked with in the same way. For incoming parcels, a certain fee must be paid if the package's cost or weight exceeds the duty-free limits. Until early 2022, the limits were €200 and 31 kg (approx. 68 lbs) per package (regardless of the number of packages received over a time period). After Russian imports had been sanctioned following the invasion of Ukraine, the cost limit was temporarily increased to €1,000. Serbia In Serbia, mail is addressed to or , which is written after the full name of the recipient (as appears on the identification to be presented ID document or a passport, if abroad). The example of address is as follows : John Richmond POST-RESTANT or POSTE RESTANTE Sremčica 11253 Srbija Upon arrival in the post-residual parcel section of the destination post office, the parcel is stored for 30 days, from the day of arrival. Singapore This service is only for the convenience of travelers to Singapore. The words or To Be Called For must appear next to the recipient's full name on a postal article. Slovakia In Slovakia, the post office will hold mail sent to an address containing words , even when the full address is specified. The post office will not notify the recipient about the delivery, and will hold the mail for 18 days, unless told otherwise by the sender. After that, the mail is returned to the sender. To pick up the mail, the recipient needs to show their ID card or passport. They also may have to provide a tracking number. Full legal name of the recipient Poste restante PSČ City Slovenia In Slovenia mail is addressed to , which is written after the full name of the recipient (as appears on the identification to be presented ID document or a passport, if abroad). Name of recipient POŠTNO LEŽEČE 1234 City In Slovenia letters received for a address are kept in post office for 15 calendar days. Spain Mail can also be received at Spanish post offices. There is normally no charge for the service. letters should be addressed , followed by the address of the post office (including the postcode, town and province). Put the recipient's surname in capitals. An example address is: John SMITH Lista de Correos Pl. Rosa dels Vents 9 46730 Gandia Valencia Spain To find the address of the nearest to you, check at www.correos.es. Click on the link for the . When collecting mail from the post office recipients will need to show some form of identification such as a passport or photo driving license. They will need to specify that they are collecting a item as these are stored separately from the other stored mail. The mail is filed alphabetically and it may be worth asking them to check if an expected package was filed under your first name if they cannot find it under your surname. Post offices vary in how long they will hold your mail. When addressing mail for Spain, always ensure that there is a return address on the parcel or packet. Sweden Not all post offices offer . The address is entered using the following format: Name of recipient Poste Restante 123 45 City Here, 123 45 represents the postal number of the post office. Post offices providing services can be located by searching for as "street address" in a given city (giving a postal number) and then searching for post offices with that postal number. Switzerland In Switzerland, all post offices offer for free for up to one month. 
Addressing takes the following format: Name of recipient Poste restante [street/n° of the preferred post office] 1234 City Here, 1234 represents the postal number of the post office. The correct street address can be found online. Thailand In Thailand, only a destination post office with a storage facility for will offer this service. Senders should write Poste restante at __ Post Office ( or ) or Poste restante at a destination Post Office (the second instance will place the mail at the distributional post office nearest to the recipient's address) after the name of the recipient. For example. Mr. Meedee Khonthai (Name of recipient must match Official ID or Passport) Poste restante at Phasi Charoen Post Office, 10160 +66xx-xxx-xxxx or Mr. Meedee Khonthai (Name of recipient must match Official ID or Passport) Poste restante at a destination Post Office, Bang Khae Nuea, Bang Khae Bangkok, Thailand 10160 +66xx-xxx-xxxx or in the Thai language: (Name of recipient must match Official ID or Passport) +66xx-xxx-xxxx There is a small amount of storage fee (1–2 Thai baht) when picking up the mail or parcel. The recipient must track the delivery status themselves via the tracking number provided. Accordingly, it is recommended to deliver using registered mail or EMS as the regular mail delivery service does not provide a tracking number. Turkey For Istanbul, the main parcel and office is in Sirkeci, a few minutes walk from the Sirkeci tram stop. Name of recipient as appears on Passport or Official ID Post Restante (in large and clear letters) PTT Baş Müdürlüğü, Hobyar Mahallesi, Yeni Postane Caddesi No.25, Sirkeci, Istanbul United Kingdom In the United Kingdom, mail is addressed to (or To be called for), which is written after the full name of the recipient (as appears on the identification to be presented e.g. a passport, if abroad), then the name and full address of the destination post office, thus: Mr. John Smith POSTE RESTANTE Islington Post Office 160-161 Upper Street London N1 1US If only addressed to a town name, for example "POSTE RESTANTE, LONDON", mail will go to the closest main post office branch (there being more than a hundred "crown offices" – directly managed post office branches – in London). Some small branches do not offer the poste restante service. The sender should also include their return address. In the United Kingdom, the Royal Mail will hold mail posted from within the UK for two weeks, whereas mail posted from abroad is normally held for one month. If the recipient is at sea, however, it will be held for two months. Where mail is not collected within that time, it will be returned to the sender, or if there is no sender indicated, will be treated as undeliverable. If the sender would like uncollected mail returned sooner, they can indicate this on the envelope. Timescales vary from country to country according to local practice. United States In the United States, the U.S. Postal Service uses the term General Delivery. Mail for general delivery to recipients in the United States is addressed in the following fashion: JOHN DOE GENERAL DELIVERY WASHINGTON DC 20090-9999 General delivery is normally available at only one facility under the administration of a post office with multiple facilities. Post offices that hold mail for general delivery may hold such mail for up to 30 days. When sending to general delivery, the optional ZIP+4 add-on code is -9999. See also Post office box Will call References Postal systems Philatelic terminology
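The country entries above are, in effect, small formatting rules, so they are straightforward to encode. The sketch below is a hypothetical illustration rather than an official postal API: it assembles a Canadian general-delivery address block in the abbreviated style described in the Canada section, reusing the example name and postal code given there; the function and field names are the author's own.

```python
# Hypothetical formatter for a Canadian general-delivery address block,
# following the abbreviated style described above (GD = general delivery,
# STN = station; PR SUCC is the French equivalent per the article's example).
def general_delivery_address(name, station, city, province, postal_code,
                             french=False):
    service = "PR SUCC" if french else "GD STN"
    # Canada Post style uses upper-case lines and two spaces before the postal code.
    return "\n".join([
        name.upper(),
        f"{service} {station.upper()}",
        f"{city.upper()} {province.upper()}  {postal_code.upper()}",
    ])

print(general_delivery_address("Mr John Jones", "A", "Fredericton", "NB", "E3B 1A1"))
# MR JOHN JONES
# GD STN A
# FREDERICTON NB  E3B 1A1
```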
Poste restante
[ "Technology" ]
4,525
[ "Transport systems", "Postal systems" ]
629,486
https://en.wikipedia.org/wiki/Cued%20speech
Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues (representing consonants), in different locations near the mouth (representing vowels) to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips. This allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development. History Cued speech was invented in 1966 by R. Orin Cornett at Gallaudet College, Washington, D.C. After discovering that children with prelingual and profound hearing impairments typically have poor reading comprehension, he developed the system with the aim of improving the reading abilities of such children through better comprehension of the phonemes of English. At the time, some were arguing that deaf children were earning these lower marks because they had to learn two different systems: American Sign Language (ASL) for person-to-person communication and English for reading and writing. As many sounds look identical on the lips (such as and ), the hand signals introduce a visual contrast in place of the formerly acoustic contrast. Cued Speech may also help people hearing incomplete or distorted sound—according to the National Cued Speech Association at cuedspeech.org, "cochlear implants and Cued Speech are perfect partners". Since cued speech is based on making sounds visible to the hearing impaired, it is not limited to use in English-speaking nations. Because of the demand for use in other languages/countries, by 1994 Cornett had adapted cueing to 25 other languages and dialects. Originally designed to represent American English, the system was adapted to French in 1977. , Cued speech has been adapted to approximately 60 languages and dialects, including six dialects of English. For tonal languages such as Thai, the tone is indicated by inclination and movement of the hand. For English, cued speech uses eight different hand shapes and four different positions around the mouth. Nature and use Though to a hearing person, cued speech may look similar to signing, it is not a sign language; nor is it a manually coded sign system for a spoken language. Rather, it is a manual modality of communication for representing any language at the phonological level (phonetics). A manual cue in cued speech consists of two components: hand shape and hand position relative to the face. Hand shapes distinguish consonants and hand positions distinguish vowel. A hand shape and a hand position (a "cue") together with the accompanying mouth shape, makes up a CV unit - a basic syllable. Cuedspeech.org lists 64 different dialects to which CS has been adapted. 
Each language takes on CS by looking through the catalog of the language's phonemes and distinguishing which phonemes appear similar when pronounced and thus need a hand sign to differentiate them. Literacy Cued speech is based on the hypothesis that if all the sounds in the spoken language looked clearly different from each other on the lips of the speaker, people with a hearing loss would learn a language in much the same way as a hearing person, but through vision rather than audition. Literacy is the ability to read and write proficiently, which allows one to understand and communicate ideas so as to participate in a literate society. Cued speech was designed to help eliminate the difficulties of English language acquisition and literacy development in children who are deaf or hard-of-hearing. Results of research show that accurate and consistent cueing with a child can help in the development of language, communication and literacy but its importance and use is debated. Studies address the issues behind literacy development, traditional deaf education, and how using cued speech affects the lives of deaf and HOH children. Cued speech does indeed achieve its goal of distinguishing phonemes received by the learner, but there is some question of whether it is as helpful to expression as it is to reception. An article by Jacqueline Leybaert and Jesús Alegría discusses how children who are introduced to cued speech before the age of one are up-to-speed with their hearing peers on receptive vocabulary, though expressive vocabulary lags behind. The writers suggest additional and separate training to teach oral expression if such is desired, but more importantly this reflects the nature of cued speech; to adapt children who are deaf and hard-of-hearing to a hearing world, as such discontinuities of expression and reception are not as commonly found for children with a hearing loss who are learning sign language. In her paper "The Relationship Between Phonological Coding And Reading Achievement In Deaf Children: Is Cued Speech A Special Case?" (1998), Ostrander notes, "Research has consistently shown a link between lack of phonological awareness and reading disorders (Jenkins & Bowen, 1994)" and discusses the research basis for teaching cued speech as an aid to phonological awareness and literacy. Ostrander concludes that further research into these areas is needed and well justified. The editor of the Cued Speech Journal (currently sought but not discovered) reports that "Research indicating that Cued Speech does greatly improve the reception of spoken language by profoundly deaf children was reported in 1979 by Gaye Nicholls, and in 1982 by Nicholls and Ling." In the book Choices in Deafness: A Parents' Guide to Communication Options, Sue Schwartz writes on how cued speech helps a deaf child recognize pronunciation. The child can learn how to pronounce words such as "hors d'oeuvre" or "tamale" or "Hermione" that have pronunciations different from how they are spelled. A child can learn about accents and dialects. In New York, coffee may be pronounced "caw fee"; in the South, the word friend ("fray-end") can be a two-syllable word. Debate over cued speech vs. sign language The topic of deaf education has long been filled with controversy. Two strategies for teaching the deaf exist: an aural/oral approach and a manual approach. Those who use aural-oralism believe that children who are deaf or hard of hearing should be taught through the use of residual hearing, speech and speechreading. 
Those promoting a manual approach believe the deaf should be taught through the use of signed languages, such as American Sign Language (ASL). Within the United States, proponents of cued speech often discuss the system as an alternative to ASL and similar sign languages, although others note that it can be learned in addition to such languages. For the ASL-using community, cued speech is a unique potential component for learning English as a second language. Within bilingual-bicultural models, cued speech does not borrow or invent signs from ASL, nor does CS attempt to change ASL syntax or grammar. Rather, CS provides an unambiguous model for language learning that leaves ASL intact. Languages Cued speech has been adapted to more than 50 languages and dialects. However, it is not clear in how many of them it is actually in use. Afrikaans Alu Arabic Belarusian Bengali Cantonese Catalan Czech Danish Dutch English (different conventions in different countries) American English Australian English British English (Standard Southern) Scottish English South African English Trinidadian English Finnish and Finnish Swedish (same conventions) French Canadian French French (France) () German German (Germany) (, "Phonemic Manual System") Swiss German Greek Gujarati Hausa Hawaiian Hebrew Hindustani Hindi Urdu Hungarian Idoma Igbo Italian Japanese Kiluba and Kituba (same conventions) Korean Lingala Malagasy Malay Indonesian Malaysian Malayalam Maltese Mandarin Marathi Navajo Odia Polish Portuguese Brazilian Portuguese European Portuguese () Punjabi (Lahore dialect) Russian Serbo-Croatian Setswana Shona Somali Spanish () Swahili Swedish (Sweden) Tagalog/Filipino Telugu Thai Tshiluba Turkish Ukrainian Yoruba Similar systems have been used for other languages, such as the Assisted Kinemes Alphabet in Belgium and the Baghcheban phonetic hand alphabet for Persian. See also Lip reading Oralism References External links Organizations National Cued Speech Association of the USA (NCSA) Language Matters (LMI) Cued Speech UK Testing, Evaluation, and Certification Unit (TECUnit) The cued language national testing body of the United States Cue Everything See The Latest Cued Speech Videos And Why They Rock On Cue Newsletter from the NCSA DailyCues.com Skills Development and Training Follow Up Resource Featuring The Cuer Connector and Word Generators SPK KIU SPK Pertuturan KIU Pemulihan Dalam Komuniti (PDK) KIU - Community Based Rehabilitation (CBR) Tutorials and general information Cued Speech Discovery/CuedSpeech.com - The NCSA bookstore. Some overview information and how to incorporate Cued Speech effectively. DailyCues.com - News, Cued English Dictionary, Cuer Database, and learning resources – some free online. learntocue.co.uk - Multimedia resources for learning Cued Speech (British English) NCSA Mini-Documentary - A 10-minute video explaining Cued Speech. Audio, ASL, sub-titles and some cueing. The Art of Cueing - extensive free course for cueing American dialects of English, containing QuickTime video samples. Cued languages other than English Cued Portuguese: Português Falado Complementado Cued Spanish: Método Oral Complementado (MOC) Cued French: Langage Parlé Complété (LPC) Cued Polish: Fonogesty Cued Persian Cued Finnish Cued Malay Deafness Human communication Phonology Education for the deaf 1966 introductions
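To make the hand-shape and hand-position mechanics described in the "Nature and use" section concrete, here is a minimal sketch of how a cueing sequence can be modelled in code: each cue pairs one of eight handshapes (a consonant group) with one of four positions near the mouth (a vowel group), giving a CV unit. The phoneme-to-handshape and vowel-to-position assignments below are invented placeholders, not the official American English chart, so treat the mapping purely as an illustration of the CV-pair idea.

```python
# Illustrative model of cued speech CV units. The mappings are INVENTED
# placeholders standing in for the real handshape/position chart.
HANDSHAPE = {"p": 1, "t": 5, "k": 2, "m": 5, "b": 4, "d": 1}        # consonant -> handshape 1..8
POSITION = {"a": "side", "i": "throat", "u": "chin", "e": "mouth"}  # vowel -> mouth position

def cue_sequence(syllables):
    """Turn a list of (consonant, vowel) syllables into (handshape, position) cues."""
    return [(HANDSHAPE[c], POSITION[v]) for c, v in syllables]

# A two-syllable utterance "pa-ti" becomes two CV cues:
print(cue_sequence([("p", "a"), ("t", "i")]))   # [(1, 'side'), (5, 'throat')]
```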
Cued speech
[ "Biology" ]
2,110
[ "Human communication", "Behavior", "Human behavior" ]
629,554
https://en.wikipedia.org/wiki/Hadwiger%27s%20theorem
In integral geometry (otherwise called geometric probability theory), Hadwiger's theorem characterises the valuations on convex bodies in $\mathbb{R}^n$. It was proved by Hugo Hadwiger. Introduction Valuations Let $\mathbb{K}^n$ be the collection of all compact convex sets in $\mathbb{R}^n$. A valuation is a function $v : \mathbb{K}^n \to \mathbb{R}$ such that $v(\varnothing) = 0$ and $v(S \cup T) + v(S \cap T) = v(S) + v(T)$ for every $S, T \in \mathbb{K}^n$ that satisfy $S \cup T \in \mathbb{K}^n$. A valuation is called continuous if it is continuous with respect to the Hausdorff metric. A valuation is called invariant under rigid motions if $v(\varphi(S)) = v(S)$ whenever $S \in \mathbb{K}^n$ and $\varphi$ is either a translation or a rotation of $\mathbb{R}^n$. Quermassintegrals The quermassintegrals $W_j : \mathbb{K}^n \to \mathbb{R}$ are defined via Steiner's formula $\mathrm{Vol}_n(K + tB) = \sum_{j=0}^{n} \binom{n}{j} W_j(K)\, t^j,$ where $B$ is the Euclidean ball. For example, $W_0$ is the volume, $W_1$ is proportional to the surface measure, $W_{n-1}$ is proportional to the mean width, and $W_n$ is the constant $\mathrm{Vol}_n(B)$. $W_j$ is a valuation which is homogeneous of degree $n - j$, that is, $W_j(tK) = t^{\,n-j}\,W_j(K)$ for $t \geq 0$. Statement Any continuous valuation $v$ on $\mathbb{K}^n$ that is invariant under rigid motions can be represented as $v(S) = \sum_{j=0}^{n} c_j W_j(S).$ Corollary Any continuous valuation $v$ on $\mathbb{K}^n$ that is invariant under rigid motions and homogeneous of degree $j$ is a multiple of $W_{n-j}$. See also References An account and a proof of Hadwiger's theorem may be found in An elementary and self-contained proof was given by Beifang Chen in Integral geometry Theorems in convex geometry Probability theorems
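As a concrete illustration of the statement, the planar case is worth writing out. The following compilable LaTeX snippet states the standard specialisation to n = 2, where the quermassintegrals reduce (up to normalising constants) to area, perimeter and the Euler characteristic; the constants depend on the particular valuation.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Hadwiger's theorem specialised to the plane (n = 2): every continuous,
% rigid-motion-invariant valuation v on compact convex sets in R^2 is a
% linear combination of area A, perimeter P and the Euler characteristic chi.
\[
  v(K) \;=\; c_2\,A(K) \;+\; c_1\,P(K) \;+\; c_0\,\chi(K),
  \qquad K \in \mathbb{K}^2,\ K \neq \varnothing,
\]
% with A homogeneous of degree 2, P of degree 1, and \chi(K) = 1 for every
% nonempty compact convex K (degree 0); the constants c_0, c_1, c_2 depend on v.
\end{document}
```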
Hadwiger's theorem
[ "Mathematics" ]
241
[ "Theorems in convex geometry", "Theorems in probability theory", "Theorems in geometry", "Mathematical problems", "Mathematical theorems" ]
629,599
https://en.wikipedia.org/wiki/Kismet%20%28robot%29
Kismet is a robot head which was made in the 1990s at Massachusetts Institute of Technology (MIT) by Dr. Cynthia Breazeal as an experiment in affective computing; a machine that can recognize and simulate emotions. The name Kismet comes from a Turkish word meaning "fate" or sometimes "luck". Hardware design and construction In order for Kismet to properly interact with human beings, it contains input devices that give it auditory, visual, and proprioception abilities. Kismet simulates emotion through various facial expressions, vocalizations, and movement. Facial expressions are created through movements of the ears, eyebrows, eyelids, lips, jaw, and head. The cost of physical materials was an estimated US$25,000. In addition to the equipment mentioned above, there are four Motorola 68332s, nine 400 MHz PCs, and another 500 MHz PC. Software system Kismet's social intelligence software system, or synthetic nervous system (SNS), was designed with human models of intelligent behavior in mind. It contains six subsystems as follows. Low-level feature extraction system This system processes raw visual and auditory information from cameras and microphones. Kismet's vision system can perform eye detection, motion detection and, albeit controversial, skin-color detection. Whenever Kismet moves its head, it momentarily disables its motion detection system to avoid detecting self-motion. It also uses its stereo cameras to estimate the distance of an object in its visual field, for example to detect threats—large, close objects with a lot of movement. Kismet's audio system is mainly tuned towards identifying affect in infant-directed speech. In particular, it can detect five different types of affective speech: approval, prohibition, attention, comfort, and neutral. The affective intent classifier was created as follows. Low-level features such as pitch mean and energy (volume) variance were extracted from samples of recorded speech. The classes of affective intent were then modeled as a gaussian mixture model and trained with these samples using the expectation-maximization algorithm. Classification is done with multiple stages, first classifying an utterance into one of two general groups (e.g. soothing/neutral vs. prohibition/attention/approval) and then doing more detailed classification. This architecture significantly improved performance for hard-to-distinguish classes, like approval ("You're a clever robot") versus attention ("Hey Kismet, over here"). Motivation system Dr. Breazeal figures her relations with the robot as 'something like an infant-caretaker interaction, where I'm the caretaker essentially, and the robot is like an infant'. The overview sets the human-robot relation within a frame of learning, with Dr. Breazeal providing the scaffolding for Kismet's development. It offers a demonstration of Kismet's capabilities, narrated as emotive facial expressions that communicate the robot's 'motivational state', Dr. Breazeal: "This one is anger (laugh) extreme anger, disgust, excitement, fear, this is happiness, this one is interest, this one is sadness, surprise, this one is tired, and this one is sleep." At any given moment, Kismet can only be in one emotional state at a time. However, Breazeal states that Kismet is not conscious, so it does not have feelings. Motor system Kismet speaks a proto-language with a variety of phonemes, similar to a baby's babbling. It uses the DECtalk voice synthesizer, and changes pitch, timing, articulation, etc. to express various emotions. 
Intonation is used to vary between question and statement-like utterances. Lip synchronization was important for realism, and the developers used a strategy from animation: "simplicity is the secret to successful lip animation." Thus, they did not try to imitate lip motions perfectly, but instead "create a visual shorthand that passes unchallenged by the viewer." See also Affective computing Artificial intelligence References External links Description de Kismet Humanoid robots Prototype robots Social robots Robots of the United States Massachusetts Institute of Technology 1990s robots Robot heads
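The staged affect classifier described in the low-level feature extraction section lends itself to a compact illustration. The sketch below is not Kismet's actual code: it assumes pre-extracted prosodic features (for example pitch mean and energy variance) held in NumPy arrays, and it uses scikit-learn's GaussianMixture, which is fitted with expectation-maximization, to build one generative model per class, first for a coarse grouping and then for the finer affect labels. The class groupings and data layout are assumptions for the example.

```python
# Hedged sketch of a two-stage GMM affect classifier in the spirit of the
# approach described above; feature extraction and training data are assumed.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_per_class(features_by_class, n_components=2):
    """Fit one Gaussian mixture (trained via EM) per affect class."""
    return {label: GaussianMixture(n_components=n_components).fit(X)
            for label, X in features_by_class.items()}

def classify(models, x):
    """Pick the class whose mixture assigns the utterance the highest log-likelihood."""
    x = np.atleast_2d(x)
    return max(models, key=lambda label: models[label].score(x))

def two_stage_classify(train_coarse, train_fine, x):
    """Stage 1: coarse split (e.g. soothing/neutral vs. the rest);
    Stage 2: finer labels within the winning coarse group."""
    coarse = classify(fit_gmm_per_class(train_coarse), x)
    return classify(fit_gmm_per_class(train_fine[coarse]), x)
```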
Kismet (robot)
[ "Technology" ]
872
[ "Social robots", "Computing and society" ]
630,017
https://en.wikipedia.org/wiki/Feynman%E2%80%93Kac%20formula
The Feynman–Kac formula, named after Richard Feynman and Mark Kac, establishes a link between parabolic partial differential equations and stochastic processes. In 1947, when Kac and Feynman were both faculty members at Cornell University, Kac attended a presentation of Feynman's and remarked that the two of them were working on the same thing from different directions. The Feynman–Kac formula resulted, which proves rigorously the real-valued case of Feynman's path integrals. The complex case, which occurs when a particle's spin is included, is still an open question. It offers a method of solving certain partial differential equations by simulating random paths of a stochastic process. Conversely, an important class of expectations of random processes can be computed by deterministic methods. Theorem Consider the partial differential equation $\frac{\partial u}{\partial t}(x,t) + \mu(x,t)\frac{\partial u}{\partial x}(x,t) + \tfrac{1}{2}\sigma^2(x,t)\frac{\partial^2 u}{\partial x^2}(x,t) - V(x,t)\,u(x,t) + f(x,t) = 0,$ defined for all $x \in \mathbb{R}$ and $t \in [0,T]$, subject to the terminal condition $u(x,T) = \psi(x),$ where $\mu, \sigma, \psi, V, f$ are known functions, $T$ is a parameter, and $u : \mathbb{R} \times [0,T] \to \mathbb{R}$ is the unknown. Then the Feynman–Kac formula expresses $u(x,t)$ as a conditional expectation under the probability measure $Q$, $u(x,t) = E^{Q}\!\left[\int_t^T e^{-\int_t^s V(X_\tau,\tau)\,d\tau} f(X_s,s)\,ds + e^{-\int_t^T V(X_\tau,\tau)\,d\tau}\,\psi(X_T) \,\middle|\, X_t = x\right],$ where $X$ is an Itô process satisfying $dX_s = \mu(X_s,s)\,ds + \sigma(X_s,s)\,dW_s^{Q}$ and $W^{Q}$ a Wiener process (also called Brownian motion) under $Q$. Intuitive interpretation Suppose that the position $X_s$ of a particle evolves according to the diffusion process $dX_s = \mu(X_s,s)\,ds + \sigma(X_s,s)\,dW_s$. Let the particle incur "cost" at a rate of $f(X_s,s)$ at location $X_s$ at time $s$. Let it incur a final cost of $\psi(X_T)$ at time $T$. Also, allow the particle to decay. If the particle is at location $X_s$ at time $s$, then it decays with rate $V(X_s,s)$. After the particle has decayed, all future cost is zero. Then $u(x,t)$ is the expected cost-to-go, if the particle starts at $(t, X_t = x)$. Partial proof A proof that the above formula is a solution of the differential equation is long, difficult and not presented here. It is however reasonably straightforward to show that, if a solution exists, it must have the above form. The proof of that lesser result is as follows: Let $u(x,t)$ be the solution to the above partial differential equation. Applying the product rule for Itô processes to the process $Y_s = e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\,u(X_s,s) + \int_t^s e^{-\int_t^r V(X_\tau,\tau)\,d\tau} f(X_r,r)\,dr$ one gets: $dY_s = d\!\left(e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\right) u(X_s,s) + e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\,du(X_s,s) + d\!\left(e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\right) du(X_s,s) + d\!\left(\int_t^s e^{-\int_t^r V(X_\tau,\tau)\,d\tau} f(X_r,r)\,dr\right).$ Since $d\!\left(e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\right) = -V(X_s,s)\,e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\,ds,$ the third term is $O(dt\,du)$ and can be dropped. We also have that $d\!\left(\int_t^s e^{-\int_t^r V(X_\tau,\tau)\,d\tau} f(X_r,r)\,dr\right) = e^{-\int_t^s V(X_\tau,\tau)\,d\tau} f(X_s,s)\,ds.$ Applying Itô's lemma to $du(X_s,s)$, it follows that $dY_s = e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\left(-V(X_s,s)\,u(X_s,s) + f(X_s,s) + \frac{\partial u}{\partial s} + \mu(X_s,s)\frac{\partial u}{\partial x} + \tfrac{1}{2}\sigma^2(X_s,s)\frac{\partial^2 u}{\partial x^2}\right) ds + e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\,\sigma(X_s,s)\frac{\partial u}{\partial x}\,dW_s.$ The first term contains, in parentheses, the above partial differential equation and is therefore zero. What remains is: $dY_s = e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\,\sigma(X_s,s)\frac{\partial u}{\partial x}\,dW_s.$ Integrating this equation from $t$ to $T$, one concludes that: $Y_T - Y_t = \int_t^T e^{-\int_t^s V(X_\tau,\tau)\,d\tau}\,\sigma(X_s,s)\frac{\partial u}{\partial x}\,dW_s.$ Upon taking expectations, conditioned on $X_t = x$, and observing that the right side is an Itô integral, which has expectation zero, it follows that: $E[Y_T \mid X_t = x] = E[Y_t \mid X_t = x] = u(x,t).$ The desired result is obtained by observing that: $E[Y_T \mid X_t = x] = E\!\left[e^{-\int_t^T V(X_\tau,\tau)\,d\tau}\,u(X_T,T) + \int_t^T e^{-\int_t^r V(X_\tau,\tau)\,d\tau} f(X_r,r)\,dr \,\middle|\, X_t = x\right]$ and finally $u(x,t) = E\!\left[e^{-\int_t^T V(X_\tau,\tau)\,d\tau}\,\psi(X_T) + \int_t^T e^{-\int_t^s V(X_\tau,\tau)\,d\tau} f(X_s,s)\,ds \,\middle|\, X_t = x\right].$ Remarks The proof above that a solution must have the given form is essentially that of the reference cited below, with modifications to account for $f(x,t)$. The expectation formula above is also valid for N-dimensional Itô diffusions. The corresponding partial differential equation for $u : \mathbb{R}^N \times [0,T] \to \mathbb{R}$ becomes: $\frac{\partial u}{\partial t} + \sum_{i=1}^{N} \mu_i(x,t)\frac{\partial u}{\partial x_i} + \tfrac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} \gamma_{ij}(x,t)\frac{\partial^2 u}{\partial x_i\,\partial x_j} - V(x,t)\,u + f(x,t) = 0,$ where $\gamma_{ij}(x,t) = \sum_{k=1}^{N} \sigma_{ik}(x,t)\,\sigma_{jk}(x,t),$ i.e. $\gamma = \sigma\sigma^{\mathsf{T}}$, where $\sigma^{\mathsf{T}}$ denotes the transpose of $\sigma$. More succinctly, letting $A$ be the infinitesimal generator of the diffusion process, $\frac{\partial u}{\partial t} + Au - Vu + f = 0.$ This expectation can then be approximated using Monte Carlo or quasi-Monte Carlo methods. When originally published by Kac in 1949, the Feynman–Kac formula was presented as a formula for determining the distribution of certain Wiener functionals. Suppose we wish to find the expected value of the function $e^{-u\int_0^t V(x(\tau))\,d\tau}$ in the case where x(τ) is some realization of a diffusion process starting at $x(0) = 0$. The Feynman–Kac formula says that this expectation is equivalent to the integral of a solution to a diffusion equation.
Specifically, under the conditions that $u\,V(x) \geq 0$, $E\!\left[e^{-u\int_0^t V(x(\tau))\,d\tau}\right] = \int_{-\infty}^{\infty} w(x,t)\,dx,$ where $w(x,0) = \delta(x)$ and $\frac{\partial w}{\partial t} = \tfrac{1}{2}\frac{\partial^2 w}{\partial x^2} - u\,V(x)\,w.$ The Feynman–Kac formula can also be interpreted as a method for evaluating functional integrals of a certain form. If $I = \int f(x(0))\,e^{-u\int_0^t V(x(\tau))\,d\tau}\,g(x(t))\,Dx,$ where the integral is taken over all random walks, then $I = \int w(x,t)\,g(x)\,dx,$ where $w(x,t)$ is a solution to the parabolic partial differential equation $\frac{\partial w}{\partial t} = \tfrac{1}{2}\frac{\partial^2 w}{\partial x^2} - u\,V(x)\,w$ with initial condition $w(x,0) = f(x)$. Applications Finance In quantitative finance, the Feynman–Kac formula is used to efficiently calculate solutions to the Black–Scholes equation to price options on stocks and zero-coupon bond prices in affine term structure models. For example, consider a stock price $S_t$ undergoing geometric Brownian motion $dS_t = (r\,dt + \sigma\,dW_t)\,S_t,$ where $r$ is the risk-free interest rate and $\sigma$ is the volatility. Equivalently, by Itô's lemma, $d\ln S_t = \left(r - \tfrac{1}{2}\sigma^2\right)dt + \sigma\,dW_t.$ Now consider a European call option on an underlying $S_t$ expiring at time $T$ with strike $K$. At expiry, it is worth $(S_T - K)^{+}.$ Then, the risk-neutral price of the option, at time $t$ and stock price $x$, is $u(x,t) = E\!\left[e^{-r(T-t)}\,(S_T - K)^{+} \,\middle|\, S_t = x\right].$ Plugging into the Feynman–Kac formula, we obtain the Black–Scholes equation: $\frac{\partial u}{\partial t} + r\,x\,\frac{\partial u}{\partial x} + \tfrac{1}{2}\sigma^2 x^2\,\frac{\partial^2 u}{\partial x^2} - r\,u = 0,$ where $u(x,T) = (x - K)^{+}.$ More generally, consider an option expiring at time $T$ with payoff $g(S_T)$. The same calculation shows that its price $u(x,t)$ satisfies $\frac{\partial u}{\partial t} + r\,x\,\frac{\partial u}{\partial x} + \tfrac{1}{2}\sigma^2 x^2\,\frac{\partial^2 u}{\partial x^2} - r\,u = 0,$ with $u(x,T) = g(x)$. Some other options like the American option do not have a fixed expiry. Some options have value at expiry determined by the past stock prices. For example, an average option has a payoff that is not determined by the underlying price at expiry but by the average underlying price over some predetermined period of time. For these, the Feynman–Kac formula does not directly apply. Quantum mechanics In quantum chemistry, it is used to solve the Schrödinger equation with the pure diffusion Monte Carlo method. See also Itô's lemma Kunita–Watanabe inequality Girsanov theorem Kolmogorov backward equation Kolmogorov forward equation (also known as Fokker–Planck equation) Stochastic mechanics References Further reading Richard Feynman Stochastic processes Parabolic partial differential equations Articles containing proofs Mathematical finance
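The finance application above can be checked numerically: the Feynman–Kac representation says the Black–Scholes price is simply a discounted expectation under the risk-neutral dynamics. The sketch below uses illustrative parameter values of my own choosing, simulates the terminal stock price from its exact log-normal law, and compares the Monte Carlo estimate with the closed-form Black–Scholes price.

```python
# Monte Carlo check of the Feynman-Kac representation for a European call:
# price = E[ exp(-r(T-t)) * max(S_T - K, 0) | S_t = x ] under the risk-neutral
# measure. Parameter values are illustrative.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mc_call(S0, K, r, sigma, T, n_paths=1_000_000, seed=0):
    """Feynman-Kac / Monte Carlo price: sample S_T exactly and discount the payoff."""
    z = np.random.default_rng(seed).standard_normal(n_paths)
    S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

print(bs_call(S0, K, r, sigma, T))   # about 7.13 with these parameters
print(mc_call(S0, K, r, sigma, T))   # agrees with the closed form up to Monte Carlo error
```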
Feynman–Kac formula
[ "Mathematics" ]
1,099
[ "Applied mathematics", "Mathematical finance", "Articles containing proofs" ]
630,020
https://en.wikipedia.org/wiki/Dung%20beetle
Dung beetles are beetles that feed on feces. Some species of dung beetles can bury dung 250 times their own mass in one night.. Many dung beetles, known as rollers, roll dung into round balls, which are used as a food source or breeding chambers. Other dung beetles like Euoniticellus intermedius, known as tunnelers, bury the dung wherever they find it. A third group, the dwellers, neither roll nor burrow: they simply live in dung. They are often attracted by the feces collected by burrowing owls. There are dung beetle species of various colors and sizes, and some functional traits such as body mass (or biomass) and leg length can have high levels of variability. All the species belong to the superfamily Scarabaeoidea, most of them to the subfamilies Scarabaeinae and Aphodiinae of the family Scarabaeidae (scarab beetles). As most species of Scarabaeinae feed exclusively on feces, that subfamily is often dubbed true dung beetles. There are dung-feeding beetles which belong to other families, such as the Geotrupidae (the earth-boring dung beetle). The Scarabaeinae alone comprises more than 5,000 species. The nocturnal African dung beetle Scarabaeus satyrus is one of the few known invertebrate animals that navigate and orient themselves using the Milky Way. The daily dung of one elephant can support 2,000,000 beetles. Taxonomy Dung beetles are not a single taxonomic group; dung feeding is found in a number of families of beetles, so the behaviour cannot be assumed to have evolved only once. Coleoptera (order), beetles Scarabaeoidea (superfamily), scarabs (most families in the group do not use dung) Geotrupidae (family), "earth-boring dung beetles" Scarabaeidae (family), "scarab beetles" (not all species use dung) Scarabaeinae (subfamily), "true dung beetles" Aphodiinae (subfamily), "small dung beetles" (not all species use dung) Ecology and behavior Dung beetles live in many habitats, including desert, grasslands and savannas, farmlands, and native and planted forests. They are highly influenced by the environmental context, and do not prefer extremely cold or dry weather. They are found on all continents except Antarctica. They eat the dung of herbivores and omnivores, and prefer that produced by the latter. Many of them also feed on mushrooms and decaying leaves and fruits. The Neotropical Deltochilum valgum, D. kolbei and D. viridescens are carnivores with a strong preference for preying upon millipedes. Two other species from Brazil, Canthon dives and Canthon virens, prey on queens and other winged forms of leafcutter ants. One species from the Iberian Peninsula, Thorectes lusitanicus, feeds on acorns. Dung beetles do not necessarily have to eat or drink anything else, because the dung provides all the necessary nutrients. Most dung beetles search for dung using their sensitive sense of smell. Some smaller species simply attach themselves to the dung-providers to wait for the dung. After capturing the dung, a dung beetle rolls it, following a straight line despite all obstacles. Sometimes, dung beetles try to steal the dung ball from another beetle, so the dung beetles have to move rapidly away from a dung pile once they have rolled their ball to prevent it from being stolen. Dung beetles can roll up to 10 times their weight. Male Onthophagus taurus beetles can pull 1,141 times their own body weight: the equivalent of an average person pulling six double-decker buses full of people. 
A species of dung beetle (the African Scarabaeus zambesianus) navigates by polarization patterns in moonlight, the first animal known to do so. Dung beetles can also navigate when only the Milky Way or clusters of bright stars are visible, making them the only insects known to orient themselves by the Milky Way. Research using 1 kg bolus of elephant dung found that a larger number exploit it during the night (13,700) than during the day (3,330). The eyes of dung beetles are superposition compound eyes typical of many scarabaeid beetles; The sequence of images shows a sequence of the beetle rolling a dung ball. It does this to navigate. Cambefort and Hanski (1991) classified dung beetles into three functional types based on their feeding and nesting strategies such as – Rollers, Tunnelers and Dwellers. The "rollers" roll and bury a dung ball either for food storage or for making a brooding ball. In the latter case, two beetles, one male and one female, stay around the dung ball during the rolling process. Usually it is the male that rolls the ball, while the female hitch-hikes or simply follows behind. In some cases, the male and the female roll together. When a spot with soft soil is found, they stop and bury the ball, then mate underground. After the mating, one or both of them prepares the brooding ball. When the ball is finished, the female lays eggs inside it, a form of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring. The dung beetle goes through a complete metamorphosis. The larvae live in brood balls made with dung prepared by their parents. During the larval stage, the beetle feeds on the dung surrounding it. The behavior of the beetles was poorly understood until the studies of Jean Henri Fabre in the late 19th century. For example, Fabre corrected the myth that a dung beetle would seek aid from other dung beetles when confronted by obstacles. By observation and experiment, he found the seeming helpers were in fact awaiting an opportunity to steal the roller's food source. They are widely used in ecological research as a good bioindicator group to examine the impacts of climate disturbances, such as extreme droughts and associated fires, and human activities on tropical biodiversity and ecosystem functioning, such as seed dispersal, soil bioturbation and nutrient cycling. Benefits and uses Dung beetles play a role in agriculture and tropical forests. By burying and consuming dung, they improve nutrient recycling and soil structure. Dung beetles have been further shown to improve soil conditions and plant growth on rehabilitated coal mines in South Africa. They are also important for the dispersal of seeds present in animals' dung, influencing seed burial and seedling recruitment in tropical forests. They can protect livestock, such as cattle, by removing the dung which, if left, could provide habitat for pests such as flies. Therefore, many countries have introduced the creatures for the benefit of animal husbandry. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces. In Australia, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) commissioned the Australian Dung Beetle Project (1965–1985) which, led by George Bornemissza, sought to introduce species of dung beetles from South Africa and Europe. 
The successful introduction of 23 species was made, most notably Digitonthophagus gazella and Euoniticellus intermedius, which has resulted in improvement of the quality and fertility of Australian cattle pastures, along with a reduction in the population of pestilent Australian bush flies by around 90%. In 1995 it was reported that dung beetles were being trialled in the Sydney beach suburb of Curl Curl to deal with dog droppings. An application made by Landcare Research to import up to 11 species of dung beetle into New Zealand was approved in 2011. As well as improving pasture soils the Dung Beetle Release Strategy Group said that it would result in a reduction in emissions of nitrous oxide (a greenhouse gas) from agriculture. There was, however, strong opposition from some at the University of Auckland, and a few others, based on the risks of the dung beetles acting as vectors of disease. There were public health researchers at the University of Auckland who agreed with the Environmenal Protection Authority's risk assessment. Several Landcare programmes in Australia involved schoolchildren collecting dung beetles. The African dung beetle (D. gazella) was introduced in several locations in North and South America and has been spreading its distribution to other regions by natural dispersal and accidental transportation, and is now probably naturalized in most countries between Mexico and Argentina. The exotic species might be useful for controlling diseases of livestock in commercial areas, and might displace native species in modified landscapes; however, data is not conclusive about its effect on native species in natural environments and further monitoring is required. The Mediterranean dung beetle (Bubas bison) has been used in conjunction with biochar stock fodder to reduce emissions of nitrous oxide and carbon dioxide, which are both greenhouse gases. The beetles work the biochar-enriched dung into the soil without the use of machines. Scientists in Canberra in 1965 discovered that Dung beetles (Scarabaeids), specifically Onthophagus australis Guérin-Méneville, improve plant yields using their dung. Japanese millet was studied and data on nutrient uptake. These plants were placed in pots lacking nitrogen, phosphorus, and sulfur. Cow-dung was then added in treatment groups with or without O. australis. Some treatment groups even had two out of the three nutrients supplemented in the pots. Comparisons of the treatment and control groups were made to show that top growth and roots significantly increased when the dung was mixed well into the soil in the pots. Results showed that dung beetle activity greatly improved plant life. The dung has little impact alone, but in combination with the dung beetle, the nutritional value for the plants increases greatly. This suggests that dung beetles have many positive implications for the environment, including a beneficial role with plant life. In culture Some dung beetles are used as food in South East Asia and a variety of dung beetle species have been used therapeutically (and are still being used in traditionally living societies) in potions and folk medicines to treat a number of illnesses and disorders. In Isan, Northeastern Thailand, the local people eat many different kinds of insects, including the dung beetle. 
There is an Isan song กุดจี่หายไปใหน "Where Did the Dung Beetle Go", which relates the replacement of water buffalo with the "metal" buffalo, which does not provide the dung needed for the dung beetle and has led to the increasing rarity of the dung beetle in the agricultural region. Ancient Egypt Several species of the dung beetle, most notably the species Scarabaeus sacer (often referred to as the sacred scarab), enjoyed a sacred status among the ancient Egyptians. Egyptian hieroglyphic script uses the image of the beetle to represent a triliteral phonetic that Egyptologists transliterate as xpr or ḫpr and translate as "to come into being", "to become" or "to transform". The derivative term xprw or ḫpr(w) is variously translated as "form", "transformation", "happening", "mode of being" or "what has come into being", depending on the context. It may have existential, fictional, or ontologic significance. The scarab was linked to Khepri ("he who has come into being"), the god of the rising sun. The ancients believed that the dung beetle was only male-sexed, and reproduced by depositing semen into a dung ball. The supposed self-creation of the beetle resembles that of Khepri, who creates himself out of nothing. Moreover, the dung ball rolled by a dung beetle resembles the sun. Plutarch wrote: The ancient Egyptians believed that Khepri renewed the sun every day before rolling it above the horizon, then carried it through the other world after sunset, only to renew it, again, the next day. Some New Kingdom royal tombs exhibit a threefold image of the sun god, with the beetle as symbol of the morning sun. The astronomical ceiling in the tomb of Ramses VI portrays the nightly "death" and "rebirth" of the sun as being swallowed by Nut, goddess of the sky, and re-emerging from her womb as Khepri. The image of the scarab, conveying ideas of transformation, renewal, and resurrection, is ubiquitous in ancient Egyptian religious and funerary art. Excavations of ancient Egyptian sites have yielded images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals, dating from the Sixth Dynasty and up to the period of Roman rule. They are generally small, bored to allow stringing on a necklace, and the base bears a brief inscription or cartouche. Some have been used as seals. Pharaohs sometimes commissioned the manufacture of larger images with lengthy inscriptions, such as the commemorative scarab of Queen Tiye. Massive sculptures of scarabs can be seen at Luxor Temple, at the Serapeum in Alexandria (see Serapis) and elsewhere in Egypt. The scarab was of prime significance in the funerary cult of ancient Egypt. Scarabs, generally, though not always, were cut from green stone, and placed on the chest of the deceased. Perhaps the most famous example of such "heart scarabs" is the yellow-green pectoral scarab found among the entombed provisions of Tutankhamen. It was carved from a large piece of Libyan desert glass. The purpose of the "heart scarab" was to ensure that the heart would not bear witness against the deceased at judgement in the Afterlife. Other possibilities are suggested by the "transformation spells" of the Coffin Texts, which affirm that the soul of the deceased may transform (xpr) into a human being, a god, or a bird and reappear in the world of the living. One scholar comments on other traits of the scarab connected with the theme of death and rebirth: In contrast to funerary contexts, some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. 
The best-known of these being Judean LMLK seals (8 of 21 designs contained scarab beetles), which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. The scarab remains an item of popular interest thanks to modern fascination with the art and beliefs of ancient Egypt. Scarab beads in semiprecious stones or glazed ceramics can be purchased at most bead shops, while at Luxor Temple a massive ancient scarab has been roped off to discourage visitors from rubbing the base of the statue "for luck". In literature In Aesop's fable "The Eagle and the Beetle", the eagle kills a hare that has asked for sanctuary with a beetle. The beetle then takes revenge by twice destroying the eagle's eggs. The eagle, in despair, flies up to Olympus and places her latest eggs in Zeus's lap, beseeching the god to protect them. When the beetle finds out what the eagle has done, it stuffs itself with dung, goes straight up to Zeus and flies right into his face. Zeus is startled at the sight of the unpleasant creature, jumping to his feet so that the eggs are broken. Learning of the origin of their feud, Zeus attempts to mediate and, when his efforts to mediate fail, he changes the breeding season of the eagle to a time when the beetles are not above ground. Aristophanes alluded to Aesop's fable several times in his plays. In Peace, the hero rides up to Olympus to free the goddess Peace from her prison. His steed is an enormous dung beetle which has been fed so much dung that it has grown to monstrous size. Hans Christian Andersen's "The Dung Beetle" tells the story of a dung beetle who lives in the stable of the king's horses in an imaginary kingdom. When he demands golden shoes like those the king's horse wears and is refused, he flies away and has a series of adventures, which are often precipitated by his feeling of superiority to other animals. He finally returns to the stable having decided (against all logic) that it is for him that the king's horse wears golden shoes. In Franz Kafka's The Metamorphosis, the transformed character of Gregor Samsa is called an "old dung beetle" (alter Mistkäfer) by a charwoman. See also Catharsius, an important dung beetle genus in African and Asian environments Addo Elephant National Park, site of the largest remaining population of the endangered flightless dung beetle (Circellium bacchus). List of dung beetle and chafer (Scarabaeoidea) species recorded in Britain Rotating locomotion in living systems References Bibliography External links Feature: 'What to do with too much poo' – The success story behind the introduction of dung beetles in Australia at cosmosmagazine.com Beetles as religious symbols at insects.org Scarabaeinae Research Network Dung beetles at the Australia Museum Catharsius, an international group working on taxonomy and ecology of Western African dung beetles Tomas Libich, Congo dung beetle sp1 and Congo dung beetle sp2 photos Dung Beetles in action (video) by The WILD Foundation/Boyd Norton Dung Beetles in New Zealand (proposed release of dung beetles, with background research) Marcus Byrne The dance of the dung beetle Ted conference about dung beetle behavior. Dung Beetle Ecosystem Engineers Dung Beetle Ecosystem Engineers – expanding the range of dung beetles in Australia and analysing their performance for livestock producers. Coprophagous insects Scarabaeidae Articles containing video clips Polyphyletic groups Insect common names
Dung beetle
[ "Biology" ]
3,633
[ "Phylogenetics", "Polyphyletic groups" ]
630,022
https://en.wikipedia.org/wiki/Abstract%20nonsense
In mathematics, abstract nonsense, general abstract nonsense, generalized abstract nonsense, and general nonsense are nonderogatory terms used by mathematicians to describe long, theoretical parts of a proof they skip over when readers are expected to be familiar with them. These terms are mainly used for abstract methods related to category theory and homological algebra. More generally, "abstract nonsense" may refer to a proof that relies on category-theoretic methods, or even to the study of category theory itself. Background Roughly speaking, category theory is the study of the general form, that is, categories of mathematical theories, without regard to their content. As a result, mathematical proofs that rely on category-theoretic ideas often seem out-of-context, somewhat akin to a non sequitur. Authors sometimes dub these proofs "abstract nonsense" as a light-hearted way of alerting readers to their abstract nature. Labeling an argument "abstract nonsense" is usually not intended to be derogatory, and is instead used jokingly, in a self-deprecating way, affectionately, or even as a compliment to the generality of the argument. Alexander Grothendieck was critical of this notion, and stated that: Certain ideas and constructions in mathematics share a uniformity throughout many domains, unified by category theory. Typical methods include the use of classifying spaces and universal properties, use of the Yoneda lemma, natural transformations between functors, and diagram chasing. When an audience can be assumed to be familiar with the general form of such arguments, mathematicians will use the expression "Such and such is true by abstract nonsense" rather than provide an elaborate explanation of particulars. For example, one might say that "By abstract nonsense, products are unique up to isomorphism when they exist", instead of arguing about how these isomorphisms can be derived from the universal property that defines the product. This allows one to skip proof details that can be considered trivial or not providing much insight, focusing instead on genuinely innovative parts of a larger proof. History The term predates the foundation of category theory as a subject itself. Referring to a joint paper with Samuel Eilenberg that introduced the notion of a "category" in 1942, Saunders Mac Lane wrote the subject was 'then called "general abstract nonsense"'. The term is often used to describe the application of category theory and its techniques to less abstract domains. The term is believed to have been coined by the mathematician Norman Steenrod, himself one of the developers of the categorical point of view. Notes and references External links Usage in mathematical exposition from Noam Elkies' class notes Mathematical terminology Category theory
Abstract nonsense
[ "Mathematics" ]
540
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory", "nan" ]
630,088
https://en.wikipedia.org/wiki/Vela%20%28satellite%29
Vela was the name of a group of satellites developed as the Vela Hotel element of Project Vela by the United States to detect nuclear detonations and monitor Soviet Union compliance with the 1963 Partial Test Ban Treaty. Vela started out as a small budget research program in 1959. It ended 26 years later as a successful, cost-effective military space system, which also provided scientific data on natural sources of space radiation. In the 1970s, the nuclear detection mission was taken over by the Defense Support Program (DSP) satellites. In the late 1980s, it was augmented by the Navstar Global Positioning System (GPS) satellites. The program is now called the Integrated Operational NuDet (Nuclear Detonation) Detection System (IONDS). Deployment Twelve satellites were built, six of the Vela Hotel design and six of the Advanced Vela design. The Vela Hotel series was to detect nuclear tests in space, while the Advanced Vela series was to detect not only nuclear explosions in space but also in the atmosphere. All spacecraft were manufactured by TRW and launched in pairs, either on an Atlas–Agena or Titan III-C boosters. They were placed in orbits of 118,000 km (73,000 miles) to avoid particle radiation trapped in the Van Allen radiation belts. Their apogee was about one-third of the distance to the Moon. The first Vela Hotel pair was launched on 17 October 1963, one week after the Partial Test Ban Treaty went into effect, and the last in 1965. They had a design life of six months, but were only actually shut down after five years. Advanced Vela pairs were launched in 1967, 1969, and 1970. They had a nominal design life of 18 months, later changed to seven years. However, the last satellite to be shut down was Vehicle 9 in 1984, which had been launched in 1969 and had lasted nearly 15 years. The Vela series began with the launch of Vela 1A and 1B on 17 October 1963, a flight also marking the maiden voyage of the Atlas-Agena SLV-3 vehicle. The second pair of satellites launched on 17 July 1964, and the third on 20 July 1965. The last launch miscarried slightly when one Atlas vernier engine shut down at liftoff, while the other vernier operated at above-normal thrust levels. This resulted in a slightly lower than normal inclination for the satellites, however the mission was carried out successfully. The problem was traced to a malfunction of the vernier LOX poppet valve. Subsequent Vela satellites were switched to the Titan IIIC booster due to their increased weight and complexity. Three more sets were launched on 28 April 1967, 23 May 1969, and 8 April 1970. The last pair of Vela satellites operated until 1985, when they were finally shut down; the Air Force claimed them to be the world's longest operating satellites. They remained in orbit until their orbits decayed at the end of 1992. Instruments The original Vela satellites were equipped with 12 external X-ray detectors and 18 internal neutron and gamma-ray detectors. They were equipped with solar panels generating 90 watts. The Advanced Vela satellites were additionally equipped with two non-imaging silicon photodiode sensors called bhangmeters which monitored light levels over sub-millisecond intervals. They could determine the location of a nuclear explosion to within about 3,000 miles. 
Atmospheric nuclear explosions produce a unique signature, often called a "double-humped curve": a short and intense flash lasting around 1 millisecond, followed by a second much more prolonged and less intense emission of light taking a fraction of a second to several seconds to build up. The effect occurs because the surface of the early fireball is quickly overtaken by the expanding atmospheric shock wave composed of ionised gas. Although it emits a considerable amount of light itself it is opaque and prevents the far brighter fireball from shining through. As the shock wave expands, it cools down becoming more transparent allowing the much hotter and brighter fireball to become visible again. No single natural phenomenon is known to produce this signature, although there was speculation that the Velas could record exceptionally rare natural double events, such as a meteoroid strike on the spacecraft that produces a bright flash or triggering on a lightning superbolt in the Earth's atmosphere, as may have occurred in the Vela incident. They were also equipped with sensors which could detect the electromagnetic pulse from an atmospheric explosion. Additional power was required for these instruments, and these larger satellites consumed 120 watts generated from solar panels. Serendipitously, the Vela satellites were the first devices ever to detect cosmic gamma ray bursts. Role in discovering gamma-ray bursts On 2 July 1967, at 14:19 UTC, the Vela 4 and Vela 3 satellites detected a flash of gamma radiation unlike any known nuclear weapons signature. Uncertain what had happened but not considering the matter particularly urgent, the team at the Los Alamos Scientific Laboratory, led by Ray Klebesadel, filed the data away for investigation. As additional Vela satellites were launched with better instruments, the Los Alamos team continued to find inexplicable gamma-ray bursts in their data. By analyzing the different arrival times of the bursts as detected by different satellites, the team was able to determine rough estimates for the sky positions of sixteen bursts and definitively rule out a terrestrial or solar origin. Contrary to popular belief, the data was never classified. After thorough analysis, the findings were published in 1973 as an Astrophysical Journal article entitled "Observations of Gamma-Ray Bursts of Cosmic Origin". This alerted the astronomical community to the existence of gamma-ray bursts, now recognised as the most violent events in the universe. Vela 5A and 5B The scintillation X-ray detector (XC) aboard Vela 5A and its twin Vela 5B consisted of two 1 mm thick NaI(Tl) crystals mounted on photomultiplier tubes and covered by a 0.13 mm thick beryllium window. Electronic thresholds provided two energy channels, 3–12 keV and 6–12 keV. In addition to the x-ray Nova announcement indicated above the XC Detector aboard Vela 5A and 5B also discovered and announced the first X-ray burst ever reported. The announcement of this discovery predated the initial announcement of the discovery of gamma-ray bursts by 2 years. In front of each crystal was a slat collimator providing a full width at half maximum (FWHM) aperture of c. 6.1 × 6.1 degrees. The effective detector area was c. 26 cm2. The detectors scanned a great circle every 60 seconds, and covered the whole sky every 56 hours. 
Sensitivity to celestial sources was severely limited by the high intrinsic detector background, equivalent to about 80% of the signal from the Crab Nebula, one of the brightest sources in the sky at these wavelengths. The Vela 5B satellite X-ray detector remained functional for over ten years. Vela 6A and 6B Like the previous Vela 5 satellites, the Vela 6 nuclear test detection satellites were part of a program run jointly by the Advanced Research Projects of the U.S. Department of Defense and the U.S. Atomic Energy Commission, managed by the U.S. Air Force. The twin spacecraft, Vela 6A and 6B, were launched on 8 April 1970. Data from the Vela 6 satellites were used to look for correlations between gamma-ray bursts and X-ray events. At least two good candidates were found, GB720514 and GB740723. The X-ray detectors failed on Vela 6B on 27 January 1972 and on Vela 6A on 12 March 1972. Controversial observations Some controversy still surrounds the Vela program. On 22 September 1979 the Vela 5B satellite (also known as Vela 10 and IRON 6911) detected the characteristic double flash of an atmospheric nuclear explosion near the Prince Edward Islands. Still unsatisfactorily explained, this event has become known as the Vela incident. President Jimmy Carter initially deemed the event to be evidence of a joint Israeli and South African nuclear test, though the now-declassified report of a scientific panel he subsequently appointed while seeking reelection concluded that it was probably not the event of a nuclear explosion. In 2018, a new study confirmed that it is highly likely that it was a nuclear test, conducted by Israel. An alternative explanation involves a magnetospheric event affecting the instruments. An earlier incident occurred when an intense solar storm on 4 August 1972 triggered the system to event mode as if an explosion occurred, but this was quickly resolved by personnel monitoring the data in real-time. See also Timeline of artificial satellites and space probes References External links Includes material from NASA Goddard's Remote Sensing Tutorial. Orbits (the orbital elements are not updated, as no reliable tracking information is being provided for these satellites. The orbits in the following links may be based on data from older epochs): 1963 in spaceflight Reconnaissance satellites of the United States Nuclear weapons testing Military space program of the United States Gamma-ray bursts Satellite series Military equipment introduced in the 1960s Measurement and signature intelligence
Vela (satellite)
[ "Physics", "Astronomy", "Technology" ]
1,884
[ "Physical phenomena", "Nuclear weapons testing", "Astronomical events", "Gamma-ray bursts", "Environmental impact of nuclear power", "Stellar phenomena" ]
630,099
https://en.wikipedia.org/wiki/Beam%20%28structure%29
A beam is a structural element that primarily resists loads applied laterally across the beam's axis (an element designed to carry a load pushing parallel to its axis would be a strut or column). Its mode of deflection is primarily by bending, as loads produce reaction forces at the beam's support points and internal bending moments, shear, stresses, strains, and deflections. Beams are characterized by their manner of support, profile (shape of cross-section), equilibrium conditions, length, and material. Beams are traditionally descriptions of building or civil engineering structural elements, where the beams are horizontal and carry vertical loads. However, any structure may contain beams, such as automobile frames, aircraft components, machine frames, and other mechanical or structural systems. Any structural element, in any orientation, that primarily resists loads applied laterally across the element's axis is a beam. Overview Historically a beam is a squared timber, but may also be made of metal, stone, or a combination of wood and metal such as a flitch beam. Beams primarily carry vertical gravitational forces, but they are also used to carry horizontal loads such as those due to earthquake or wind, or in tension to resist rafter thrust (tie beam) or compression (collar beam). The loads carried by a beam are transferred to columns, walls, or girders, then to adjacent structural compression members, and eventually to the ground. In light frame construction, joists may rest on beams. Classification based on supports In engineering, beams are of several types: Simply supported – a beam supported on the ends which are free to rotate and have no moment resistance. Fixed or encastré (encastrated) – a beam supported on both ends and restrained from rotation. Overhanging – a simple beam extending beyond its support on one end. Double overhanging – a simple beam with both ends extending beyond its supports on both ends. Continuous – a beam extending over more than two supports. Cantilever – a projecting beam fixed only at one end. Trussed – a beam strengthened by adding a cable or rod to form a truss. Beam on spring supports Beam on elastic foundation Second moment of area (area moment of inertia) In the beam equation, the variable I represents the second moment of area or moment of inertia: it is the sum, along the axis, of dA·r2, where r is the distance from the neutral axis and dA is a small patch of area. It measures not only the total area of the beam section, but the square of each patch's distance from the axis. A larger value of I indicates a stiffer beam, more resistant to bending. Stress Loads on a beam induce internal compressive, tensile and shear stresses (assuming no torsion or axial loading). Typically, under gravity loads, the beam bends into a slightly circular arc, with its original length compressed at the top to form an arc of smaller radius, while correspondingly stretched at the bottom to enclose an arc of larger radius in tension. This is known as sagging; while a configuration with the top in tension, for example over a support, is known as hogging. The axis of the beam retaining its original length, generally halfway between the top and bottom, is under neither compression nor tension, and defines the neutral axis (dotted line in the beam figure). Above the supports, the beam is exposed to shear stress. There are some reinforced concrete beams in which the concrete is entirely in compression with tensile forces taken by steel tendons. 
These beams are known as prestressed concrete beams, and are fabricated so that the compression produced exceeds the expected tension under loading conditions. High strength steel tendons are stretched while the beam is cast over them. Then, when the concrete has cured, the tendons are slowly released and the beam is immediately under eccentric axial loads. This eccentric loading creates an internal moment, and, in turn, increases the moment-carrying capacity of the beam. Prestressed beams are commonly used on highway bridges. The primary tool for structural analysis of beams is the Euler–Bernoulli beam equation. This equation accurately describes the elastic behaviour of slender beams where the cross sectional dimensions are small compared to the length of the beam. For beams that are not slender a different theory needs to be adopted to account for the deformation due to shear forces and, in dynamic cases, the rotary inertia. The beam formulation adopted here is that of Timoshenko and comparative examples can be found in NAFEMS Benchmark Challenge Number 7. Other mathematical methods for determining the deflection of beams include the "method of virtual work" and the "slope deflection method". Engineers are interested in determining deflections because the beam may be in direct contact with a brittle material such as glass. Beam deflections are also minimized for aesthetic reasons. A visibly sagging beam, even if structurally safe, is unsightly and to be avoided. A stiffer beam (one with a high modulus of elasticity and/or a higher second moment of area) creates less deflection. Mathematical methods for determining the beam forces (internal forces of the beam and the forces that are imposed on the beam support) include the "moment distribution method", the force or flexibility method and the direct stiffness method. General shapes Most beams in reinforced concrete buildings have rectangular cross sections, but a more efficient cross section for a beam is an I- or H-shaped section, which is typically seen in steel construction. Because of the parallel axis theorem and the fact that most of the material is away from the neutral axis, the second moment of area of the beam increases, which in turn increases the stiffness. An I-beam is only the most efficient shape in one direction of bending: up and down, looking at the profile as an 'I'. If the beam is bent side to side, it functions as an 'H', where it is less efficient. The most efficient shape for both directions in 2D is a box (a square shell); the most efficient shape for bending in any direction, however, is a cylindrical shell or tube. For unidirectional bending, the I-beam or wide flange beam is superior. Efficiency means that for the same cross sectional area (volume of beam per length) subjected to the same loading conditions, the beam deflects less. Other shapes, like L-beam (angles), C (channels), T-beam and double-T or tubes, are also used in construction when there are special requirements. Walers and struts A waler and strut system provides horizontal bracing for small trenches, ensuring the secure installation of utilities. It is specifically designed to work in conjunction with steel trench sheets. Thin walled A thin walled beam is a very useful type of beam (structure). The cross section of thin walled beams is made up from thin panels connected among themselves to create closed or open cross sections of a beam (structure). Typical closed sections include round, square, and rectangular tubes. Open sections include I-beams, T-beams, L-beams, and so on.
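The efficiency claim made under General shapes above, and the point about stiffness per unit cross-sectional area made for thin walled beams immediately below, can be illustrated numerically. The following is a minimal sketch only; the dimensions and helper functions are illustrative assumptions chosen so that both sections have the same area, not values taken from this article.

```python
def i_rectangle(b, h):
    """Second moment of area of a solid b x h rectangle about its horizontal centroidal axis."""
    return b * h**3 / 12.0

def i_symmetric_i_section(bf, tf, hw, tw):
    """Second moment of area of a doubly symmetric I-section about its strong axis.

    bf, tf: flange width and thickness; hw, tw: web height and thickness.
    Computed as the enclosing rectangle minus the two side notches.
    """
    d = hw + 2.0 * tf                      # overall depth
    return bf * d**3 / 12.0 - (bf - tw) * hw**3 / 12.0

# Illustrative dimensions (metres), chosen so both sections have the same area of 0.01 m^2.
area_rect = 0.10 * 0.10
area_i = 2 * 0.15 * 0.02 + 0.20 * 0.02
assert abs(area_rect - area_i) < 1e-12

I_rect = i_rectangle(0.10, 0.10)
I_i = i_symmetric_i_section(bf=0.15, tf=0.02, hw=0.20, tw=0.02)

# Mid-span deflection of a simply supported beam under a central point load is
# F*L^3 / (48*E*I), so deflection scales as 1/I for the same load, span and material.
print(f"I (solid square) = {I_rect:.3e} m^4")
print(f"I (I-section)    = {I_i:.3e} m^4")
print(f"Stiffness ratio  = {I_i / I_rect:.1f}x for the same cross-sectional area")
```

With these assumed numbers the I-section is roughly ten times stiffer than a solid square of equal area: pushing the same material farther from the neutral axis raises the second moment of area, which is the same reasoning that favours the thin walled tubes and open sections discussed next.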
Thin walled beams exist because their bending stiffness per unit cross sectional area is much higher than that for solid cross sections such a rod or bar. In this way, stiff beams can be achieved with minimum weight. Thin walled beams are particularly useful when the material is a composite laminate. Pioneer work on composite laminate thin walled beams was done by Librescu. The torsional stiffness of a beam is greatly influenced by its cross sectional shape. For open sections, such as I sections, warping deflections occur which, if restrained, greatly increase the torsional stiffness. See also Airy points Beam engine Building code Cantilever Classical mechanics Deflection (engineering) Elasticity (physics) and Plasticity (physics) Euler–Bernoulli beam theory Finite element method in structural mechanics Flexural modulus Free body diagram Influence line Materials science and Strength of materials Moment (physics) Poisson's ratio Post and lintel Shear strength Statics and Statically indeterminate Stress (mechanics) and Strain (materials science) Thin-shell structure Timber framing Truss Ultimate tensile strength and Hooke's law Yield (engineering) References Further reading External links American Wood Council: Free Download Library Wood Construction Data Introduction to Structural Design , U. Virginia Dept. Architecture Glossary Course Sampler Lectures, Projects, Tests Beams and Bending review points (follow using next buttons) Structural Behavior and Design Approaches lectures (follow using next buttons) U. Wisconsin–Stout, Strength of Materials online lectures, problems, tests/solutions, links, software Beams I – Shear Forces and Bending Moments Bridge components Solid mechanics Statics Structural system
Beam (structure)
[ "Physics", "Technology", "Engineering" ]
1,780
[ "Structural engineering", "Solid mechanics", "Statics", "Building engineering", "Classical mechanics", "Structural system", "Mechanics", "Bridge components", "Components" ]
630,142
https://en.wikipedia.org/wiki/Transdermal%20implant
Transdermal implants, or dermal piercings, are a form of body modification used in both medical and aesthetic contexts that, in contrast to subdermal implants, consist of an object placed partially below and partially above the skin, and are thus implanted transdermally. Two techniques are prevalent, using post-like and microdermal implants respectively. Although the skin around such implants generally heals as if it were a piercing, in the body piercing community these types of modification are commonly considered fairly "heavy", both because of the complexity of the procedure and because of the potential social implications. Procedure When the procedure is done using a post-like implant, an incision is made a small distance from the site. The skin is then lifted and the implant is passed through. Then, a hole is opened at the site for it to pass through, and it is moved so that the top part fills the hole. The implants used for this are generally small and not textured in any way except rounding. If a more graphic implant is desired, it is generally done in two parts. First, the base is inserted the same way a single-part implant would be, except that the base implant is threaded. It may either stick out like a bolt, or be inward like a nut. When this is done, the top half is screwed on. This type is usually done for spikes and/or horns. In any case, the part of the implant which passes under the skin generally is somewhat large and has holes. The skin will grow into them, making it more permanent. Microdermal implants Microdermal implants are a form of body modification which gives the aesthetic appearance of a transdermal implant, without the complications of the much more invasive surgery associated with transdermal implants. Microdermals are single point piercings which are a sort of surface piercing. Microdermal implants can be placed practically anywhere on the surface of the skin on the body, but are different from conventional piercings in that they are composed of two components: an anchor, which is implanted underneath the skin, with a step protruding from (or flush with) the surface of the surrounding skin, and the changeable jewellery, which is screwed into the threaded hole in the step of the anchor. They should not be implanted in hands, feet, wrists, collarbones or any area that is not flat or that is near a joint. Procedure The procedure is usually performed using a dermal punch or a needle; the two tools create the pouch that holds the anchor in different ways. When using a needle, the pouch is made by separating the skin; when using a dermal punch, it is made by removing a small amount of tissue. A microdermal punch is less painful and is therefore commonly used. The process starts by identifying the point of piercing on the sterilized area, which is marked with a surgical marker. The microdermal punch is then used to remove the skin tissue. The anchor is then placed under the skin and a piece of jewelry is placed using surgical forceps. See also Body modification Body piercing Body piercing materials Subdermal implant References Body piercing jewellery Implants (medicine) Drug delivery devices Dosage forms
Transdermal implant
[ "Chemistry" ]
662
[ "Pharmacology", "Drug delivery devices" ]
630,148
https://en.wikipedia.org/wiki/Scimitar%20babbler
The scimitar babblers are birds in the genera Pomatorhinus and Jabouilleia of the large Old World babbler family of passerines. These are birds of tropical Asia, with the greatest number of species occurring in hills of the Himalayas. Scimitar babblers are rangy, medium-sized, floppy-tailed land birds with soft fluffy plumage. They have strong legs and are quite terrestrial. This group is not strongly migratory, and most species have short rounded wings, and a weak flight. Scimitar babblers have long, downward-curved bills, used to work through the leaf litter, which give the group its name. They are typically long tailed, dark brown above, and white or orange-brown below. Many have striking head patterns, with a broad black band through the eye, bordered with white above and below. Most scimitar babblers are jungle species, difficult to observe in the dense vegetation they prefer, but like other babblers, these are noisy birds, and the characteristic bubbling calls are often the best indication that these birds are present. As with other babbler species, they frequently occur in groups of up to a dozen, and the rain forest species like Indian scimitar babbler often occur in the mixed feeding flocks typical of tropical Asian jungle. The genera are: Pomatorhinus - 15 species Jabouilleia - 2 species References Old World babblers Timaliidae Bird common names Paraphyletic groups
Scimitar babbler
[ "Biology" ]
313
[ "Phylogenetics", "Paraphyletic groups" ]
630,209
https://en.wikipedia.org/wiki/Radio%20Service%20Software
Radio Service Software (RSS) is a software package used to program commercial Motorola two-way radios and cellular telephones. An update of RSS is CPS, a Windows-based version of the package used for some of Motorola's newer radio models. Radios are connected to PCs via the serial port, and proprietary programming cables. The use of genuine Motorola OEM programming cables is strongly suggested, as aftermarket brands are not as reliable and could lead to radio damage. Licensing RSS (the Carrier and Super Agent forms) is available to authorized professionals from Motorola. The license does not permit resale of the software. Unauthorized possession and use of RSS can lead to criminal charges and prosecution, as well as legal action by Motorola. In September 1999, Motorola alleged unauthorised use of this software as part of a case involving sale of non-US Motorola 2-way radios in the US. On July 17, 2000, Motorola filed a lawsuit against five individuals accused of selling copies of RSS through eBay. References Mobile software
Radio Service Software
[ "Technology" ]
212
[ "Mobile software stubs", "Mobile technology stubs" ]
630,237
https://en.wikipedia.org/wiki/Vertical%20loop
The generic roller coaster vertical loop, also known as a Loop-the-loop or a Loop-de-loop, in which a section of track causes the riders to complete a 360 degree turn, is the most basic of roller coaster inversions. At the top of the loop, riders are completely inverted. History The vertical loop is not a recent roller coaster innovation. Its origins can be traced back to the 1850s when centrifugal railways were built in France and Great Britain. The rides relied on centripetal forces to hold the car in the loop. One early looping coaster was shut down after an accident. Later attempts to build a looping roller coaster were carried out during the late 19th century with the Flip Flap Railway at Sea Lion Park, designed by roller coaster engineer Lina Beecher. The ride was designed with a completely circular loop (rather than the teardrop shape used by many modern looping roller coasters), and caused neck injuries due to the intense G-forces pulled with the tight radius of the loop. The next attempt at building a looping roller coaster was in 1901, when Edwin Prescott built the Loop the Loop at Coney Island. This ride used the modern teardrop-shaped loop and a steel structure; however, more people wanted to watch the attraction than ride it. In 1904, Beecher further redesigned the vertical loop to have an even more elliptical design with Olentangy Park's Loop-the-Loop. Vertical loops were not attempted again until the design of Great American Revolution at Six Flags Magic Mountain, which opened in 1976. Its success depended largely on its clothoid-based (rather than circular) loop. The loop became a phenomenon, and many parks hastened to build roller coasters featuring vertical loops. In 2000, a modern looping wooden roller coaster was built, the Son of Beast at Kings Island. Although the ride itself was made of wood, the loop was supported by a steel structure. Due to maintenance issues, however, the loop was removed at the end of the 2006 season. The loop was not the cause of the ride's issues, but was removed as a precautionary measure. Due to an unrelated issue in 2009, Son of Beast was closed until 2012, when Kings Island announced that it would be removed. On June 22, 2013, Six Flags Magic Mountain introduced Full Throttle, a steel launch coaster with a loop, the tallest in the world at the time of its opening. Currently, the largest vertical loop is located on Flash, a roller coaster produced by Mack Rides at Lewa Adventure in Shaanxi, China. The record is shared by Hyper Coaster in Turkey's Land of Legends theme park, built in 2018, which is identical to Flash at Lewa Adventure. Loops on non-roller coasters In 2002, the Swiss company Klarer Freizeitanlagen AG began working on a safe design for a looping water slide. Since then, multiple installations of the slide, named the AquaLoop and constructed by companies including Polin, Klarer, Aquarena and WhiteWater West, have appeared in many parks. This ride does not feature a vertical loop, instead using an inclined loop (a vertical loop tilted at an angle), which puts less force on the rider. AquaLoop slides feature a safety hatch, which can be opened by a rider in case they do not reach the highest point of the loop. Physics/mechanics Most roller coaster loops are not circular in shape. A commonly used shape is the clothoid loop, which resembles an inverted tear drop and allows for less intense G-forces throughout the element for the rider; a rough numerical comparison for a constant-radius loop is sketched below.
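To see why a constant-radius (circular) loop is hard on riders, the following minimal sketch applies conservation of energy to an idealised, frictionless circular loop and estimates the rider load at the bottom and top. The radius and entry speed are illustrative assumptions, not figures for any real ride.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def circular_loop_g_forces(radius_m, bottom_speed_ms):
    """Approximate rider g-forces at the bottom and top of an ideal circular loop.

    Assumes a frictionless track and a point-mass train, so the speed at the
    top follows from conservation of energy: v_top^2 = v_bottom^2 - 4*g*r.
    The 'g-force' is the track's normal force on the rider divided by m*g.
    """
    v_top_sq = bottom_speed_ms**2 - 4 * g * radius_m
    if v_top_sq < g * radius_m:
        raise ValueError("Too slow: an ideal train would lose track contact near the top.")
    # Bottom: track supplies the rider's weight plus the centripetal force.
    g_bottom = bottom_speed_ms**2 / (g * radius_m) + 1
    # Top: gravity supplies part of the centripetal force, so the load is lighter.
    g_top = v_top_sq / (g * radius_m) - 1
    return g_bottom, g_top

# Illustrative numbers only: a 10 m radius loop entered at 25 m/s (90 km/h).
bottom, top = circular_loop_g_forces(radius_m=10.0, bottom_speed_ms=25.0)
print(f"Rider load at bottom: {bottom:.1f} g, at top: {top:.1f} g")
```

With these assumed numbers the rider briefly feels over seven times their weight at the bottom but little more than normal weight at the top; a clothoid loop, by using a large radius at the bottom (where speed is highest) and a tight radius only near the top, evens out these extremes.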
The use of this shape was pioneered in 1976 on The New Revolution at Six Flags Magic Mountain, by Werner Stengel of leading coaster engineering firm Ing.-Büro Stengel GmbH. On the way up, from the bottom to the top of the loop, gravity is in opposition to the direction of the cars and will slow the train. The train is slowest at the top of the loop. Once beyond the top, gravity helps to pull the cars down around the bend. If the loop's curvature is constant, the rider is subjected to the greatest force at the bottom. If the curvature of the track changes suddenly, as from level to a circular loop, the greatest force is imposed almost instantly (see jerk). Gradual changes in curvature, as in the clothoid, reduce the force maximum (permitting more speed) and allow the rider time to cope safely with the changing force. This "gentling" runs somewhat contrary to the coaster's raison d'être. Schwarzkopf-designed roller coasters often feature near-circular loops (in case of Thriller even without any reduction of curvature between two almost perfectly circular loops) resulting in intense rides—a trademark for the designer. It is rare for a roller coaster to stall in a vertical loop, although this has happened before. The Psyké Underground coaster (then known as Sirocco) at Walibi Belgium once stranded riders upside-down for several hours. The design of the trains and the rider restraint system (in this case, a simple lap bar) prevented any injuries from occurring, and the riders were removed with the use of a cherry picker. A similar incident occurred on Demon at Six Flags Great America. References External links vertical loop simulator Ing.-Buero Stengel GmbH Loop Shapes in Roller Coasters Roller coaster elements
Vertical loop
[ "Technology" ]
1,107
[ "Roller coaster elements", "Components" ]
630,245
https://en.wikipedia.org/wiki/Roller%20coaster%20inversion
A roller coaster inversion is a roller coaster element in which the track turns riders upside-down and then returns them to an upright position. Early forms of inversions were circular in nature and date back to 1848 on the Centrifugal railway in Paris. These vertical loops produced massive g-force that was often dangerous to riders. As a result, the element eventually became non-existent with the last rides to feature the looping inversions being dismantled during the Great Depression. In 1975, designers from Arrow Development created the corkscrew, reviving interest in the inversion during the modern age of steel roller coasters. Elements have since evolved from simple corkscrews and vertical loops to more complex inversions such as Immelmann loops and cobra rolls. The Smiler at Alton Towers holds the world record for the number of inversions on a roller coaster with 14. History Prototypes (1848–1903) The first inversion in roller coaster history was part of the Centrifugal Railway of Paris, France, built in 1848. It consisted of a sloping track leading into a nearly circular vertical loop in diameter. During the early 1900s, many rides including vertical loops appeared around the world. These early loops had a major design flaw: the circular structure produced intense g-forces (hereafter "Gs"). The Flip Flap Railway, designed by Lina Beecher and built in 1895 on Coney Island of Brooklyn, United States, had a 25-foot circular loop at the end which though initially popular caused some discomfort in passenger's necks, and the ride soon closed. Loop the Loop, another looping coaster, was built later in Coney Island as well. This time the loops were slightly oval-shaped rather than circular, though not clothoid in shape like modern loops. Although the ride was safe, it had a low capacity, loading four people every five minutes (48 people per hour, compared to 1800 riders per hour on Corkscrew, an early modern coaster that opened in 1976), and was poorly received after the discomfort of the Flip Flap Railway. As their novelty wore off and their dangerous reputation spread, compounded with the developing Great Depression, the early looping coasters disappeared. Corkscrew (1968–1976) The concept of inverting riders was not revisited until the 1970s. In 1968, Karl Bacon of Arrow Dynamics created a prototype steel roller coaster with a corkscrew, the first of its kind. The prototype proved that a tubular steel track, first pioneered by Arrow to create Disneyland's Matterhorn Bobsleds in 1959, could execute inversions both safely and reliably. The full model of the prototype, aptly named Corkscrew, was then installed in Knott's Berry Farm in Buena Park, United States, making history as the world's first modern inverting roller coaster (it was relocated to Silverwood Park of Idaho in 1990). In 1976, the previously disastrous vertical loop was successfully revived when Anton Schwarzkopf constructed the Great American Revolution at Six Flags Magic Mountain of Valencia, United States, which became the world's first complete circuit looping roller coaster. Another roller coaster named Corkscrew, built in Cedar Point of Ohio in the same year, became the first with three inversions. Inversions (1977–present) The next few years brought innovations that are still popular in modern coasters. The shuttle roller coaster (non-complete circuit) was invented by Schwarzkopf in 1977 and realized at Kings Island with the Screamin' Demon coaster. 
These early incarnations used the weight-drop mechanism (as opposed to the later flywheel methods) to launch the trains. Built in 1978, the Loch Ness Monster in Busch Gardens Williamsburg became the first coaster with interlocking loops. It is still the only coaster with this feature, as the only other coasters containing interlocking loops are now defunct: Lightnin' Loops, built by Arrow in Six Flags Great Adventure, was sold in 1992, and Orient Express of Worlds of Fun was demolished in 2003. The first Schwarzkopf shuttle loops with a flywheel launch also first appeared in 1978. Arrow's Revolution, Europe's first looping coaster, was built in 1979 at Blackpool Pleasure Beach of England. In 1980, Carolina Cyclone opened at Carowinds as the first roller coaster with four inversions. The Orient Express opened at Worlds of Fun of Kansas City, United States, in 1980, with the newly invented batwing (not to be confused with a boomerang), a single track element with two inversions. In 1981, Vekoma invented the Boomerang coaster model, which became the most duplicated roller coaster ever. The first Boomerang was built at Reino Aventura (now Six Flags México) of Mexico City, Mexico in 1982. The Boomerang has had over 50 clones built worldwide from Doha, Qatar, to Tashkent, Uzbekistan. 1982 also brought the first five-inversion coaster, Arrow's Viper at Darien Lake in Darien, New York. The record for number of inversions was broken quickly in the following years. Arrow's Vortex at Kings Island, built in 1987, was the first to have six. The next year, Shockwave at Six Flags Great America broke that record with seven inversions. In 1995, Dragon Khan in Spain's Port Aventura became the first to have eight. In 2002, Colossus at Thorpe Park in Chertsey, Surrey, England was the first with ten. In 2013, The Smiler at Alton Towers in Staffordshire, England, broke the record again with 14 inversions. In 2000, Kings Island built Son of Beast, the world's first wooden roller coaster with a vertical loop. Until then, all roller coasters with any inversions were steel. After structural problems caused an incident in July 2006 that injured several riders, Son of Beast's loop was removed in December 2006 to make it possible to use lighter trains. In 2002, X, now X2, designed by Arrow, opened in Six Flags Magic Mountain. It is marketed as the world's first fourth-dimension roller coaster, capable of rotating riders upside-down independently of any track elements. This adds difficulty in delineating the number of inversions such rides have. As the riders physically rotate 360 degrees forward and backwards, proponents insist the number of inversions should not include only track elements. According to Guinness World Records, the roller coaster with the most inversions counted this way is Eejanaika (, Ain't it great?), another 4th Dimension roller coaster, in Fuji-Q Highland of Fujiyoshida, Japan, which rotates riders 14 times. Counting only track elements, however, Alton Tower's The Smiler has the world record for number of inversions, also rotating riders 14 times. Two or more wooden roller coasters with inversions opened in each of 2013, 2014, and 2017. As opposed to the vertical loop that Son of Beast had, Outlaw Run and Hades 360, Mine Blower and Goliath (at Six Flags Great America) have more complex inversions. Outlaw Run at Silver Dollar City has a double barrel roll and a 153° over-banked turn, and Hades 360 has a single corkscrew. 
Other elements which partially invert riders, such as the overbanked turn, which occasionally turns riders beyond 90 degrees, are not typically considered inversions. See also Roller coaster elements – includes a list of inversions List of roller coaster inversion records References External links Element Cross Reference at Roller Coaster Database Roller coaster elements
Roller coaster inversion
[ "Technology" ]
1,568
[ "Roller coaster elements", "Components" ]
630,360
https://en.wikipedia.org/wiki/Proper%20convex%20function
In mathematical analysis, in particular the subfields of convex analysis and optimization, a proper convex function is an extended real-valued convex function with a non-empty domain that never takes on the value −∞ and also is not identically equal to +∞. In convex analysis and variational analysis, a point (in the domain) at which some given function f is minimized is typically sought, where f is valued in the extended real number line [−∞, +∞]. Such a point, if it exists, is called a global minimum point of the function, and its value at this point is called the global minimum value of the function. If the function takes −∞ as a value then −∞ is necessarily the global minimum value and the minimization problem can be answered; this is ultimately the reason why the definition of "proper" requires that the function never take −∞ as a value. Assuming this, if the function's domain is empty or if the function is identically equal to +∞ then the minimization problem once again has an immediate answer. Extended real-valued functions for which the minimization problem is not solved by any one of these three trivial cases are exactly those that are called proper. Many (although not all) results whose hypotheses require that the function be proper add this requirement specifically to exclude these trivial cases. If the problem is instead a maximization problem (which would be clearly indicated, such as by the function being concave rather than convex) then the definition of "proper" is defined in an analogous (albeit technically different) manner but with the same goal: to exclude cases where the maximization problem can be answered immediately. Specifically, a concave function g is called proper if its negation −g, which is a convex function, is proper in the sense defined above. Definitions Suppose that f : X → [−∞, +∞] is a function taking values in the extended real number line. If f is a convex function or if a minimum point of f is being sought, then f is called proper if f(x) > −∞ for every x in X and if there also exists some point x0 in X such that f(x0) < +∞. That is, a function is proper if it never attains the value −∞ and its effective domain is nonempty; this means that there exists some x in X at which f(x) is a real number. Convex functions that are not proper are called improper convex functions. A proper concave function is, by definition, any function g such that −g is a proper convex function. Explicitly, if g is a concave function or if a maximum point of g is being sought, then g is called proper if its domain is not empty, it never takes on the value +∞, and it is not identically equal to −∞. Properties For every proper convex function f on Rn there exist some b in Rn and some real number r such that f(x) ≥ x · b − r for every x. The sum of two proper convex functions is convex, but not necessarily proper. For instance, if the sets A and B are non-empty convex sets in the vector space X, then the characteristic functions of A and of B are proper convex functions, but if A and B are disjoint then their sum is identically equal to +∞. The infimal convolution of two proper convex functions is convex but not necessarily proper convex. See also Citations References Convex analysis Types of functions
Proper convex function
[ "Mathematics" ]
578
[ "Mathematical objects", "Functions and mappings", "Types of functions", "Mathematical relations" ]
630,451
https://en.wikipedia.org/wiki/Hugo%20Hadwiger
Hugo Hadwiger (23 December 1908 in Karlsruhe, Germany – 29 October 1981 in Bern, Switzerland) was a Swiss mathematician, known for his work in geometry, combinatorics, and cryptography. Biography Although born in Karlsruhe, Germany, Hadwiger grew up in Bern, Switzerland. He did his undergraduate studies at the University of Bern, where he majored in mathematics but also studied physics and actuarial science. He continued at Bern for his graduate studies, and received his Ph.D. in 1936 under the supervision of Willy Scherrer. He was for more than forty years a professor of mathematics at Bern. Mathematical concepts named after Hadwiger Hadwiger's theorem in integral geometry classifies the isometry-invariant valuations on compact convex sets in d-dimensional Euclidean space. According to this theorem, any such valuation can be expressed as a linear combination of the intrinsic volumes; for instance, in two dimensions, the intrinsic volumes are the area, the perimeter, and the Euler characteristic. The Hadwiger–Finsler inequality, proven by Hadwiger with Paul Finsler, is an inequality relating the side lengths and area of any triangle in the Euclidean plane. It generalizes Weitzenböck's inequality and was generalized in turn by Pedoe's inequality. In the same 1937 paper in which Hadwiger and Finsler published this inequality, they also published the Finsler–Hadwiger theorem on a square derived from two other squares that share a vertex. Hadwiger's name is also associated with several important unsolved problems in mathematics: The Hadwiger conjecture in graph theory, posed by Hadwiger in 1943 and called by “one of the deepest unsolved problems in graph theory,” describes a conjectured connection between graph coloring and graph minors. The Hadwiger number of a graph is the number of vertices in the largest clique that can be formed as a minor in the graph; the Hadwiger conjecture states that this is always at least as large as the chromatic number. The Hadwiger conjecture in combinatorial geometry concerns the minimum number of smaller copies of a convex body needed to cover the body, or equivalently the minimum number of light sources needed to illuminate the surface of the body; for instance, in three dimensions, it is known that any convex body can be illuminated by 16 light sources, but Hadwiger's conjecture implies that only eight light sources are always sufficient. The Hadwiger–Kneser–Poulsen conjecture states that, if the centers of a system of balls in Euclidean space are moved closer together, then the volume of the union of the balls cannot increase. It has been proven in the plane, but remains open in higher dimensions. The Hadwiger–Nelson problem concerns the minimum number of colors needed to color the points of the Euclidean plane so that no two points at unit distance from each other are given the same color. It was first proposed by Edward Nelson in 1950. Hadwiger popularized it by including it in a problem collection in 1961; already in 1945 he had published a related result, showing that any cover of the plane by five congruent closed sets contains a unit distance in one of the sets. Other mathematical contributions Hadwiger proved a theorem characterizing eutactic stars, systems of points in Euclidean space formed by orthogonal projection of higher-dimensional cross polytopes. He found a higher-dimensional generalization of the space-filling Hill tetrahedra. 
And his 1957 book Vorlesungen über Inhalt, Oberfläche und Isoperimetrie was foundational for the theory of Minkowski functionals, used in mathematical morphology. Cryptographic work Hadwiger was one of the principal developers of a Swiss rotor machine for encrypting military communications, known as NEMA. The Swiss, fearing that the Germans and Allies could read messages transmitted on their Enigma cipher machines, enhanced the system by using ten rotors instead of five. The system was used by the Swiss army and air force between 1947 and 1992. Awards and honors Asteroid 2151 Hadwiger, discovered in 1977 by Paul Wild, is named after Hadwiger. The first article in the "Research Problems" section of the American Mathematical Monthly was dedicated by Victor Klee to Hadwiger, on the occasion of his 60th birthday, in honor of Hadwiger's work editing a column on unsolved problems in the journal Elemente der Mathematik. Selected works Books Altes und Neues über konvexe Körper, Birkhäuser 1955 Vorlesungen über Inhalt, Oberfläche und Isoperimetrie, Springer, Grundlehren der mathematischen Wissenschaften, 1957 with H. Debrunner, V. Klee Combinatorial Geometry in the Plane, Holt, Rinehart and Winston, New York 1964; Dover reprint 2015 Articles "Über eine Klassifikation der Streckenkomplexe", Vierteljahresschrift der Naturforschenden Gesellschaft Zürich, vol. 88, 1943, pp. 133–143 (Hadwiger's conjecture in graph theory) with Paul Glur Zerlegungsgleichheit ebener Polygone, Elemente der Math, vol. 6, 1951, pp. 97-106 Ergänzungsgleichheit k-dimensionaler Polyeder, Math. Zeitschrift, vol. 55, 1952, pp. 292-298 Lineare additive Polyederfunktionale und Zerlegungsgleichheit, Math. Z., vol. 58, 1953, pp. 4-14 Zum Problem der Zerlegungsgleichheit k-dimensionaler Polyeder, Mathematische Annalen vol. 127, 1954, pp. 170–174 References 1908 births 1981 deaths Modern cryptographers 20th-century Swiss mathematicians Scientists from Karlsruhe Scientists from Bern University of Bern alumni Combinatorialists Geometers German emigrants to Switzerland
Hugo Hadwiger
[ "Mathematics" ]
1,253
[ "Combinatorialists", "Geometers", "Geometry", "Combinatorics" ]
630,489
https://en.wikipedia.org/wiki/Theories%20of%20general%20anaesthetic%20action
A general anaesthetic (or anesthetic) is a drug that brings about a reversible loss of consciousness. These drugs are generally administered by an anaesthetist/anesthesiologist to induce or maintain general anaesthesia to facilitate surgery. General anaesthetics have been widely used in surgery since 1842 when Crawford Long for the first time administered diethyl ether to a patient and performed a painless operation. It has long been believed that general anaesthetics exert their effects (analgesia, unconsciousness, immobility) through a membrane mediated mechanism or by directly modulating the activity of membrane proteins in the neuronal membrane. In general, different anaesthetics exhibit different mechanisms of action such that there are numerous non-exclusionary molecular targets at all levels of integration within the central nervous system. However, for certain intravenous anaesthetics, such as propofol and etomidate, the main molecular target is believed to be GABAA receptor, with particular β subunits playing a crucial role. The concept of specific interactions between receptors and drugs first introduced by Paul Ehrlich in 1897 states that drugs act only when they are bound to their targets (receptors). The identification of concrete molecular targets for general anaesthetics was made possible only with the modern development of molecular biology techniques for single amino acid mutations in proteins of genetically engineered mice. Lipid solubility-anaesthetic potency correlation (the Meyer-Overton correlation) A nonspecific mechanism of general anaesthetic action was first proposed by Emil Harless and Ernst von Bibra in 1847. They suggested that general anaesthetics may act by dissolving in the fatty fraction of brain cells and removing fatty constituents from them, thus changing activity of brain cells and inducing anaesthesia. In 1899 Hans Horst Meyer published the first experimental evidence of the fact that anaesthetic potency is related to lipid solubility. Two years later a similar theory was published independently by Charles Ernest Overton. Meyer compared the potency of many agents, defined as the reciprocal of the molar concentration required to induce anaesthesia in tadpoles, with their olive oil/water partition coefficient. He found a nearly linear relationship between potency and the partition coefficient for many types of anaesthetic molecules such as alcohols, aldehydes, ketones, ethers, and esters. The anaesthetic concentration required to induce anaesthesia in 50% of a population of animals (the EC50) was independent of the means by which the anaesthetic was delivered, i.e., the gas or aqueous phase. Meyer and Overton had discovered the striking correlation between the physical properties of general anaesthetic molecules and their potency: the greater the lipid solubility of a compound in olive oil, the greater its anaesthetic potency. This correlation is true for a wide range of anaesthetics with lipid solubilities ranging over 4-5 orders of magnitude if olive oil is used as the oil phase. This correlation can be improved considerably in terms of both the quality of the correlation and the increased range of anaesthetics if bulk octanol or a fully hydrated fluid lipid bilayer is used as the "oil" phase. It was also noted that volatile anaesthetics are additive in their effects. (A mixture of a half dose of two different volatile anaesthetics gave the same anaesthetic effect as a full dose of either drug alone.) 
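In quantitative terms, the Meyer-Overton correlation described above is an approximately linear relationship between the logarithm of anaesthetic potency (the reciprocal of the EC50) and the logarithm of the oil/water partition coefficient, holding over several orders of magnitude of lipid solubility. The sketch below shows how such a correlation would be quantified with a simple least-squares fit on log-transformed data. The compound names and numbers are made-up placeholders chosen only to illustrate the shape of the relationship; they are not Meyer's or Overton's measurements.

```python
import math

# Hypothetical, illustrative values only -- NOT historical data.
# partition: olive oil / water partition coefficient
# ec50_mM:   concentration producing anaesthesia in 50% of a population
compounds = {
    "agent A": {"partition": 0.1,    "ec50_mM": 300.0},
    "agent B": {"partition": 1.0,    "ec50_mM": 30.0},
    "agent C": {"partition": 10.0,   "ec50_mM": 3.5},
    "agent D": {"partition": 100.0,  "ec50_mM": 0.4},
    "agent E": {"partition": 1000.0, "ec50_mM": 0.03},
}

# Meyer-Overton: log10(potency) = log10(1 / EC50) rises roughly linearly
# with log10(partition coefficient).
xs = [math.log10(c["partition"]) for c in compounds.values()]
ys = [math.log10(1.0 / c["ec50_mM"]) for c in compounds.values()]

# Ordinary least-squares fit of y = slope * x + intercept.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
sxx = sum((x - mean_x) ** 2 for x in xs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"log10(potency) ~ {slope:.2f} * log10(partition) {intercept:+.2f}")
# A slope near 1 over several orders of magnitude of lipid solubility is the
# signature of a Meyer-Overton-type correlation.
```

A plot of these log-transformed values falling close to a straight line is what the correlation looks like in practice; departures from the line (stereoisomers, non-immobilizers, the cutoff effect) are exactly the anomalies discussed later in this article.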
The best characterized anesthetics site accounting for the Meyer-Overton correlation resides in ordered lipid domains. Anesthetics adhere non-specifically to the surface of a palmitate specific binding site within the lipid membrane, displacing the palmitate from ordered GM1 lipids. The process gives rise to a component of membrane-mediated anesthesia. A similar mechanism was shown for luciferase. The anesthetics bound non-specifically to a hydrophobic surface and out-competed the specific binding of luciferin. However luciferase is not physiologically relevant to vertebrates as it is not endogenously expressed in vertebrates. Early lipid hypotheses of general anaesthetic action From the correlation between lipid solubility and anaesthetic potency, both Meyer and Overton had surmised a unitary mechanism of general anaesthesia. They assumed that solubilization of lipophilic general anaesthetic in lipid bilayer of the neuron causes its malfunction and anaesthetic effect when critical concentration of anaesthetic is reached. Later in 1973 Miller and Smith suggested the critical volume hypothesis also called lipid bilayer expansion hypothesis. They assumed that bulky and hydrophobic anaesthetic molecules accumulate inside the hydrophobic (or lipophilic) regions of neuronal lipid membrane causing its distortion and expansion (thickening) due to volume displacement. Accumulation of critical amounts of anaesthetic causes membrane thickening sufficient to reversibly alter function of membrane ion channels thus providing anaesthetic effect. Actual chemical structure of the anaesthetic agent per se is not important, but its molecular volume plays the major role: the more space within membrane is occupied by anaesthetic - the greater is the anaesthetic effect. Based on this theory, in 1954 Mullins suggested that the Meyer-Overton correlation with potency can be improved if molecular volumes of anaesthetic molecules are taken into account. This theory existed for over 60 years and was supported by experimental fact that increases in atmospheric pressure reverse anaesthetic effect (pressure reversal effect). Then other, physicochemical theories of anaesthetic action emerged that took into account the diverse chemical nature of general anaesthetics and suggested that anaesthetic effect is exerted through some perturbation of the lipid bilayer. Several types of bilayer perturbations were proposed to cause anaesthetic effect, including (1) changes in phase separation, (2) changes in bilayer thickness, (3) changes in order parameters, or (4) changes in curvature elasticity. According to the lateral phase separation theory anaesthetics exert their action by fluidizing nerve membranes to a point when phase separations in the critical lipid regions disappear. This anaesthetic-induced fluidization makes membranes less able to facilitate the conformational changes in proteins that may be the basis for such membrane events as ion gating, synaptic transmitter release, and transmitter binding to receptors. More recent techniques with super resolution imaging show that the anesthetics do not overcome phase separation—the phase separation persists. Rather saturated lipids within the phase separation can undergo a transition from ordered to disordered which is dramatically affected by anesthetics. Nonetheless the concept of proteins moving between phase separated lipids in response to anesthetic has now been shown to be correct. 
All these early lipid theories were generally thought to suffer from four weaknesses (full description with rebuttals see in sections below): Stereoisomers of an anaesthetic drug have very different anaesthetic potency whereas their oil/gas partition coefficients are similar. Certain drugs that are highly soluble in lipids, therefore expected to act as anaesthetics, exert convulsive effect instead (and therefore were called nonimmobilizers). A small increase in body temperature affects membrane density and fluidity as much as general anaesthetics, yet it does not cause anaesthesia. Increasing the chain length in a homologous series of straight-chain alcohols or alkanes increases their lipid solubility, but their anaesthetic potency stops increasing beyond a certain cutoff length. The correlation between lipid solubility and potency of general anaesthetics was thought to be a necessary but insufficient condition for inferring a lipid target site. General anaesthetics could equally well be binding to hydrophobic target sites on proteins in the brain, but given the chemical diversity of anesthetics this would likely need to include more than one site and those sites would not inherently preclude a site in the membrane. For proteins, one reason that more polar general anaesthetics could be less potent is that they have to cross the blood–brain barrier to exert their effect on neurons in the brain. Modern lipid hypothesis There are two modern lipid hypotheses which are non-exclusionary with direct protein binding. The most recent hypothesis postulates that ordered lipids in the plasma membrane contain a structured binding site for the lipid palmitate. It is a lipid binding site within a lipid structure, not a protein structure. Proteins that contain a covalently attached palmitate (palmitoylation) are targeted to the ordered lipids through a specific lipid-lipid interaction. The binding of palmitate to the lipid domain is cholesterol dependent and the cell regulates the protein by nanoscopic localization. Anesthetics work by binding non-specifically to the palmitate binding site, which disrupts the ability of the cholesterol to bind to and sequester the protein into an inactive state. This membrane mediated mechanism was demonstrated experimentally by Pavel and colleagues in 2020. They showed that the enzyme phospholipase D2 (PLD2) is anesthetic sensitive and activates the potassium channel TREK-1 through a membrane mediated mechanism. The anesthetics displaced PLD2 from ordered lipid domains, allowing the enzyme to be activated by substrate presentation and activate the channel. The second lipid hypothesis states that anaesthetic effect happens if solubilization of general anaesthetic in the bilayer causes a redistribution of membrane lateral pressures. Each bilayer membrane has a distinct profile of how lateral pressures are distributed within it. Most membrane proteins (especially ion channels) are sensitive to changes in this lateral pressure distribution profile. These lateral stresses are rather large and vary with depth within the membrane. According to the modern lipid hypothesis a change in the membrane lateral pressure profile shifts the conformational equilibrium of certain membrane proteins known to be affected by clinical concentrations of anaesthetics such as ligand-gated ion channels. 
This mechanism is also nonspecific because the potency of the anaesthetic is determined not by its actual chemical structure, but by the positional and orientational distribution of its segments and bonds within the bilayer. In 1997, Cantor suggested a detailed mechanism of general anaesthesia based on lattice statistical thermodynamics. It was proposed that incorporation of amphiphilic and other interfacially active solutes (e.g. general anaesthetics) into the bilayer increases the lateral pressure selectively near the aqueous interfaces, which is compensated by a decrease in lateral pressure toward the centre of the bilayer. Calculations showed that general anaesthesia likely involves inhibition of the opening of the ion channel in a postsynaptic ligand-gated membrane protein by the following mechanism: a channel tries to open in response to a nerve impulse, thus increasing the cross-sectional area of the protein more near the aqueous interface than in the middle of the bilayer; the anaesthetic-induced increase in lateral pressure near the interface then shifts the protein conformational equilibrium back to the closed state, since channel opening requires greater work against the higher pressure at the interface. This was the first hypothesis that provided not just correlations of potency with structural or thermodynamic properties, but a detailed mechanistic and thermodynamic understanding of anaesthesia. Thus, according to the modern lipid hypothesis, anaesthetics do not act directly on their membrane protein targets, but rather perturb specialized lipid matrices at the protein-lipid interface, which act as mediators. This is a new kind of transduction mechanism, different from the usual lock-and-key interaction of ligand and receptor, in which the anaesthetic (ligand) affects the function of membrane proteins by binding to a specific site on the protein. Thus some membrane proteins are proposed to be sensitive to their lipid environment. A slightly different detailed molecular mechanism of how bilayer perturbation can influence an ion channel was proposed in the same year. Oleamide (the fatty acid amide of oleic acid) is an endogenous anaesthetic found in vivo (in the cat's brain), and it is known to potentiate sleep and lower body temperature by closing gap junction channels (connexons). The proposed mechanism is that the well-ordered lipid/cholesterol ring that exists around the connexon becomes disordered on treatment with anaesthetic, promoting closure of the connexon ion channel. This decreases brain activity and induces lethargy and an anaesthetic effect. Recently, super-resolution imaging provided direct experimental evidence that volatile anaesthetics disrupt the ordered lipid domains as predicted. In the same study, a related mechanism emerged in which the anaesthetics released the enzyme phospholipase D (PLD) from lipid domains, and the enzyme bound to and activated TREK-1 channels through the production of phosphatidic acid. These results showed experimentally that the membrane is a physiologically relevant target of general anaesthetics. Membrane protein hypothesis of general anaesthetic action In the early 1980s, Nicholas P. Franks and William R. Lieb demonstrated that the Meyer-Overton correlation can be reproduced using a soluble protein. They found that two classes of proteins are inactivated by clinical doses of anaesthetic in the total absence of lipids.
These are luciferases, which are used by bioluminescent animals and bacteria to produce light, and cytochrome P450, which is a group of heme proteins that hydroxylate a diverse group of compounds, including fatty acids, steroids, and xenobiotics such as phenobarbital. Remarkably, inhibition of these proteins by general anaesthetics was directly correlated with their anaesthetic potencies. Luciferase inhibition also exhibits a long-chain alcohol cutoff, which is related to the size of the anaesthetic-binding pocket. These observations were important because they demonstrated that general anaesthetics exert their effect non-specifically including when binding to proteins. This also opened up the possibility that anesthetics could work through direct binding to proteins, rather than affect membrane proteins indirectly through nonspecific interactions with lipid bilayer as mediator. It was shown that anaesthetics alter the functions of many cytoplasmic signalling proteins, including protein kinase C. However, the proteins considered the most likely molecular targets of anaesthetics are ion channels. According to this theory general anaesthetics are much more selective than in the lipid hypotheses, and they bind directly only to a small number of targets in the central nervous system, mostly ligand-gated ion channels in synapses and G-protein coupled receptors, altering their ion flux. Particularly Cys-loop receptors are plausible targets for general anaesthetics that bind at the interface between the subunits. The Cys-loop receptor superfamily includes inhibitory receptors (GABAA receptors, GABAC receptors, glycine receptors) and excitatory receptors (nicotinic acetylcholine receptor and 5-HT3 serotonin receptor). General anaesthetics can inhibit the channel functions of excitatory receptors or potentiate functions of inhibitory receptors, respectively. The location of non-specific binding sites in ion channels is still a major question in the field. In particular, how does a compound that follows Overton-Meyer directly cause a conformational change in the protein? Typically, allosteric regulation involves a change in protein shape that accommodates the binding of the ligand. This mechanism is distinct from the luciferase mechanism. A second important question, how are the non-specific protein binding sites conserved across species and why do they generally inhibit excitatory receptors and potentiate inhibitory receptors? A number of experimental and computational studies have shown that general anaesthetics could alter the dynamics in the flexible loops that connect α-helices in a bundle and are exposed to the membrane-water interface of Cys-loop receptors. The main binding pockets of general anaesthetics, however, are located within transmembrane four-α-helix bundles of Cys-loop receptors. GABAA receptor is a major target of general anesthetics The GABAA receptor (GABAAR) is an ionotropic receptor activated by the inhibitory neurotransmitter γ-aminobutyric acid (GABA). Activation of the GABAA receptor leads to an influx of chloride ions, which causes hyperpolarization of the neuronal membranes. The GABAA receptor has been identified as the main target of intravenous anesthetics such as propofol and etomidate. The binding site of propofol on mammalian GABAA receptors has been identified by photolabeling using a diazirine derivative. 
Strong activation of tonic GABAA receptor conductance by clinical concentrations of propofol has been confirmed with electrophysiological recordings from hippocampal CA1 neurons in adult rat brain slices. GABAA receptors that contain β3-subunits are the main molecular targets for the anesthetic actions of etomidate, whereas the β2-containing GABAA receptors are involved in the sedation elicited by this drug. Electrophysiological experiments with amnestic concentrations of etomidate have also shown enhancement of tonic GABAA conductance of CA1 pyramidal neurons in hippocampal slices. Potent activation of GABAA receptor-mediated inhibition with resulting strong depression of firing rates of neocortical neurons has also been demonstrated for clinical concentrations of volatile anesthetics such as isoflurane, enflurane and halothane. Other molecular targets The enhancement of GABAA receptor activity is unlikely to be the only mechanism to account for the wide range of behavioural effects of general anaesthetics. Accumulating experimental data suggests that modulation of two-pore domain potassium channels, or voltage-gated sodium channels may also account for some of the actions of volatile anaesthetic agents. Alternatively, inhibition of glutamate-gated N-methyl-D-aspartate receptors by ketamine, xenon, and nitrous oxide provides a mechanism of action in keeping with a predominant analgesic profile. Historical objections to the early lipid hypotheses 1. Stereoisomers of an anaesthetic drug Stereoisomers that represent mirror images of each other are termed enantiomers or optical isomers (for example, the isomers of R-(+)- and S-(−)-etomidate). Physicochemical effects of enantiomers are always identical in an achiral environment (for example in the lipid bilayer). However, in vivo enantiomers of many general anaesthetics (e.g. isoflurane, thiopental, etomidate) can differ greatly in their anaesthetic potency despite the similar oil/gas partition coefficients. For example, the R-(+) isomer of etomidate is 10 times more potent anaesthetic than its S-(-) isomer. This means that optical isomers partition identically into lipid, but have differential effects on ion channels and synaptic transmission. This objection provides a compelling evidence that the primary target for anaesthetics is not the achiral lipid bilayer itself but rather stereoselective binding sites on membrane proteins that provide a chiral environment for specific anaesthetic-protein docking interactions. Rebuttal to the objection: 1) Stereo selective transport of the anesthetic was never considered. Anesthetics are hydrophobic and transported bound to proteins in the blood. Any stereo selective binding to the transport protein would change the concentration at the site of action. Furthermore a protein sink in the membrane could bind one of the isomers slightly better and reduce the effective concentration the membrane experiences. All stereo isomers are effective anesthetics, they only shifted the sensitivity, suggesting selective transport and selective protein sinks need to be considered. 2) Lipids are chiral, the same as proteins. And like proteins, lipids have ordered and disordered regions. The field failed to investigate the chirality of ordered lipids due to a lack of knowledge of their existence. 2. Nonimmobilizers All general anaesthetics induce immobilization (absence of movement in response to noxious stimuli) through depression of spinal cord functions, whereas their amnesic actions are exerted within the brain. 
According to the Meyer-Overton correlation the anaesthetic potency of the drug is directly proportional to its lipid solubility, however, there are many compounds that do not satisfy this rule. These drugs are strikingly similar to potent general anaesthetics and are predicted to be potent anaesthetics based on their lipid solubility, but they exert only one constituent of the anaesthetic action (amnesia) and do not suppress movement (i.e. do not depress spinal cord functions) as all anaesthetics do. These drugs are referred to as nonimmobilizers. The existence of nonimmobilizers suggests that anaesthetics induce different components of anaesthetic effect (amnesia and immobility) by affecting different molecular targets and not just the one target (neuronal bilayer) as it was believed earlier. Good example of non-immobilizers are halogenated alkanes that are very hydrophobic, but fail to suppress movement in response to noxious stimulation at appropriate concentrations. See also: flurothyl. Rebuttal to the objection: This is a logical fallacy. The hypothesis does not require that every molecule ever tested obeys the hypothesis for the hypothesis to be true. The existence of less than 10-20 related compounds that are known to disobey the Meyer-Overton hypothesis in no way negates the hundreds if not thousands of chemically diverse compounds that do obey the Overton-Meyer hypothesis. Exceptions can exist for reasons unrelated to the mechanism underlying the Meyer-Overton hypothesis. 3. Temperature increases do not have anaesthetic effect Experimental studies have shown that general anaesthetics including ethanol are potent fluidizers of natural and artificial membranes. However, changes in membrane density and fluidity in the presence of clinical concentrations of general anaesthetics are so small that relatively small increases in temperature (~1 °C) can mimic them without causing anaesthesia. The change in body temperature of approximately 1 °C is within the physiological range and clearly it is not sufficient to induce loss of consciousness per se. Thus membranes are fluidized only by large quantities of anaesthetics, but there are no changes in membrane fluidity when concentrations of anaesthetics are small and restricted to be pharmacologically relevant. Rebuttal to the objection: Early studies only considered the fluidity of the bulk lipid membrane. Recent work has shown that temperature changes can occur over several degrees in ordered nanoscopic lipid domains. Furthermore, fluidity is actively regulated by fatty acid desaturases. And lastly, competition of anesthetics with palmitoylated proteins occurs independent of temperature and despite increased ordered lipids. 4. Effect vanishes beyond a certain chain length According to the Meyer-Overton correlation, in a homologous series of any general anaesthetic (e.g. n-alcohols, or alkanes), increasing the chain length increases the lipid solubility, and thereby should produce a corresponding increase in anaesthetic potency. However, beyond a certain chain length the anaesthetic effect disappears. For the n-alcohols, this cutoff occurs at a carbon chain length of about 13 and for the n-alkanes at a chain length of between 6 and 10, depending on the species. If general anaesthetics disrupt ion channels by partitioning into and perturbing the lipid bilayer, then one would expect that their solubility in lipid bilayers would also display the cutoff effect. 
However, partitioning of alcohols into lipid bilayers does not display a cutoff for long-chain alcohols from n-decanol to n-pentadecanol. A plot of chain length vs. the logarithm of the lipid bilayer/buffer partition coefficient K is linear, with the addition of each methylene group causing a change in the Gibbs free energy of -3.63 kJ/mol. The cutoff effect was first interpreted as evidence that anaesthetics exert their effect not by acting globally on membrane lipids but rather by binding directly to hydrophobic pockets of well-defined volumes in proteins. As the alkyl chain grows, the anaesthetic fills more of the hydrophobic pocket and binds with greater affinity. When the molecule is too large to be entirely accommodated by the hydrophobic pocket, the binding affinity no longer increases with increasing chain length. Thus the volume of the n-alkanol chain at the cutoff length provides an estimate of the binding-site volume. This objection provided the basis for the protein hypothesis of anaesthetic effect (see above). However, the cutoff effect can still be explained within the framework of the lipid hypothesis. In short-chain alkanols, segments of the chain are rather rigid (in terms of conformational entropy) and lie very close to the hydroxyl group tethered to the aqueous interfacial region (the "buoy"). Consequently, these segments efficiently redistribute lateral stresses from the bilayer interior toward the interface. In long-chain alkanols, the hydrocarbon chain segments are located further from the hydroxyl group and are more flexible than in short-chain alkanols. The efficiency of pressure redistribution decreases as the length of the hydrocarbon chain increases, until anaesthetic potency is lost at some point. It was proposed that polyalkanols would have an anaesthetic effect similar to short-chain 1-alkanols if the chain length between two neighbouring hydroxyl groups were smaller than the cutoff. This idea was supported by experimental evidence: the polyhydroxyalkanes 1,6,11,16-hexadecanetetraol and 2,7,12,17-octadecanetetraol exhibited significant anaesthetic potency, as originally proposed. Rebuttal to the objection: The argument assumes that all classes of anaesthetics must act on the membrane in the same way. It is quite possible that one or two classes of molecules work through a non-membrane-mediated mechanism. For example, alcohols were shown to incorporate into the lipid membrane via an enzymatic transphosphatidylation reaction. The ethanol metabolite bound to and inhibited an anaesthetic-sensitive channel. And while this mechanism may contradict a single unitary mechanism of anaesthesia, it does not preclude a membrane-mediated one. References Anesthesia Membrane biology
Theories of general anaesthetic action
[ "Chemistry" ]
5,664
[ "Membrane biology", "Molecular biology" ]
630,505
https://en.wikipedia.org/wiki/Algebraic%20geometry%20and%20analytic%20geometry
In mathematics, algebraic geometry and analytic geometry are two closely related subjects. While algebraic geometry studies algebraic varieties, analytic geometry deals with complex manifolds and the more general analytic spaces defined locally by the vanishing of analytic functions of several complex variables. The deep relation between these subjects has numerous applications in which algebraic techniques are applied to analytic spaces and analytic techniques to algebraic varieties. Main statement Let X be a projective complex algebraic variety. Because X is a complex variety, its set of complex points X(C) can be given the structure of a compact complex analytic space. This analytic space is denoted Xan. Similarly, if $\mathcal{F}$ is a sheaf on X, then there is a corresponding sheaf $\mathcal{F}^{\mathrm{an}}$ on Xan. This association of an analytic object to an algebraic one is a functor. The prototypical theorem relating X and Xan says that for any two coherent sheaves $\mathcal{F}$ and $\mathcal{G}$ on X, the natural homomorphism $\operatorname{Hom}_{\mathcal{O}_X}(\mathcal{F},\mathcal{G}) \to \operatorname{Hom}_{\mathcal{O}_{X^{\mathrm{an}}}}(\mathcal{F}^{\mathrm{an}},\mathcal{G}^{\mathrm{an}})$ is an isomorphism. Here $\mathcal{O}_X$ is the structure sheaf of the algebraic variety X and $\mathcal{O}_{X^{\mathrm{an}}}$ is the structure sheaf of the analytic variety Xan. More precisely, the category of coherent sheaves on the algebraic variety X is equivalent to the category of analytic coherent sheaves on the analytic variety Xan, and the equivalence is given on objects by mapping $\mathcal{F}$ to $\mathcal{F}^{\mathrm{an}}$. (Note in particular that $\mathcal{O}_{X^{\mathrm{an}}}$ itself is coherent, a result known as the Oka coherence theorem; it was also proved in "Faisceaux algébriques cohérents" that the structure sheaf $\mathcal{O}_X$ of the algebraic variety is coherent.) Another important statement is as follows: for any coherent sheaf $\mathcal{F}$ on an algebraic variety X the homomorphisms $\varepsilon_q \colon H^q(X,\mathcal{F}) \to H^q(X^{\mathrm{an}},\mathcal{F}^{\mathrm{an}})$ are isomorphisms for all q. This means that the q-th cohomology group of $\mathcal{F}$ on X is isomorphic to the q-th cohomology group of $\mathcal{F}^{\mathrm{an}}$ on Xan. The theorem applies much more generally than stated above (see the formal statement below). It and its proof have many consequences, such as Chow's theorem, the Lefschetz principle and the Kodaira vanishing theorem. Background Algebraic varieties are locally defined as the common zero sets of polynomials and, since polynomials over the complex numbers are holomorphic functions, algebraic varieties over C can be interpreted as analytic spaces. Similarly, regular morphisms between varieties are interpreted as holomorphic mappings between analytic spaces. Somewhat surprisingly, it is often possible to go the other way, to interpret analytic objects in an algebraic way. For example, it is easy to prove that the analytic functions from the Riemann sphere to itself are either the rational functions or the identically infinity function (an extension of Liouville's theorem). For if such a function f is nonconstant, then since the set of z where f(z) is infinity is isolated and the Riemann sphere is compact, there are finitely many z with f(z) equal to infinity. Consider the Laurent expansion at all such z and subtract off the singular part: we are left with a function on the Riemann sphere with values in C, which by Liouville's theorem is constant. Thus f is a rational function. This fact shows there is no essential difference between the complex projective line as an algebraic variety, or as the Riemann sphere. Important results There is a long history of comparison results between algebraic geometry and analytic geometry, beginning in the nineteenth century. Some of the more important advances are listed here in chronological order.
Riemann's existence theorem Riemann surface theory shows that a compact Riemann surface has enough meromorphic functions on it, making it a (smooth projective) algebraic curve. Under the name Riemann's existence theorem a deeper result on ramified coverings of a compact Riemann surface was known: such finite coverings as topological spaces are classified by permutation representations of the fundamental group of the complement of the ramification points. Since the Riemann surface property is local, such coverings are quite easily seen to be coverings in the complex-analytic sense. It is then possible to conclude that they come from covering maps of algebraic curves—that is, such coverings all come from finite extensions of the function field. The Lefschetz principle In the twentieth century, the Lefschetz principle, named for Solomon Lefschetz, was cited in algebraic geometry to justify the use of topological techniques for algebraic geometry over any algebraically closed field K of characteristic 0, by treating K as if it were the complex number field. An elementary form of it asserts that true statements of the first-order theory of fields about C are true for any algebraically closed field K of characteristic zero. A precise principle and its proof are due to Alfred Tarski and are based in mathematical logic. This principle permits the carrying over of some results obtained using analytic or topological methods for algebraic varieties over C to other algebraically closed ground fields of characteristic 0 (e.g. Kodaira-type vanishing theorems). Chow's theorem Chow's theorem, proved by Wei-Liang Chow, is an example of the most immediately useful kind of comparison available. It states that an analytic subspace of complex projective space that is closed (in the ordinary topological sense) is an algebraic subvariety. This can be rephrased as "any analytic subspace of complex projective space that is closed in the strong topology is closed in the Zariski topology." This allows quite a free use of complex-analytic methods within the classical parts of algebraic geometry. GAGA Foundations for the many relations between the two theories were put in place during the early part of the 1950s, as part of the business of laying the foundations of algebraic geometry to include, for example, techniques from Hodge theory. The major paper consolidating the theory was by Jean-Pierre Serre, now usually referred to as GAGA. It proves general results that relate classes of algebraic varieties, regular morphisms and sheaves with classes of analytic spaces, holomorphic mappings and sheaves. It reduces all of these to the comparison of categories of sheaves. Nowadays the phrase GAGA-style result is used for any theorem of comparison, allowing passage between a category of objects from algebraic geometry, and their morphisms, to a well-defined subcategory of analytic geometry objects and holomorphic mappings. Formal statement of GAGA Let X be a scheme of finite type over C. Then there is a topological space Xan that as a set consists of the closed points of X, with a continuous inclusion map λX: Xan → X. The topology on Xan is called the "complex topology" (and is very different from the subspace topology). Suppose φ: X → Y is a morphism of schemes of locally finite type over C. Then there exists a continuous map φan: Xan → Yan such that λY ∘ φan = φ ∘ λX. There is a sheaf $\mathcal{O}_{X^{\mathrm{an}}}$ on Xan such that $(X^{\mathrm{an}}, \mathcal{O}_{X^{\mathrm{an}}})$ is a ringed space and λX: Xan → X becomes a map of ringed spaces. The space $(X^{\mathrm{an}}, \mathcal{O}_{X^{\mathrm{an}}})$ is called the "analytification" of $(X, \mathcal{O}_X)$ and is an analytic space.
For every φ: X → Y the map φan defined above is a mapping of analytic spaces. Furthermore, the map φ ↦ φan maps open immersions into open immersions. If X = Spec(C[x1,...,xn]) then Xan = Cn and, for every polydisc U, $\mathcal{O}_{X^{\mathrm{an}}}(U)$ is a suitable quotient of the space of holomorphic functions on U. For every sheaf $\mathcal{F}$ on X (called an algebraic sheaf) there is a sheaf $\mathcal{F}^{\mathrm{an}}$ on Xan (called the analytic sheaf) and a map of sheaves of $\mathcal{O}_X$-modules $\lambda_X^{*}\colon \mathcal{F} \to (\lambda_X)_{*}\mathcal{F}^{\mathrm{an}}$. The sheaf $\mathcal{F}^{\mathrm{an}}$ is defined as $\lambda_X^{-1}\mathcal{F} \otimes_{\lambda_X^{-1}\mathcal{O}_X} \mathcal{O}_{X^{\mathrm{an}}}$. The correspondence $\mathcal{F} \mapsto \mathcal{F}^{\mathrm{an}}$ defines an exact functor from the category of sheaves over $(X, \mathcal{O}_X)$ to the category of sheaves over $(X^{\mathrm{an}}, \mathcal{O}_{X^{\mathrm{an}}})$. The following two statements are the heart of Serre's GAGA theorem (as extended by Alexander Grothendieck, Amnon Neeman, and others). If f: X → Y is an arbitrary morphism of schemes of finite type over C and $\mathcal{F}$ is coherent then the natural map $(f_{*}\mathcal{F})^{\mathrm{an}} \to f_{*}^{\mathrm{an}}\mathcal{F}^{\mathrm{an}}$ is injective. If f is proper then this map is an isomorphism. One also has isomorphisms of all higher direct image sheaves $(R^{q}f_{*}\mathcal{F})^{\mathrm{an}} \cong R^{q}f_{*}^{\mathrm{an}}\mathcal{F}^{\mathrm{an}}$ in this case. Now assume that Xan is Hausdorff and compact. If $\mathcal{F}$ and $\mathcal{G}$ are two coherent algebraic sheaves on $(X, \mathcal{O}_X)$ and if $f\colon \mathcal{F}^{\mathrm{an}} \to \mathcal{G}^{\mathrm{an}}$ is a map of sheaves of $\mathcal{O}_{X^{\mathrm{an}}}$-modules then there exists a unique map of sheaves of $\mathcal{O}_X$-modules $\varphi\colon \mathcal{F} \to \mathcal{G}$ with $f = \varphi^{\mathrm{an}}$. If $\mathcal{R}$ is a coherent analytic sheaf of $\mathcal{O}_{X^{\mathrm{an}}}$-modules over Xan then there exists a coherent algebraic sheaf $\mathcal{F}$ of $\mathcal{O}_X$-modules and an isomorphism $\mathcal{F}^{\mathrm{an}} \cong \mathcal{R}$. In slightly lesser generality, the GAGA theorem asserts that the category of coherent algebraic sheaves on a complex projective variety X and the category of coherent analytic sheaves on the corresponding analytic space Xan are equivalent. The analytic space Xan is obtained roughly by pulling back to X the complex structure from Cn through the coordinate charts. Indeed, phrasing the theorem in this manner is closer in spirit to Serre's paper, seeing how the full scheme-theoretic language that the above formal statement uses heavily had not yet been invented by the time of GAGA's publication. See also Flat module - The notion of flatness was introduced by Serre in this paper. Algebraic and analytic local rings have the same completion, and thereby they become a "flat couple" (couple plat). Notes References External links Kiran Kedlaya. 18.726 Algebraic Geometry (LEC # 30 - 33 GAGA) Spring 2009. Massachusetts Institute of Technology: MIT OpenCourseWare Creative Commons BY-NC-SA. Complex geometry
Algebraic geometry and analytic geometry
[ "Mathematics" ]
1,957
[ "Fields of abstract algebra", "Algebraic geometry" ]
630,611
https://en.wikipedia.org/wiki/Suicide%20gene
In the field of genetics, a suicide gene is a gene that will cause a cell to kill itself through the process of apoptosis (programmed cell death). Activation of a suicide gene can cause death through a variety of pathways, but one important cellular "switch" to induce apoptosis is the p53 protein. Stimulation or introduction (through gene therapy) of suicide genes is a potential way of treating cancer or other proliferative diseases. Suicide genes form the basis of a strategy for making cancer cells more vulnerable or sensitive to chemotherapy. The approach has been to attach parts of genes expressed in cancer cells to other genes for enzymes not found in mammals that can convert a harmless substance into one that is toxic to the tumor. Most suicide genes mediate this sensitivity by coding for viral or bacterial enzymes that convert an inactive drug into toxic antimetabolites that inhibit the synthesis of nucleic acid. Suicide genes must be introduced into the cells in ways that ensure their uptake and expression by as many cancer cells as possible, while limiting their expression by normal cells. Suicide gene therapy for cancer therefore requires the vector to have the capacity to discriminate between target and non-target cells, that is, between the cancer cells and normal cells. Apoptosis Cell death occurs mainly by either necrosis or apoptosis. Necrosis occurs when a cell is damaged by an external force, such as poison, a bodily injury, an infection or loss of blood supply. When cells die from necrosis, the process is disorderly: the death causes inflammation that can lead to further distress or injury within the body. Apoptosis, by contrast, causes degradation of cellular components without eliciting an inflammatory response. Many cells undergo programmed cell death, or apoptosis, during fetal development. It is a form of cell death in which a programmed sequence of events leads to the elimination of cells without releasing harmful substances into the surrounding tissue. Apoptosis plays a crucial role in developing and maintaining the health of the body by eliminating old cells, unnecessary cells, and unhealthy cells. The human body replaces perhaps one million cells per second. When a cell is compelled to commit suicide, proteins called caspases go into action. They break down the cellular components needed for survival, and they spur production of enzymes known as DNases, which destroy the DNA in the nucleus of the cell. The cell shrinks and sends out distress signals, which are answered by macrophages. The macrophages clean away the shrunken cells, leaving no trace, so these cells do not damage surrounding tissue as necrotic cells do. Apoptosis is also essential to prenatal development. For example, in embryos, fingers and toes are initially connected to adjacent digits by tissue. The cells of this connecting tissue undergo apoptosis to produce separate digits. In brain development, initially millions of extra neurons are created. The cells that don't form synaptic connections undergo apoptosis. Programmed cell death is also necessary to start the process of menstruation. That is not to say that apoptosis is a perfect process. Rather than dying due to injury, cells that go through apoptosis die in response to signals within the body. When cells recognize viruses and gene mutations, they may induce death to prevent the damage from spreading. Scientists are trying to learn how to modulate apoptosis, so that they can control which cells live and which undergo programmed cell death.
Anti-cancer drugs and radiation, for example, work by triggering apoptosis in diseased cells. Many diseases and disorders are linked with the life and death of cells: increased apoptosis is a characteristic of AIDS, Alzheimer's, and Parkinson's disease, while decreased apoptosis can signal lupus or cancer. Understanding how to regulate apoptosis could be the first step to treating these conditions. Too little or too much apoptosis can play a role in many diseases. When apoptosis does not work correctly, cells that should be eliminated may persist and become immortal, for example in cancer and leukemia. When apoptosis works overly well, it kills too many cells and inflicts grave tissue damage. This is the case in strokes and neurodegenerative disorders such as Alzheimer's, Huntington's, and Parkinson's disease. Apoptosis is also known as programmed cell death and cell suicide. Applications Cancer suicide gene therapy The ultimate goal of cancer therapy is the complete elimination of all cancer cells, while leaving all healthy cells unharmed. One of the most promising therapeutic strategies in this regard is cancer suicide gene therapy (CSGT), which is rapidly progressing into new frontiers. Therapeutic success in CSGT is primarily contingent upon precision in delivery of the therapeutic transgenes to the cancer cells only. This is addressed by discovering and targeting unique and/or over-expressed biomarkers displayed on cancer cells and cancer stem cells. Specificity of the therapeutic effect is further enhanced by designing DNA constructs that put the therapeutic genes under the control of cancer cell-specific promoters. The delivery of the suicide genes to the cancer cells involves viral as well as synthetic vectors, which are guided by cancer-specific antibodies and ligands. The delivery options also include engineered stem cells with tropisms towards cancers. The main mechanisms for inducing cancer cell death include transgenic expression of thymidine kinases, cytosine deaminases, intracellular antibodies, telomerases, caspases, and DNases. Precautions are undertaken to eliminate the risks associated with transgenesis. Progress in genomics and proteomics should help identify cancer-specific biomarkers and metabolic pathways for developing new strategies towards clinical trials of targeted and personalized gene therapy of cancer. Introducing the gene into a malignant tumor can reduce the tumor in size and possibly eliminate it completely, provided all the individual cells receive a copy of the gene. When the DNA sample in the virus is taken from the patient's own healthy cells, the virus does not need to be able to differentiate between cancer cells and healthy ones. An additional advantage is that this approach can also prevent metastasis upon the death of a tumor. As a cancer treatment One of the challenges of cancer treatment is how to destroy malignant tumors without damaging healthy cells. A new method that shows great promise for accomplishing this employs a suicide gene, a gene which will cause a cell to kill itself through apoptosis. Suicide gene therapy involves delivery of a gene which codes for a cytotoxic product into tumor cells. This can be achieved by two approaches, indirect gene therapy and direct gene therapy. Indirect gene therapy employs an enzyme-activated prodrug: the gene coding for the enzyme is delivered to the tumor cells, and the enzyme then converts the prodrug to a toxic substance.
For example, a commonly studied strategy is based on transfection of the herpes simplex virus thymidine kinase (HSV-TK) gene along with administration of ganciclovir (GCV); HSV-TK converts GCV into a toxic compound that inhibits DNA synthesis and causes cell death. Direct gene therapy, by contrast, employs a toxin gene or a gene which has the ability to correct mutated proapoptotic genes, which can in turn induce cell death via apoptosis. For instance, the most researched immunotoxin for cancer therapy is diphtheria toxin, which inhibits protein synthesis by inactivating elongation factor 2 (EF-2) and thereby blocks protein translation. Moreover, p53 has been identified as frequently abnormal in human tumors, and studies show that restoring the function of p53 can cause apoptosis of cancer cells. Suicide gene therapy is not necessarily expected to eliminate the need for chemotherapy and radiation treatment for all cancerous tumors. The damage inflicted upon the tumor cells, however, makes them more susceptible to the chemotherapy or radiation. This approach has already proven effective against prostate and bladder cancers. The application of suicide gene therapy is being expanded to several other forms of cancer as well. Cancer patients often experience depressed immune systems, so they can suffer some side effects of the use of a virus as a delivery agent. Improved vectors Suicide gene delivery can be broadly classified into three groups: viral vectors, synthetic vectors and cell-based vectors. The most efficient vehicles for gene delivery are viral vectors. Widely used viruses for gene therapy include retroviruses, adenoviruses (Ads), lentiviruses and adeno-associated viruses (AAVs). Non-viral (synthetic) vectors were developed to combat certain disadvantages of viral vectors, such as immunogenicity and insertional mutagenesis. Synthetic vectors use nanoparticles, such as gold nanoparticles, to deliver genes to target cells. Lastly, cell-based vectors employ stem cells as carriers of suicide genes. In the last few years, cell-mediated gene therapy for cancer using mesenchymal stem cells (MSCs) has been patented. Bystander effect The bystander effect (BE) is a phenomenon by which untransfected tumor cells located adjacent to transduced cells can also be killed in suicide gene therapy. As one hundred percent transduction of all tumor cells is very difficult to achieve, BE is a critical feature of suicide gene therapy. Limitations The drug is supposed to show high specificity towards cancer in order to be effective, but studies have shown that this is rarely achieved. Moreover, expression of the suicide gene has been placed under the control of tumor-specific promoters such as human telomerase (hTERT), osteocalcin and carcinoembryonic antigen; however, only the hTERT promoter has entered clinical trials. This is largely because of the low transcriptional power of these tumor-specific promoters for suicide gene expression. Additionally, poor accessibility to target cells is an important limitation of suicide gene therapy. Another major hurdle is only partial vector specificity for the targeted cells. Finally, there is a lack of specific animal models to predict the clinical outcome and other effects of suicide gene therapy (SGT). Biotechnology Suicide genes are often utilized in biotechnology to assist in molecular cloning. Vectors incorporate suicide genes that are lethal to the host organism (such as E. coli). The cloning procedure aims to replace the suicide gene with the desired fragment.
Selection of vectors carrying the desired fragment is improved since vectors retaining the suicide gene result in cell death. References Genes Programmed cell death Cellular senescence
Suicide gene
[ "Chemistry", "Biology" ]
2,141
[ "Signal transduction", "Senescence", "Cellular senescence", "Cellular processes", "Programmed cell death" ]
630,668
https://en.wikipedia.org/wiki/Anaesthetic%20machine
An anaesthetic machine (British English) or anesthesia machine (American English) is a medical device used to generate and mix a fresh gas flow of medical gases and inhalational anaesthetic agents for the purpose of inducing and maintaining anaesthesia. The machine is commonly used together with a mechanical ventilator, breathing system, suction equipment, and patient monitoring devices; strictly speaking, the term "anaesthetic machine" refers only to the component which generates the gas flow, but modern machines usually integrate all these devices into one combined freestanding unit, which is colloquially referred to as the "anaesthetic machine" for the sake of simplicity. In the developed world, the most frequent type in use is the continuous-flow anaesthetic machine or "Boyle's machine", which is designed to provide an accurate supply of medical gases mixed with an accurate concentration of anaesthetic vapour, and to deliver this continuously to the patient at a safe pressure and flow. This is distinct from intermittent-flow anaesthetic machines, which provide gas flow only on demand when triggered by the patient's own inspiration. Simpler anaesthetic apparatus may be used in special circumstances, such as the triservice anaesthetic apparatus, a simplified anaesthesia delivery system invented for the British Defence Medical Services, which is light and portable and may be used for ventilation even when no medical gases are available. This device has unidirectional valves which suck in ambient air, which can be enriched with oxygen from a cylinder, with the help of a set of bellows. History The original concept of continuous-flow machines was popularised by Boyle's anaesthetic machine, invented by the British anaesthetist Henry Boyle at St Bartholomew's Hospital in London, United Kingdom, in 1917, although similar machines had been in use in France and the United States. Prior to this time, anaesthesiologists often carried all their equipment with them, but the development of heavy, bulky cylinder storage and increasingly elaborate airway equipment meant that this was no longer practical for most circumstances. Contemporary anaesthetic machines are sometimes still referred to metonymously as "Boyle's machine", and are usually mounted on anti-static wheels for convenient transportation. Many of the early innovations in anaesthetic equipment in the United States, including the closed circuit carbon-dioxide absorber (a.k.a. the Guedel-Foregger Midget) and diffusion of such equipment to anaesthesiologists within the United States can be attributed to Richard von Foregger and The Foregger Company. Flow rate In anaesthesia, fresh gas flow is the mixture of medical gases and volatile anaesthetic agents which is produced by an anaesthetic machine and has not been recirculated. The flow rate and composition of the fresh gas flow is determined by the anaesthetist. Typically the fresh gas flow emerges from the common gas outlet, a specific outlet on the anaesthetic machine to which the breathing attachment is connected. Open circuit forms of equipment, such as the Magill attachment, require high fresh gas flows (e.g. 7 litres/min) to prevent the patient from rebreathing their own expired carbon dioxide. Recirculating (rebreather) systems, use soda lime to absorb carbon dioxide, in the scrubber, so that expired gas becomes suitable to re-use. With a very efficient recirculation system, the fresh gas flow may be reduced to the patient's minimum oxygen requirements (e.g. 
250ml/min), plus a little volatile as needed to maintain the concentration of anaesthetic agent. Increasing fresh gas flow to a recirculating breathing system can reduce carbon dioxide absorbent consumption. There is a cost/benefit trade-off between gas flow and use of adsorbent material when no inhalational anaesthetic agent is used which may have economic and environmental consequences. High flow anesthesia supplies fresh gas flow which approximates the patient’s minute ventilation, which is usually about 3 to 6 litres per minute in a normal adult. Low flow anesthesia supplies fresh gas flow of less than half the patient's minute ventilation of the patient, which is usually less than 3.0 litres per minute in a normal adult. Minimal flow anesthesia supplies fresh gas flow of about 0.5 litres per minute. Closed system anesthesia supplies fresh gas flow as needed to make up the recirculated gas volume to compensate for the patient’s need for oxygen and anesthetic agents. Anaesthetic vapouriser An anesthetic vaporizer (American English) or anaesthetic vapouriser (British English) is a device generally attached to an anesthetic machine which delivers a given concentration of a volatile anesthetic agent. It works by controlling the vaporization of anesthetic agents from liquid, and then accurately controlling the concentration in which these are added to the fresh gas flow. The design of these devices takes account of varying: ambient temperature, fresh gas flow, and agent vapor pressure. There are generally two types of vaporizers: plenum and drawover. Both have distinct advantages and disadvantages. The dual-circuit gas-vapor blender is a third type of vaporizer used exclusively for the agent desflurane. Plenum vaporizers The plenum vaporizer is driven by positive pressure from the anesthetic machine, and is usually mounted on the machine. The performance of the vaporizer does not change regardless of whether the patient is breathing spontaneously or is mechanically ventilated. The internal resistance of the vaporizer is usually high, but because the supply pressure is constant the vaporizer can be accurately calibrated to deliver a precise concentration of volatile anesthetic vapor over a wide range of fresh gas flows. The plenum vaporizer is an elegant device which works reliably, without external power, for many hundreds of hours of continuous use, and requires very little maintenance. The plenum vaporizer works by accurately splitting the incoming gas into two streams. One of these streams passes straight through the vaporizer in the bypass channel. The other is diverted into the vaporizing chamber. Gas in the vaporizing chamber becomes fully saturated with volatile anesthetic vapor. This gas is then mixed with the gas in the bypass channel before leaving the vaporizer. A typical volatile agent, isoflurane, has a saturated vapor pressure of 32kPa (about 1/3 of an atmosphere). This means that the gas mixture leaving the vaporizing chamber has a partial pressure of isoflurane of 32kPa. At sea-level (atmospheric pressure is about 101kPa), this equates conveniently to a concentration of 32%. However, the output of the vaporizer is typically set at 1–2%, which means that only a very small proportion of the fresh gas needs to be diverted through the vaporizing chamber (this proportion is known as the splitting ratio). 
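As a rough numerical illustration of the splitting ratio just described, the sketch below uses the figures quoted above (isoflurane saturated vapour pressure of about 32 kPa, ambient pressure of about 101 kPa) to estimate how much of the fresh gas must be diverted through the vaporizing chamber for a given dial setting. The function name, the 5 L/min fresh gas flow and the ideal-gas simplifications are assumptions introduced for illustration only; they do not describe any particular machine.

```python
# Simplified plenum-vaporizer arithmetic (ideal-gas sketch, ignoring
# temperature changes).  Gas leaving the vaporizing chamber is saturated
# with vapour, so vapour makes up SVP/P_ambient of that stream.
def splitting_ratio(dial_fraction, svp_kpa=32.0, ambient_kpa=101.0,
                    fresh_gas_flow_lpm=5.0):
    """Return (chamber flow, bypass flow, vapour output) in L/min needed
    to deliver `dial_fraction` (e.g. 0.01 for 1%) of anaesthetic vapour."""
    # Vapour volume that must be added to the fresh gas flow:
    vapour = dial_fraction * fresh_gas_flow_lpm / (1.0 - dial_fraction)
    # Carrier gas that must pass through the chamber to pick up that vapour:
    chamber = vapour * (ambient_kpa - svp_kpa) / svp_kpa
    bypass = fresh_gas_flow_lpm - chamber
    return chamber, bypass, vapour

chamber, bypass, vapour = splitting_ratio(0.01)   # 1% isoflurane at 5 L/min
print(f"chamber {chamber:.3f} L/min, bypass {bypass:.3f} L/min "
      f"(about 1 : {bypass / chamber:.0f} split), vapour {vapour:.3f} L/min")
```

For a 1% dial setting this works out to roughly one part in forty-five of the fresh gas passing through the chamber, which is why the bypass channel carries almost all of the flow.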
It can also be seen that a plenum vaporizer can only work one way round: if it is connected in reverse, much larger volumes of gas enter the vaporizing chamber, and therefore potentially toxic or lethal concentrations of vapor may be delivered. (Technically, although the dial of the vaporizer is calibrated in volume percent (e.g. 2%), what it actually delivers is a partial pressure of anesthetic agent (e.g. 2kPa)). The performance of the plenum vaporizer depends extensively on the saturated vapor pressure of the volatile agent. This is unique to each agent, so it follows that each agent must only be used in its own specific vaporizer. Several safety systems, such as the Fraser-Sweatman system, have been devised so that filling a plenum vaporizer with the wrong agent is extremely difficult. A mixture of two agents in a vaporizer could result in unpredictable performance from the vaporizer. Saturated vapor pressure for any one agent varies with temperature, and plenum vaporizers are designed to operate within a specific temperature range. They have several features designed to compensate for temperature changes (especially cooling by evaporation). They often have a metal jacket weighing about 5 kg, which equilibrates with the temperature in the room and provides a source of heat. In addition, the entrance to the vaporizing chamber is controlled by a bimetallic strip, which admits more gas to the chamber as it cools, to compensate for the loss of efficiency of evaporation. The first temperature-compensated plenum vaporizer was the Cyprane 'FluoTEC' Halothane vaporizer, released onto the market shortly after Halothane was introduced into clinical practice in 1956. Drawover vaporizers The drawover vaporizer is driven by negative pressure developed by the patient, and must therefore have a low resistance to gas flow. Its performance depends on the minute volume of the patient: its output drops with increasing minute ventilation. The design of the drawover vaporizer is much simpler: in general it is a simple glass reservoir mounted in the breathing attachment. Drawover vaporizers may be used with any liquid volatile agent (including older agents such as diethyl ether or chloroform, although it would be dangerous to use desflurane). Because the performance of the vaporizer is so variable, accurate calibration is impossible. However, many designs have a lever which adjusts the amount of fresh gas which enters the vaporizing chamber. The drawover vaporizer may be mounted either way round, and may be used in circuits where re-breathing takes place, or inside the circle breathing attachment. Drawover vaporizers typically have no temperature compensating features. With prolonged use, the liquid agent may cool to the point where condensation and even frost may form on the outside of the reservoir. This cooling impairs the efficiency of the vaporizer. One way of minimising this effect is to place the vaporizer in a bowl of water. The relative inefficiency of the drawover vaporizer contributes to its safety. A more efficient design would produce too much anesthetic vapor. The output concentration from a drawover vaporizer may greatly exceed that produced by a plenum vaporizer, especially at low flows. For safest use, the concentration of anesthetic vapor in the breathing attachment should be continuously monitored. Despite its drawbacks, the drawover vaporizer is cheap to manufacture and easy to use. In addition, its portable design means that it can be used in the field or in veterinary anesthesia. 
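Picking up the earlier parenthetical point that a calibrated plenum vaporizer effectively delivers a partial pressure rather than a volume fraction, the sketch below shows the consequence at reduced ambient pressure. The assumption that the delivered partial pressure stays fixed as ambient pressure falls is a first-order simplification introduced here for illustration, not a statement about any specific device.

```python
# Illustration: if the delivered partial pressure is roughly constant,
# the delivered volume percent rises as ambient pressure falls.
def delivered(dial_percent, calib_ambient_kpa=101.0, actual_ambient_kpa=101.0):
    partial_pressure = dial_percent / 100.0 * calib_ambient_kpa  # assumed fixed
    volume_percent = partial_pressure / actual_ambient_kpa * 100.0
    return partial_pressure, volume_percent

for ambient in (101.0, 85.0, 70.0):   # sea level down to roughly 3000 m
    pp, vol = delivered(2.0, actual_ambient_kpa=ambient)
    print(f"ambient {ambient:5.1f} kPa: ~{pp:.1f} kPa partial pressure "
          f"= {vol:.1f} vol%")
```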
Dual-circuit gas–vapor blender The third category of vaporizer (the dual-circuit gas–vapor blender) was created specifically for the agent desflurane. Desflurane boils at 23.5 °C, which is very close to room temperature. This means that at normal operating temperatures, the saturated vapor pressure of desflurane changes greatly with only small fluctuations in temperature. As a result, the features of a normal plenum vaporizer are not sufficient to ensure an accurate concentration of desflurane. Additionally, on a very warm day, all the desflurane would boil, and very high (potentially lethal) concentrations of desflurane might reach the patient. A desflurane vaporizer (e.g. the TEC 6 produced by Datex-Ohmeda) is heated to 39 °C and pressurized to 194 kPa. It is mounted on the anesthetic machine in the same way as a plenum vaporizer, but its function is quite different. It uses heat to vaporize desflurane in a chamber, and injects small amounts of pure desflurane vapor into the fresh gas flow. A transducer senses the fresh gas flow. A warm-up period is required after switching on. The desflurane vaporizer will fail if mains power is lost. Alarms sound if the vaporizer is nearly empty. An electronic display indicates the level of desflurane in the vaporizer. The expense and complexity of the desflurane vaporizer have contributed to the relative lack of popularity of desflurane, although it has gained in popularity in recent years. Historical vaporizers Historically, ether (the first volatile agent) was first delivered by John Snow's inhaler (1847) but was superseded by the use of chloroform (1848). Ether then slowly made a revival (1862–1872) with regular use via Curt Schimmelbusch's "mask", a narcosis mask for dripping liquid ether. Now obsolete, it was a mask constructed of wire and covered with cloth. Pressure and demand from dental surgeons for a more reliable method of administering ether helped modernize its delivery. In 1877, Clover invented an ether inhaler with a water jacket, and by the late 1890s alternatives to ether came to the fore, mainly due to the introduction of spinal anesthesia. Subsequently, ether use declined (1930–1956) with the introduction of cyclopropane, trichloroethylene, and halothane. By the 1980s, the anesthetic vaporizer had evolved considerably; subsequent modifications led to a raft of additional safety features such as temperature compensation, a bimetallic strip, a temperature-adjusted splitting ratio and anti-spill measures. Components of a typical machine The breathing circuit is the ducting through which the breathing gases flow from the machine to the patient and back, and includes components for mixing, adjusting, and monitoring the composition of the breathing gas, and for removing carbon dioxide. A modern anaesthetic machine includes at minimum the following components: Connections to piped oxygen, medical air, and nitrous oxide from a wall supply in the healthcare facility, or reserve gas cylinders of oxygen, air, and nitrous oxide attached via a pin index safety system yoke with a Bodok seal Pressure gauges, regulators and 'pop-off' valves, to monitor gas pressure throughout the system and protect the machine components and patient from excessive rises. An adjustable pressure-limiting valve (commonly abbreviated to APL valve, and also referred to as an expiratory valve, relief valve or spill valve) is part of the breathing circuit; it allows excess gas to leave the system while preventing ambient air from entering.
Flowmeters such as rotameters for oxygen, air, and nitrous oxide Vaporisers to provide accurate dosage control when using volatile anaesthetics A high-flow oxygen flush, which bypasses the flowmeters and vaporisers to provide pure oxygen at 30-75 litres/minute Systems for monitoring the gases being administered to, and exhaled by, the patient, including an oxygen failure warning device Systems for monitoring the patient's heart rate, ECG, blood pressure and oxygen saturation may be incorporated, in some cases with additional options for monitoring end-tidal carbon dioxide and temperature. Breathing systems are also typically incorporated, including a manual reservoir bag for ventilation in combination with an adjustable pressure-limiting valve, as well as an integrated mechanical ventilator, to accurately ventilate the patient during anaesthesia. Safety features of modern machines Based on experience gained from analysis of mishaps, the modern anaesthetic machine incorporates several safety devices, including: an oxygen failure alarm (a.k.a. 'Oxygen Failure Warning Device' or OFWD). In older machines this was a pneumatic device called a Ritchie whistle which sounds when oxygen pressure is 38 psi descending. Newer machines have an electronic sensor. Nitrous cut-off or oxygen failure protection device, OFPD: the flow of medical nitrous-oxide is dependent on oxygen pressure. This is done at the regulator level. In essence, the nitrous-oxide regulator is a 'slave' of the oxygen regulator. i.e., if oxygen pressure is lost then the other gases can not flow past their regulator. hypoxic-mixture alarms (hypoxy guards or ratio controllers) to prevent gas mixtures which contain less than 21–25% oxygen being delivered to the patient. In modern machines it is impossible to deliver 100% nitrous oxide (or any hypoxic mixture) to the patient to breathe. Oxygen is automatically added to the fresh gas flow even if the anaesthesiologist should attempt to deliver 100% nitrous oxide. Ratio controllers usually operate on the pneumatic principle or are chain linked (link 25 system). Both are located on the rotameter assembly, unless electronically controlled. ventilator alarms, which warn of low or high airway pressures. interlocks between the vaporizers preventing inadvertent administration of more than one volatile agent concurrently alarms on all the above physiological monitors the Pin Index Safety System prevents cylinders being accidentally connected to the wrong yoke the NIST (Non-Interchangeable Screw Thread) or Diameter Index Safety System, DISS system for pipeline gases, which prevents piped gases from the wall being accidentally connected to the wrong inlet on the machine pipeline gas hoses have non-interchangeable Schrader valve connectors, which prevents hoses being accidentally plugged into the wrong wall socket The functions of the machine should be checked at the beginning of every operating list in a "cockpit-drill". Machines and associated equipment must be maintained and serviced regularly. Older machines may lack some of the safety features and refinements present on newer machines. However, they were designed to be operated without mains electricity, using compressed gas power for the ventilator and suction apparatus. Modern machines often have battery backup, but may fail when this becomes depleted. The modern anaesthetic machine still retains all the key working principles of the Boyle's machine (a British Oxygen Company trade name) in honour of the British anaesthetist Henry Boyle. 
In India, however, the trade name 'Boyle' is registered with Boyle HealthCare Pvt. Ltd., Indore MP. Various regulatory and professional bodies have formulated checklists for different countries. Machines should be cleaned between cases as they are at considerable risk of contamination with pathogens. See also References Further reading External links Virtual Anesthesia Machine (VAM) — a free transparent reality simulation of a generic anaesthetic machine from the University of Florida Various anaesthesia-related simulations Virtual Anaesthesia Textbook AnaesthesiaUK.com — resources for UK anaesthetists in training History of Richard von Foregger and the Foregger Company — written by his son, R. Foregger, this website chronicles one of the leading manufacturers and developers of anesthesiology equipment in the early 20th century. Anesthesia Equipment & Instruments: Definitions & Principles of Practical Usage Free resource which graphically explains the principles of anesthesia vaporizers on howequipmentworks.com Anesthetic machine: a presentation The Anesthesia Gas Machine by Michael P. Dosch Precision vaporisers on Anaesthesia.UK Operating Instructions & Maintenance Guidelines for Precision Vaporizers Anesthetic equipment Machines Drug delivery devices Dosage forms Breathing apparatus
Anaesthetic machine
[ "Physics", "Chemistry", "Technology", "Engineering" ]
3,918
[ "Pharmacology", "Machines", "Drug delivery devices", "Physical systems", "Mechanical engineering" ]
630,741
https://en.wikipedia.org/wiki/Arithmetic%20group
In mathematics, an arithmetic group is a group obtained as the integer points of an algebraic group, for example SL_2(Z). They arise naturally in the study of arithmetic properties of quadratic forms and other classical topics in number theory. They also give rise to very interesting examples of Riemannian manifolds and hence are objects of interest in differential geometry and topology. Finally, these two topics join in the theory of automorphic forms which is fundamental in modern number theory. History One of the origins of the mathematical theory of arithmetic groups is algebraic number theory. The classical reduction theory of quadratic and Hermitian forms by Charles Hermite, Hermann Minkowski and others can be seen as computing fundamental domains for the action of certain arithmetic groups on the relevant symmetric spaces. The topic was related to Minkowski's geometry of numbers and the early development of the study of arithmetic invariants of number fields such as the discriminant. Arithmetic groups can be thought of as a vast generalisation of the unit groups of number fields to a noncommutative setting. The same groups also appeared in analytic number theory as the study of classical modular forms and their generalisations developed. Of course the two topics were related, as can be seen for example in Langlands' computation of the volume of certain fundamental domains using analytic methods. This classical theory culminated with the work of Siegel, who showed the finiteness of the volume of a fundamental domain in many cases. For the modern theory to begin, foundational work was needed, and was provided by the work of Armand Borel, André Weil, Jacques Tits and others on algebraic groups. Shortly afterwards the finiteness of covolume was proven in full generality by Borel and Harish-Chandra. Meanwhile, there was progress on the general theory of lattices in Lie groups by Atle Selberg, Grigori Margulis, David Kazhdan, M. S. Raghunathan and others. The state of the art after this period was essentially fixed in Raghunathan's treatise, published in 1972. In the seventies Margulis revolutionised the topic by proving that in "most" cases the arithmetic constructions account for all lattices in a given Lie group. Some limited results in this direction had been obtained earlier by Selberg, but Margulis' methods (the use of ergodic-theoretical tools for actions on homogeneous spaces) were completely new in this context and were to be extremely influential on later developments, effectively renewing the old subject of geometry of numbers and allowing Margulis himself to prove the Oppenheim conjecture; stronger results (Ratner's theorems) were later obtained by Marina Ratner. In another direction the classical topic of modular forms has blossomed into the modern theory of automorphic forms. The driving force behind this effort is mainly the Langlands program initiated by Robert Langlands. One of the main tools used there is the trace formula originating in Selberg's work and developed in the most general setting by James Arthur. Finally arithmetic groups are often used to construct interesting examples of locally symmetric Riemannian manifolds. A particularly active research topic has been arithmetic hyperbolic 3-manifolds, which, as William Thurston wrote, "...often seem to have special beauty."
Definition and construction Arithmetic groups If G is an algebraic subgroup of GL_n(Q) for some n, then we can define an arithmetic subgroup of G(Q) as the group of integer points G(Q) ∩ GL_n(Z). In general it is not so obvious how to make precise sense of the notion of "integer points" of a Q-group, and the subgroup defined above can change when we take different embeddings G → GL_n(Q). Thus a better notion is to take for definition of an arithmetic subgroup of G(Q) any group Λ which is commensurable (this means that both Γ/(Γ ∩ Λ) and Λ/(Γ ∩ Λ) are finite sets) with a group Γ defined as above (with respect to any embedding into GL_n). With this definition, to the algebraic group G is associated a collection of "discrete" subgroups all commensurable to each other. Using number fields A natural generalisation of the construction above is as follows: let F be a number field with ring of integers O and G an algebraic group over F. If we are given an embedding ρ: G → GL_n defined over F then the subgroup ρ^(-1)(GL_n(O)) can legitimately be called an arithmetic group. On the other hand, the class of groups thus obtained is not larger than the class of arithmetic groups as defined above. Indeed, if we consider the algebraic group G′ over Q obtained by restricting scalars from F to Q and the Q-embedding ρ′: G′ → GL_{dn} induced by ρ (where d = [F : Q]) then the group constructed above is equal to ρ′^(-1)(GL_{dn}(Z)). Examples The classical example of an arithmetic group is SL_2(Z), or the closely related groups PSL_2(Z), GL_2(Z) and PGL_2(Z). For n = 2 the group SL_2(Z), or sometimes PSL_2(Z), is called the modular group as it is related to the modular curve. Similar examples are the Siegel modular groups Sp_{2g}(Z). Other well-known and studied examples include the Bianchi groups SL_2(O_{-m}), where m is a square-free positive integer and O_{-m} is the ring of integers in the field Q(√-m), and the Hilbert–Blumenthal modular groups SL_2(O_F), where O_F is the ring of integers of a totally real number field F. Another classical example is given by the integral elements in the orthogonal group of a quadratic form defined over a number field. A related construction is by taking the unit groups of orders in quaternion algebras over number fields (for example the Hurwitz quaternion order). Similar constructions can be performed with unitary groups of hermitian forms; a well-known example is the Picard modular group. Arithmetic lattices in semisimple Lie groups When G is a Lie group one can define an arithmetic lattice in G as follows: for any algebraic group H defined over Q such that there is a morphism H(R) → G with compact kernel, the image in G of an arithmetic subgroup of H(Q) is an arithmetic lattice in G. Thus, for example, if G = H(R) and H is a subgroup of GL_n, then H(Z) is an arithmetic lattice in G (but there are many more, corresponding to other embeddings); for instance, SL_n(Z) is an arithmetic lattice in SL_n(R). The Borel–Harish-Chandra theorem A lattice in a Lie group is usually defined as a discrete subgroup with finite covolume. The terminology introduced above is coherent with this, as a theorem due to Borel and Harish-Chandra states that an arithmetic subgroup in a semisimple Lie group is of finite covolume (the discreteness is obvious). The theorem is more precise: it says that the arithmetic lattice is cocompact if and only if the "form" of G used to define it (i.e. the Q-group H) is anisotropic. For example, the arithmetic lattice associated to a quadratic form in n variables over Q will be co-compact in the associated orthogonal group if and only if the quadratic form does not vanish at any nonzero point in Q^n. Margulis arithmeticity theorem The spectacular result that Margulis obtained is a partial converse to the Borel–Harish-Chandra theorem: for certain Lie groups any lattice is arithmetic. This result is true for all irreducible lattices in semisimple Lie groups of real rank at least two.
For example, all lattices in SL_n(R) are arithmetic when n ≥ 3. The main new ingredient that Margulis used to prove his theorem was the superrigidity of lattices in higher-rank groups that he proved for this purpose. Irreducibility only plays a role when G has a factor of real rank one (otherwise the theorem always holds) and G is not simple: it means that for any product decomposition G = G_1 × G_2 the lattice is not commensurable to a product of lattices in each of the factors G_1 and G_2. For example, the lattice SL_2(Z[√2]) in SL_2(R) × SL_2(R) is irreducible, while SL_2(Z) × SL_2(Z) is not. The Margulis arithmeticity (and superrigidity) theorem holds for certain rank 1 Lie groups, namely Sp(n,1) for n ≥ 2 and the exceptional group F_4^(-20). It is known not to hold in all of the groups SO(n,1) (by the construction of Gromov and Piatetski-Shapiro) and in SU(n,1) when n = 1, 2, 3. There are no known non-arithmetic lattices in the groups SU(n,1) when n ≥ 4. Arithmetic Fuchsian and Kleinian groups An arithmetic Fuchsian group is constructed from the following data: a totally real number field F, a quaternion algebra A over F and an order O in A. It is asked that for one embedding σ: F → R the algebra A ⊗_σ R be isomorphic to the matrix algebra M_2(R) and for all others to the Hamilton quaternions. Then the group of units of reduced norm 1 in O is a lattice in (A ⊗_σ R)^1, which is isomorphic to SL_2(R), and it is co-compact in all cases except when A is the matrix algebra over Q. All arithmetic lattices in SL_2(R) are obtained in this way (up to commensurability). Arithmetic Kleinian groups are constructed similarly except that F is required to have exactly one complex place and A to be the Hamilton quaternions at all real places. They exhaust all arithmetic commensurability classes in SL_2(C). Classification For every semisimple Lie group G it is in theory possible to classify (up to commensurability) all arithmetic lattices in G, in a manner similar to the cases explained above. This amounts to classifying the algebraic groups whose real points are isomorphic up to a compact factor to G. The congruence subgroup problem A congruence subgroup is (roughly) a subgroup of an arithmetic group defined by taking all matrices satisfying certain equations modulo an integer, for example the group of 2 by 2 integer matrices with diagonal (respectively off-diagonal) coefficients congruent to 1 (respectively 0) modulo a positive integer. These are always finite-index subgroups and the congruence subgroup problem roughly asks whether all finite-index subgroups are obtained in this way. The conjecture (usually attributed to Jean-Pierre Serre) is that this is true for (irreducible) arithmetic lattices in higher-rank groups and false in rank-one groups. It is still open in this generality but there are many results establishing it for specific lattices (in both its positive and negative cases). S-arithmetic groups Instead of taking integral points in the definition of an arithmetic lattice one can take points which are only integral away from a finite number of primes. This leads to the notion of an S-arithmetic lattice (where S stands for the set of primes inverted). The prototypical example is SL_2(Z[1/p]). They are also naturally lattices in certain topological groups, for example SL_2(Z[1/p]) is a lattice in SL_2(R) × SL_2(Q_p). Definition The formal definition of an S-arithmetic group for S a finite set of prime numbers is the same as for arithmetic groups with GL_n(Z) replaced by GL_n(Z[1/N]), where N is the product of the primes in S. Lattices in Lie groups over local fields The Borel–Harish-Chandra theorem generalizes to S-arithmetic groups as follows: if Γ is an S-arithmetic group in a Q-algebraic group G, then Γ is a lattice in the locally compact group G(R) × ∏_{p ∈ S} G(Q_p).
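As a concrete check on the congruence subgroups described above, here is a short sketch (an illustrative addition, not taken from the article): the principal congruence subgroup Γ(N) of SL_2(Z) is the kernel of reduction modulo N, and since that reduction map is surjective, the index of Γ(N) in SL_2(Z) equals the order of SL_2(Z/NZ), which can be counted by brute force for small N.

from itertools import product

def sl2_order(N):
    # Count 2x2 matrices over Z/NZ with determinant 1. Because reduction
    # modulo N from SL_2(Z) is surjective, this number equals the index
    # of the principal congruence subgroup Gamma(N) in SL_2(Z).
    return sum(1 for a, b, c, d in product(range(N), repeat=4)
               if (a * d - b * c) % N == 1)

for N in (2, 3, 4, 5):
    print(N, sl2_order(N))
# Prints 6, 24, 48 and 120, matching the formula N^3 * prod_{p | N} (1 - 1/p^2).

This only illustrates the finite-index statement in the simplest case; it says nothing, of course, about whether every finite-index subgroup contains such a congruence subgroup, which is the actual content of the congruence subgroup problem.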
Some applications Explicit expander graphs Arithmetic groups with Kazhdan's property (T) or the weaker property (τ) of Lubotzky and Zimmer can be used to construct expander graphs (Margulis), or even Ramanujan graphs (Lubotzky-Phillips-Sarnak). Such graphs are known to exist in abundance by probabilistic results but the explicit nature of these constructions makes them interesting. Extremal surfaces and graphs Congruence covers of arithmetic surfaces are known to give rise to surfaces with large injectivity radius. Likewise the Ramanujan graphs constructed by Lubotzky-Phillips-Sarnak have large girth. It is in fact known that the Ramanujan property itself implies that the local girths of the graph are almost always large. Isospectral manifolds Arithmetic groups can be used to construct isospectral manifolds. This was first realised by Marie-France Vignéras and numerous variations on her construction have appeared since. The isospectrality problem is in fact particularly amenable to study in the restricted setting of arithmetic manifolds. Fake projective planes A fake projective plane is a complex surface which has the same Betti numbers as the complex projective plane but is not biholomorphic to it; the first example was discovered by Mumford. By work of Klingler (also proved independently by Yeung) all such are quotients of the 2-ball by arithmetic lattices in PU(2,1). The possible lattices have been classified by Prasad and Yeung and the classification was completed by Cartwright and Steger, who determined, by computer-assisted computations, all the fake projective planes in each Prasad-Yeung class. References Algebraic groups Group theory Number theory Differential geometry
Arithmetic group
[ "Mathematics" ]
2,487
[ "Group theory", "Fields of abstract algebra", "Discrete mathematics", "Number theory" ]
630,795
https://en.wikipedia.org/wiki/Ecstasy%20%28emotion%29
Ecstasy () is a subjective experience of total involvement of the subject with an object of their awareness. In classical Greek literature, it refers to removal of the mind or body "from its normal place of function." Total involvement with an object of interest is not an ordinary experience. Ecstasy is an example of an altered state of consciousness characterized by diminished awareness of other objects or the total lack of the awareness of surroundings and everything around the object. The word is also used to refer to any heightened state of consciousness or intensely pleasant experience. It is also used more specifically to denote states of awareness of non-ordinary mental spaces, which may be perceived as spiritual (the latter type of ecstasy often takes the form of religious ecstasy). Description From a psychological perspective, ecstasy is a loss of self-control and sometimes a temporary loss of consciousness, which is often associated with religious mysticism, sexual intercourse and the use of certain drugs. For the duration of the ecstasy the ecstatic is out of touch with ordinary life and is capable neither of communication with other people nor of undertaking normal actions. The experience can be brief in physical time, or it can go on for hours. Subjective perception of time, space or self may strongly change or disappear during ecstasy. For instance, if one is concentrating on a physical task, then any intellectual thoughts may cease. On the other hand, making a spirit journey in an ecstatic trance involves the cessation of voluntary bodily movement. Types Ecstasy can be deliberately induced using religious or creative activities, meditation, music, dancing, breathing exercises, physical exercise, sexual intercourse or consumption of psychotropic drugs. The particular technique that an individual uses to induce ecstasy is usually also associated with that individual's particular religious and cultural traditions. Sometimes an ecstatic experience takes place due to occasional contact with something or somebody perceived as extremely beautiful or holy, or without any known reason. "In some cases, a person might obtain an ecstatic experience 'by mistake'. Maybe the person unintentionally triggers one of the probably many, physiological mechanisms through which such an experience can be reached. In such cases, it is not rare to find that the person later, by reading, looks for an interpretation and maybe finds it within a tradition." People interpret the experience afterward according to their culture and beliefs (as a revelation from God, a trip to the world of spirits or a psychotic episode). "When a person is using an ecstasy technique, he usually does so within a tradition. When he reaches an experience, a traditional interpretation of it already exists." The experience together with its subsequent interpretation may strongly and permanently change the value system and the worldview of the subject (e.g. to cause religious conversion). In 1925, James Leuba wrote: "Among most uncivilized populations, as among civilized peoples, certain ecstatic conditions are regarded as divine possession or as union with the divine. These states are induced by means of drugs, by physical excitement, or by psychical means. But, however produced and at whatever level of culture they may be found, they possess certain common features which suggest even to the superficial observer some profound connection. 
Always described as delightful beyond expression, these awesome ecstatic experiences end commonly in mental quiescence or even in total unconsciousness." He prepares his readers "... to recognize a continuity of impulse, of purpose, of form and of result between the ecstatic intoxication of the savage and the absorption in God of the Christian mystic." "In everyday language, the word 'ecstasy' denotes an intense, euphoric experience. For obvious reasons, it is rarely used in a scientific context; it is a concept that is extremely hard to define." See also Ecstatic seizures Entheogen Flow (psychology) Lisztomania Poem of Ecstasy Soul flight References Further reading William James, "Varieties of Religious Experience", 1902. Milan Kundera on ecstasy: a quote from Milan Kundera's book "Testaments Betrayed" (1993) Marghanita Laski, "Ecstasy. A study of some Secular and Religious Experiences", London, Cresset Press, 1961. Review Marghanita Laski, "Everyday Ecstasy", Thames and Hudson, 1980. . Evelyn Underhill, "Mysticism", 1911. ch. 8 Timothy Leary, "The Politics of Ecstasy", 1967. External links St. Francis in Ecstasy (painting by Caravaggio) "Dances of Ecstasy", documentary by Michelle Mahrer and Nichole Ma Cognition Concepts in aesthetics Concepts in epistemology Concepts in metaphysics Concepts in the philosophy of mind Consciousness Emotions Happiness Love Mental content Mental processes Mental states Metaphysical theories Metaphysics of mind Mysticism Neuroscience Ontology Philosophy of life Philosophy of love Philosophy of music Philosophy of psychology Philosophy of religion Pleasure Positive mental attitude Psychological concepts Spirituality
Ecstasy (emotion)
[ "Biology" ]
979
[ "Neuroscience" ]
630,814
https://en.wikipedia.org/wiki/Space%20Age
The Space Age is a period encompassing the activities related to the space race, space exploration, space technology, and the cultural developments influenced by these events, beginning with the launch of Sputnik 1 on October 4, 1957, and continuing to the present. This period is characterized by changes in emphasis on particular areas of space exploration and applications. Initially, the United States and the Soviet Union invested unprecedented amounts of resources in breaking records and being first to meet milestones in crewed and uncrewed exploration. The United States established the National Aeronautics and Space Administration (NASA) and the USSR established the Kosmicheskaya programma SSSR to meet these goals. This period of competition gave way to cooperation between those nations and emphasis on scientific research and commercial applications of space-based technology. Eventually other nations became spacefaring. They formed organizations such as the European Space Agency (ESA), the Japan Aerospace Exploration Agency (JAXA), the Indian Space Research Organization (ISRO), and the China National Space Administration (CNSA). When the USSR dissolved, the Russian Federation continued its program as Roscosmos. In the early 2020s, some journalists have used the phrase "New Space Age" in reference to a resurgence of innovation and public interest in space exploration as well as commercial applications of low Earth orbit (LEO) and more distant destinations. New developments include the participation of billionaires in crewed space travel, including space tourism and interplanetary travel. Periodization The periodization of the Space Age can differ substantially, with some differentiating between a first Space Age and a second Space Age, which are separated at the turn of the 1980s/1990s. Periods Foundational developments to suborbital spaceflights Some vehicles reached suborbital space much earlier than the launch of Sputnik. In June 1944, a German V-2 rocket became the first man-made object to enter space, albeit only briefly. In March 1926 American rocket pioneer Robert H. Goddard launched the world's first liquid-fuel rocket, but it did not reach outer space. Since the Germans undertook the sub-orbital V-2 rocket flight in secrecy, it was not initially public knowledge. Also, the German launches, as well as the subsequent sounding rocket tests performed in both the United States and the Soviet Union during the late 1940s and early 1950s, were not considered significant enough to define the start of the space age because they did not reach orbit. A rocket powerful enough to reach orbit could also be used as an intercontinental ballistic missile that could deliver a warhead to any location on Earth. Some commentators claim this is why the orbital standard is commonly used to define when the space age began. 1957 to 1970s/1980s: Establishment and Space Race The Space Race was the first era of the Space Age. It was a race between the United States and the Soviet Union which began with the Soviet Union's October 4, 1957, launch of Earth's first artificial satellite Sputnik 1 during the International Geophysical Year. Sputnik 1 weighed about 83.6 kg and orbited the Earth once every 98 minutes. The race resulted in rapid advances in rocketry, materials science, and other areas. One of the underlying motivations for the space race was military. The two nations were also in a nuclear arms race following the Second World War. Both nations made use of German missile technology and of scientists from the German missile program.
The advantages in aviation and rocketry required for delivery systems were seen as necessary for national security and political superiority. Cold War-era competition between the United States and the Soviet Union is one of the reasons the space age happened at that time. Since then, the space age has continued, driven by the generation of scientific knowledge, the innovation and creation of markets, inspiration, and agreements between the space-faring nations. Other reasons for the continuation of the space age include defending Earth from hazardous objects such as asteroids and comets. Much of the technology developed for space applications has been spun off and found additional uses, such as memory foam. In 1958 the United States launched its first satellite, Explorer 1. The same year, President Dwight D. Eisenhower created the National Aeronautics and Space Administration, commonly known as NASA. Prior to the first attempted human spaceflight, various animals were flown into outer space to identify potential detrimental effects of high g-forces in takeoff and landing, microgravity, and radiation exposure at high altitudes. The Space Race reached its peak with the Apollo program, which captured the imagination of much of the world's population. From 1961 to 1964, NASA's budget was increased almost 500 percent, and the lunar landing program eventually involved some 34,000 NASA employees and 375,000 employees of industrial and university contractors. The Soviet Union proceeded tentatively with its own lunar landing program which it did not publicly acknowledge, partly due to internal debate over its necessity and the untimely death (in January 1966) of Sergey Korolev, chief engineer of the Soviet space program. The landing of Apollo 11 was watched by over 500 million people around the world and is widely recognized as one of the defining moments of the 20th century. Since then, public attention has largely moved to other areas. The last major leap in the USSR-USA Space Race came with the Skylab and Salyut programs, which established the first space stations for the U.S. and USSR in Earth orbit following the termination of both countries' moon programs. At the conclusion of the Apollo program, crewed flights from the United States became rare and then ended altogether while the shuttle program was getting ready to kick into gear; the space race had been over since the Apollo-Soyuz test project of 1975, which started a period of U.S.–Soviet co-operation. The Soviet Union continued using the Soyuz spacecraft. The shuttle program restored spaceflight to the U.S. following the Skylab program, but the Space Shuttle Challenger disaster in 1986 marked a significant decline in crewed Shuttle launches. Following the disaster, NASA grounded all Shuttles over safety concerns until 1988. During the 1990s funding for space-related programs fell sharply as the remaining structures of the now-dissolved Soviet Union disintegrated and NASA no longer had any direct competition, engaging rather in more substantial cooperation like the Shuttle-Mir program and its follow-up, the International Space Station. Diversification Participation in spaceflight by private actors and by countries besides the Soviet Union and the United States had been the case from the very start of spaceflight development: a first commercial satellite had been launched by 1962, and in 1965 a third country achieved orbital spaceflight.
The very beginning of the space age, the launch of Sputnik, took place in the context of international exchange, the International Geophysical Year of 1957. Soon into the space age, the international community also came together to begin negotiating dedicated international law governing outer space activity. In the 1970s the Soviet Union started to invite other countries to fly their people into space through its Intercosmos program, and the United States started to include women and people of colour in its astronaut program. A first exchange between the United States and the Soviet Union was formalized in the 1962 Dryden-Blagonravov agreement, calling for cooperation on the exchange of data from weather satellites, a study of the Earth's magnetic field, and joint tracking of the NASA Echo II balloon satellite. In 1963 President Kennedy could even interest Premier Khrushchev in a joint crewed Moon landing, but after the assassination of Kennedy in November 1963 and Khrushchev's removal from office in October 1964, the competition between the two nations' crewed space programs heated up, and talk of cooperation became less common, due to tense relations and military implications. Only later did the United States and the Soviet Union slowly start to exchange more information and engage in joint programs, particularly in the light of the development of safety standards since 1970, producing the co-developed APAS-75 and later docking standards. Most notably, this signaled the ending of the first era of the space age, the Space Race, through the Apollo-Soyuz mission, which became the basis for the Shuttle-Mir program and eventually the International Space Station programme. Such international cooperation and international spaceflight organization were further fueled by more and more countries achieving spaceflight capabilities, together with a private spaceflight sector that had become established by the 1980s, both being embodied by the European Space Agency. This allowed the formation of an international and commercial post-Space Race spaceflight economy and era, and by the 1990s space exploration and space-related technologies were increasingly perceived by the public as commonplace. This increasingly cooperative diversification persisted until competition started to rise again under these diversified conditions, from the 2010s and particularly by the early 2020s. 2010s to present: New Space competition In the early 21st century, the Ansari X Prize competition was set up to help jump-start private spaceflight. The winner, SpaceShipOne, in 2004 became the first spaceship not funded by a government agency. Several countries now have space programs, ranging from related technology ventures to full-fledged space programs with launch facilities. There are many scientific and commercial satellites in use today, with thousands of satellites in orbit, and several countries have plans to send humans into space. Some of the countries joining this new race are France, India, China, Israel and the United Kingdom, all of which have employed surveillance satellites. There are several other countries with less extensive space programs, including Brazil, Germany, Ukraine, and Spain. As for the United States space program, NASA permanently grounded all U.S. Space Shuttles in 2011. NASA has since relied on Russia and SpaceX to take American astronauts to and from the International Space Station. NASA is currently constructing a deep-space crew capsule named Orion. NASA's goal with this new space capsule is to carry humans to Mars.
The Orion spacecraft is due to be completed in the early 2020s. NASA is hoping that this mission will "usher in a new era of space exploration." Another major factor affecting the current Space Age is the privatization of space flight. A significant private spaceflight company is SpaceX, which became the proprietor of one of the world's most capable operational launch vehicles when it launched its current largest rocket, the Falcon Heavy, in 2018. Elon Musk, the founder and CEO of SpaceX, has put forward the goal of establishing a colony of one million people on Mars by 2050, and the company is developing its Starship launch vehicle to facilitate this. Since the Demo-2 mission for NASA in 2020, in which SpaceX launched astronauts to the International Space Station for the first time, the company has maintained an orbital human spaceflight capability. Blue Origin, a private company founded by Amazon.com founder Jeff Bezos, is developing rockets for use in space tourism, commercial satellite launches, and eventual missions to the Moon and beyond. Richard Branson's company Virgin Galactic is concentrating on launch vehicles for space tourism. A spinoff company, Virgin Orbit, air-launches small satellites with its LauncherOne rocket. Another small-satellite launcher, Rocket Lab, has developed the Electron rocket and the Photon satellite bus for sending spacecraft further into the Solar System; the company also plans to introduce the larger Neutron launch vehicle in 2025. Elon Musk has stated that the main reason he founded SpaceX is to make humanity a multiplanetary species, citing reasons such as ensuring the long-term continuation of our species and protecting the "light of consciousness". He also said, "You want to wake up in the morning and think the future is going to be great - and that's what being a spacefaring civilization is all about. It's about believing in the future and thinking that the future will be better than the past. And I can't think of anything more exciting than going out there and being among the stars." The Space Age marked a major comeback with the launch of NASA's Space Launch System during the Artemis I mission on November 16, 2022; it was the first time a human-rated spacecraft had been to the Moon in nearly 50 years, and it marked the return of the United States' capability to get astronauts to the Moon with the Space Launch System and Orion. Additional goals for the 2020s include completion of the Lunar Gateway, mankind's first space station around the Moon, and the first crewed moon landing since the Apollo era with Artemis III. The U.S. military has also joined the new space age with the creation of the new Space Force on December 20, 2019. Chronology Cultural influences Arts and architecture The Space Age is considered to have influenced: Automotive design: Virgil Exner's Forward Look, 1957–1961 Googie architecture Space Age fashions by André Courrèges, Pierre Cardin, Paco Rabanne, Rudi Gernreich, Emanuel Ungaro, Jean-Marie Armand, Michèle Rosier, and Diana Dew Furniture design of the 1950s and '60s by Eero Saarinen, Arne Jacobsen, Eero Aarnio, and Verner Panton Amusement park attractions, such as TWA Moonliner and Mission: Space. Cold War playground equipment Music The Space Age also inspired musical genres: Space age pop Space music Space rock Space-themed music See also SEDS Information Age Jet Age Atomic Age References External links Interactive media
Spaceflight Historical eras 20th century 1957 introductions 1957 in spaceflight
Space Age
[ "Astronomy" ]
2,692
[ "Spaceflight", "Outer space" ]
630,922
https://en.wikipedia.org/wiki/Astronomical%20Society%20of%20the%20Pacific
The Astronomical Society of the Pacific (ASP) is an American scientific and educational organization, founded in San Francisco on February 7, 1889, immediately following the solar eclipse of January 1, 1889. Its name derives from its origins on the Pacific Coast, but today it has members all over the country and the world. It has the legal status of a nonprofit organization. It is the largest general astronomy education society in the world, with members from over 40 countries. Education and outreach programs The ASP's mission is to promote public interest in and awareness of astronomy (and increase scientific literacy) through its publications, web site, and many educational and outreach programs. The NASA Night Sky Network – a community of more than 450 astronomy clubs across the U.S. that share their time and telescopes to engage the public with unique astronomy experiences. The ASP provides training and materials to enhance clubs outreach activities, and inspires more than four million people through their participation in 30,000+ plus events. Eclipse Ambassadors Off the Path is a NASA partner program to prepare 500 communities off the path of the 2024 Total Eclipse with the awe and science of solar eclipses. Partnering eclipse enthusiasts and undergraduate students create new partnerships with underserved audiences in their community. Reaching for the Stars: NASA Science for Girl Scouts – The ASP is partnered on a NASA project to create new astronomy badges for Girl Scouts, connect them with their local astronomy clubs, and train amateur astronomers to make their outreach more girl-friendly. The ASP also connects adult Girl Scout volunteers to NASA's Night Sky Network (NSN). My Sky Tonight – a set of research-based, science-rich astronomy activities that engage pre-kindergarten aged children, and trained hundreds of educators at museums, parks, and libraries across the U.S. on how to effectively engage their youngest visitors (ages 3–5) in astronomy. The AAS Astronomy Ambassadors Program – provides mentoring and training experiences for young astronomers just starting their careers. The American Astronomical Society, in partnership with the ASP, members of the Center for Astronomy Education (CAE), and other organizations active in science education and public outreach, has created the program, which involves a series of professional-development workshops and a community of practice designed to help improve participants communication skills and effectiveness in doing outreach to students and the public. On-the-Spot Assessment – a research-to-practice initiative led by the ASP (in collaboration with Oregon State University, the Portal to the Public Network via the Institute for Learning Innovation, and the National Radio Astronomy Observatory) to develop a set of embedded assessment strategies and professional development to support research scientists in effectively communicating science to the public. The Galileoscope Project – Developed in 2009 for the International Year of Astronomy, the Galileoscope has become the centerpiece for teaching about telescopes in many programs. As a key component of the Galileo Teacher Training Program, the Astronomical Society of the Pacific engaged hundreds of educators in professional development related to telescopes and the Galileoscope. Project PLANET – Leveraging resources developed as a part of the My Sky Tonight project to explore their usefulness in a formal, classroom setting. 
Project ASTRO – a national program that improves the teaching of astronomy and physical science (using hands-on inquiry-based activities) by pairing amateur and professional astronomers with 4th through 9th grade teachers and classes. Family ASTRO – a project that develops kits and games to help families enjoy astronomy in their leisure time and trains astronomers, educators, and community leaders. Astronomy from the Ground Up (AFGU) – Providing informal science educators and interpreters with new and innovative ways to communicate astronomy. AFGU is a growing community of hundreds of educators from museums, science centers, nature centers, and parks around the U.S., who are actively enhancing and expanding their capacity to address astronomy topics for their visitors. Classroom activities, videos, and resources in astronomy (many developed by the Society's educational staff) are sold through their online AstroShop or made available free through the website. The ASP assists with astronomy education and outreach by partnering with other organizations both in the United States and internationally, and organizes an annual meeting to promote the appreciation and understanding of astronomy. Publications The society promotes astronomy education through several publications. The Universe in the Classroom, a free electronic educational newsletter for teachers and other educators around the world who help students of all ages learn more about the wonders of the universe through astronomy. Mercury, the ASP's quarterly on-line membership magazine, covers a wide range of astronomy topics, from history and archaeoastronomy to cutting-edge developments. First published in 1925 as the Leaflets of the ASP, Mercury is now disseminated to thousands of ASP members and schools, universities, libraries, observatories, and institutions around the world. Mercury Online, a publicly accessible companion blog for Mercury, was established in 2019 "to showcase articles by our expert columnists after they've been published in Mercury magazine." The ASP also publishes the journal Publications of the Astronomical Society of the Pacific (PASP) aimed at professional astronomers. The PASP is a technical journal of refereed papers on astronomical research covering all wavelengths and distance scales as well as papers on the latest innovations in astronomical instrumentation and software, and has been publishing journals since 1889. The Astronomical Society of the Pacific Conference Series (ASPCS) is a series of over 400 volumes of professional astronomy conference proceedings. Started in 1988, the Conference Series has grown to become a prominent publication series in the world of professional astronomy publications, and has published over 500 volumes. Volumes are sold to the attendees of the conferences of which the proceedings are published, as well as being offered through the Astronomical Society of the Pacific's AstroShop, and can be found in the libraries of major universities and research institutions worldwide. In 2004, the ASPCS stepped into electronic publishing, offering electronic access subscriptions for libraries and institutions, as well as individual access to volumes which they have purchased in hard copy form. AstroBeat is an on-line ASP-membership column, which comes out every other week, and features a behind-the-scenes report on some aspect of astronomical discovery, astronomy education, or astronomy as a hobby, written by a key participant. 
Authors have included: Clyde Tombaugh, retelling the story of his discovery of the (dwarf) planet Pluto Michael E. Brown, discussing the naming of the dwarf planet Makemake Noted astronomical photographer David Malin describing the transition from chemical to digital photography Virginia Louise Trimble explaining how she selected her list of the top ten astronomical discoveries of the last thousand years. Awards The ASP makes several different awards annually: The Bruce Medal for lifetime contribution to astronomy research. The medal is named after Catherine Wolfe Bruce. The Klumpke-Roberts Award for outstanding contributions to the public understanding and appreciation of astronomy, named for Dorothea Klumpke-Roberts. The Gordon Myers Amateur Achievement Awards (formerly the Amateur Achievement Award), in recognition of significant contributions to astronomy by one not employed in the field of astronomy in a professional capacity. The Bart Bok Award, named in honor of astronomer Bart Bok, awarded jointly with the American Astronomical Society to outstanding student projects in astronomy at the International Science and Engineering Fair. The Thomas Brennan Award for exceptional achievement related to the teaching of astronomy at the high school level. The Maria and Eric Muhlmann Award for recent significant observational results made possible by innovative advances in astronomical instrumentation, software, or observational infrastructure. The Robert J. Trumpler Award, named in honor of astronomer Robert J. Trumpler, given to a recent recipient of a Ph.D degree with a particularly notable thesis. The Richard Emmons Award is given for a lifetime of contributions to the teaching of astronomy to college non-science majors. The Las Cumbres Amateur Outreach Award, established by Wayne Rosing and Dorothy Largay, seeks to honor outstanding educational outreach by an amateur astronomer to K-12 children and the interested lay public. The Arthur B.C. Walker II Award, first announced in 2016, for research and teaching with a substantial commitment to students from underrepresented groups. Defunct awards The Donohoe Comet Medal (1890–1950). In July 1889 the ASP announced that a bronze medal for "the actual discovery of any unexpected comet" would be awarded on the basis of a $500 gift by Joseph A. Donohoe of San Francisco. The first award was made in March 1890 to W. R. Brooks for the discovery of the comet now known as 16P/Brooks. After the 250th medal had been awarded in 1950 the award was discontinued because photography had enabled the discovery of too many comets. The Comet Medal (1969–1974). In 1968 the ASP Board voted to create a new Comet Medal to be awarded once yearly to recognize "an outstanding nonprofessional astronomer" for "past contributions to the study of comets." The first award was made in 1969 to Reginald L. Waterfield. After 1974, the Comet Medal was discontinued. Affiliations The Astronomical Society of the Pacific is an affiliate of the American Association for the Advancement of Science. Recent presidents Presidents of the ASP have included such notable astronomers as Edwin Hubble, George O. Abell, and Frank Drake. George Pardee, who later became Governor of the State of California, served as president in 1899. 2019– Kelsey Johnson (University of Virginia, National Radio Astronomy Observatory) 2017–2019 Chris Ford (Pixar Animation Studios, ret.) 2015–2017 Connie Walker (National Optical Astronomy Observatory) 2013–2015 Gordon Myers (IBM, ret.) 2011–2013 William A. 
Gutsch (St Peters College, Jersey City) 2009–2011 Bruce Partridge (Haverford College) 2007–2009 James B. Kaler (U. of Illinois) 2005–2007 Dennis Schatz (Pacific Science Center, Seattle) 2003–2005 Catharine Garmany (U. of Colorado; National Optical Astronomy Observatories) 2001–2003 Alexei Filippenko (U. of California, Berkeley) 1999–2001 Frank N. Bash (U. of Texas) Past presidents 1997–1999 John R. Percy (U. of Toronto) 1995–1997 Bruce Carney (U. of North Carolina) 1993–1995 Russell Merle Genet (Fairborn Observatory) 1991–1993 Julie Lutz (Washington State U.) 1989–1991 Frank Drake (U. of California, Santa Cruz) 1987–1989 James Hesser (Dominion Astrophysical Observatory) 1985–1987 Sidney Wolff (U. of Hawaii, NOAO) 1983–1985 David Morrison (U. of Hawaii & NASA Ames) 1981–1983 Halton Arp (Hale Observatories) 1979–1981 Leonard Kuhi (U. of California, Berkeley) 1977–1979 Ann Boesgaard (U. of Hawaii) 1975–1977 Geoffrey Burbidge (U. of California, San Diego) 1973–1975 Ray Weymann (U. of Arizona) 1971–1973 Harold Weaver (U. of California, Berkeley) 1969–1971 George O. Abell (U. of California, Los Angeles) 1967–1969 Helmut Abt (Kitt Peak National Observatory) 1965–1967 Louis G. Henyey (U. of California, Berkeley) 1962–1965 Robert Petrie (Dominion Astrophysical Observatory) 1960–1962 Seth Barnes Nicholson (Mt. Wilson-Palomar Observatories; 2nd term) 1958–1959 Nicholas Mayall (Lick Observatory; 2nd term) 1956–1957 Andrew McKellar (Dominion Astrophysical Observatory) 1954–1955 Olin Chaddock Wilson (Mt. Wilson & Palomar Observatories) 1952–1953 Gerald Kron (Lick Observatory) 1951 Otto Struve (U. of California, Berkeley) 1950 Dinsmore Alter (Griffith Observatory) 1949 Robert Julius Trumpler (U. of California, Berkeley; 2nd term) 1948 Ira Sprague Bowen (Mt. Wilson Observatory) 1947 C. Donald Shane (Lick Observatory; 2nd term) 1946 Ralph Elmer Wilson (Mt. Wilson Observatory) 1945 Ferdinand Neubauer (Lick Observatory) 1944 Roscoe Frank Sanford (Mt. Wilson Observatory) 1943 Armin Otto Leuschner (University of California, Berkeley; 3rd term) 1942 Nicholas Mayall (Lick Observatory) 1941 Arthur Scott King (Mt. Wilson Observatory) 1940 C. Donald Shane (University of California, Berkeley) 1939 Alfred Harrison Joy (Mt. Wilson Observatory; 2nd term) 1938 Hamilton Jeffers (Lick Observatory) 1937 Harold D. Babcock (Mt. Wilson Observatory) 1936 Armin Otto Leuschner (U. of California, Berkeley; 2nd term) 1935 Seth Barnes Nicholson (Mt. Wilson Observatory) 1934 Sturla Einarsson (U. of California, Berkeley) 1933 Edwin Hubble (Mt. Wilson Observatory) 1932 Robert Julius Trumpler (Lick Observatory) 1931 Alfred Harrison Joy (Mt. Wilson Observatory) 1930 William Meyer (U. of California, Berkeley) 1929 Frederick Hanley Seares (Mt. Wilson Observatory) 1928 Joseph Haines Moore (Lick Observatory; 2nd term) 1927 Paul W. Merrill (Mt. Wilson Observatory) 1926 Bernard Benfield (2nd term) 1925 Bernard Benfield (San Francisco engineer) 1924 Arthur Black (San Francisco banker) 1923 Walter Sydney Adams (U. of California, Berkeley) 1922 Exum Lewis (U. of California, Berkeley) 1921 Charles Cushing (San Francisco attorney, 2nd term) 1920 Joseph Haines Moore (Lick Observatory) 1919 Beverly L. Hodghead (San Francisco, attorney) 1918 William Wallace Campbell (Lick Observatory, 3rd term) 1917 Frank Cornish (San Francisco) 1916 Sidney Dean Townley (Stanford U., 2nd term) 1915 Robert Grant Aitken (Lick Observatory; 2nd term) 1914 Russell Crawford (U. of California, Berkeley) 1913 Alexander George McAdie (U.S. 
Weather Bureau) 1912 Heber Curtis (Lick Observatory) 1912 John Galloway (Berkeley) 1911 Fremont Morse (U.S. Coast and Geodetic Survey) 1910 William Wallace Campbell (Lick Observatory, 2nd term) 1909 Charles Burckhalter (Chabot Observatory, 2nd term) 1908 Charles Cushing (San Francisco attorney) 1907 Armin Otto Leuschner (U. of California, Berkeley) 1906 Sidney Dean Townley (Latitude Observatory, Ukiah) 1905 George Edwards (University of California, Berkeley) 1904 Otto von Geldern (San Francisco) 1903 Charles Dillon Perrine (Lick Observatory) 1902 John Dolbeer (San Francisco lumber businessman) 1901 James Edward Keeler (Lick Observatory) 1900 George Pardee (eye doctor and Governor of California) 1899 Robert Grant Aitken (Lick Observatory) 1898 William Alvord (San Francisco merchant/banker; Bank of California president; 14th mayor of San Francisco) 1897 William Hussey (Lick Observatory) and William Alvord (San Francisco bank president) 1896 Charles Burckhalter (Chabot Observatory; Co-founder) 1895 William Wallace Campbell (Lick Observatory) 1894 Eusebius Molera (San Francisco civil engineer) 1893 John Martin Schaeberle (Lick Observatory) 1892 William Montgomery Pierson (San Francisco attorney; drew up Society's articles of incorporation) 1889–1891 Edward Singleton Holden (Lick Observatory; Founder) See also List of astronomical societies References Sources History of the Astronomical Society of the Pacific Research resources Astronomical Society of the Pacific records, 1909–1969, The Bancroft Library External links ASP Conference Series website PASP website Astronomical Society of the Pacific 1889 establishments in California Scientific organizations established in 1889 Pacific, Astronomical Society of the Organizations based in the San Francisco Bay Area Astronomy education
Astronomical Society of the Pacific
[ "Astronomy" ]
3,182
[ "Astronomical Society of the Pacific", "Astronomy education", "Astronomy societies", "Astronomy organizations" ]
630,936
https://en.wikipedia.org/wiki/Library%20%28biology%29
In molecular biology, a library is a collection of genetic material fragments that are stored and propagated in a population of microbes through the process of molecular cloning. There are different types of DNA libraries, including cDNA libraries (formed from reverse-transcribed RNA), genomic libraries (formed from genomic DNA) and randomized mutant libraries (formed by de novo gene synthesis where alternative nucleotides or codons are incorporated). DNA library technology is a mainstay of current molecular biology, genetic engineering, and protein engineering, and the applications of these libraries depend on the source of the original DNA fragments. There are differences in the cloning vectors and techniques used in library preparation, but in general each DNA fragment is uniquely inserted into a cloning vector and the pool of recombinant DNA molecules is then transferred into a population of bacteria (a Bacterial Artificial Chromosome or BAC library) or yeast such that each organism contains on average one construct (vector + insert). As the population of organisms is grown in culture, the DNA molecules contained within them are copied and propagated (thus, "cloned"). Terminology The term "library" can refer to a population of organisms, each of which carries a DNA molecule inserted into a cloning vector, or alternatively to the collection of all of the cloned vector molecules. cDNA libraries A cDNA library represents a sample of the mRNA purified from a particular source (either a collection of cells, a particular tissue, or an entire organism), which has been converted back to a DNA template by the use of the enzyme reverse transcriptase. It thus represents the genes that were being actively transcribed in that particular source under the physiological, developmental, or environmental conditions that existed when the mRNA was purified. cDNA libraries can be generated using techniques that promote "full-length" clones or under conditions that generate shorter fragments used for the identification of "expressed sequence tags". cDNA libraries are useful in reverse genetics, but they only represent a very small (less than 1%) portion of the overall genome in a given organism. Applications of cDNA libraries include: Discovery of novel genes Cloning of full-length cDNA molecules for in vitro study of gene function Study of the repertoire of mRNAs expressed in different cells or tissues Study of alternative splicing in different cells or tissues Genomic libraries A genomic library is a set of clones that together represents the entire genome of a given organism. The number of clones that constitute a genomic library depends on (1) the size of the genome in question and (2) the insert size tolerated by the particular cloning vector system. For most practical purposes, the tissue source of the genomic DNA is unimportant because each cell of the body contains virtually identical DNA (with some exceptions). Applications of genomic libraries include: Determining the complete genome sequence of a given organism (see genome project) Serving as a source of genomic sequence for generation of transgenic animals through genetic engineering Study of the function of regulatory sequences in vitro Study of genetic mutations in cancer tissues Synthetic mutant libraries In contrast to the library types described above, a variety of artificial methods exist for making libraries of variant genes. 
Variation throughout the gene can be introduced randomly by either error-prone PCR, DNA shuffling to recombine parts of similar genes together, or transposon-based methods to introduce indels. Alternatively, mutations can be targeted to specific codons during de novo synthesis or saturation mutagenesis to construct one or more point mutants of a gene in a controlled way. This results in a mixture of double stranded DNA molecules which represent variants of the original gene. The expressed proteins from these libraries can then be screened for variants which exhibit favorable properties (e.g. stability, binding affinity or enzyme activity). This can be repeated in cycles of creating gene variants and screening the expression products in a directed evolution process. Overview of cDNA library preparation techniques DNA extraction If creating an mRNA library (i.e. with cDNA clones), there are several possible protocols for isolating full length mRNA. To extract DNA for genomic DNA (also known as gDNA) libraries, a DNA mini-prep may be useful. Insert preparation cDNA libraries require care to ensure that full length clones of mRNA are captured as cDNA (which will later be inserted into vectors). Several protocols have been designed to optimise the synthesis of the 1st cDNA strand and the 2nd cDNA strand for this reason, and also to make directional cloning into the vector more likely. gDNA fragments are generated from the extracted gDNA by using non-specific frequent cutter restriction enzymes. Vectors The nucleotide sequences of interest are preserved as inserts to a plasmid or the genome of a bacteriophage that has been used to infect bacterial cells. Vectors are propagated most commonly in bacterial cells, but if using a YAC (Yeast Artificial Chromosome) then yeast cells may be used. Vectors could also be propagated in viruses, but this can be time-consuming and tedious. However, the high transfection efficiency achieved by using viruses (often phages) makes them useful for packaging the vector (with the ligated insert) and then introducing them into the bacterial (or yeast) cell. Additionally, for cDNA libraries, a system using the Lambda Zap II phage, ExAssist, and 2 E. coli species has been developed. A Cre-Lox system using loxP sites and the in vivo expression of the recombinase enzyme can also be used instead. These are examples of in vivo excision systems. In vitro excision involves subcloning often using traditional restriction enzymes and cloning strategies. In vitro excision can be more time-consuming and may require more "hands-on" work than in vivo excision systems. In either case, the systems allow the movement of the vector from the phage into a live cell, where the vector can replicate and propagate until the library is to be used. Using libraries This involves "screening" for the sequences of interest. There are multiple possible methods to achieve this. References External links Molecular biology Genetic engineering
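As a small numerical companion to the genomic-library sizing point above (how genome size and insert size determine how many clones a library needs), here is a sketch using the standard Clarke–Carbon estimate. The formula is a well-known textbook result rather than something stated in the article, and the genome size, insert length and probability used in the example are illustrative values only.

import math

def clones_needed(genome_size_bp, insert_size_bp, probability=0.99):
    # Clarke-Carbon estimate of the number of independent clones N needed
    # so that any given sequence is represented with the chosen probability:
    # N = ln(1 - P) / ln(1 - f), where f is the fraction of the genome
    # covered by a single insert.
    f = insert_size_bp / genome_size_bp
    return math.ceil(math.log(1 - probability) / math.log(1 - f))

# Illustrative example: a 3.0e9 bp genome, 150 kb BAC-sized inserts,
# and a 99% probability that any given locus is represented.
print(clones_needed(3.0e9, 150_000, 0.99))   # about 92,000 clones

The same calculation applies to cDNA libraries only loosely, since transcript abundance is highly uneven; for genomic libraries, where each region is roughly equally represented, it gives a reasonable first-pass library size.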
Library (biology)
[ "Chemistry", "Engineering", "Biology" ]
1,267
[ "Biochemistry", "Biological engineering", "Genetic engineering", "Molecular biology" ]
630,966
https://en.wikipedia.org/wiki/Heckler
A heckler is a person who harasses and tries to disconcert others with questions, challenges, or gibes. Hecklers are often known to shout discouraging comments at a performance or event, or to interrupt set-piece speeches, with the intent of disturbing performers or participants. Origin Although the word heckler, which originated from the textile trade, was first attested in the mid-15th century, its use as "person who harasses" is from 1885. To heckle was to tease or comb out flax or hemp fibres. The additional meaning, to interrupt speakers with awkward or embarrassing questions, was first used in Scotland, and specifically in early 19th century Dundee, a town where the hecklers who combed the flax had established a reputation as the most belligerent element in the workforce. In the heckling factory, one heckler would read out the day's news while the others worked, to the accompaniment of interruptions and furious debate. Heckling was a major part of the vaudeville theater. Sometimes it was incorporated into the play. Milton Berle's weekly TV variety series in the 1960s featured a heckler named Sidney Spritzer (German/Yiddish for 'squirter') played by Borscht Belt comic Irving Benson. In the 1970s and 1980s, The Muppet Show, which was also built around a vaudeville theme, featured two hecklers, Statler and Waldorf (two old men named after famous hotels). Heckles are now particularly likely to be heard at comedy performances, to unsettle or compete with the performer. Politics Politicians speaking before live audiences have less latitude to deal with hecklers. In the early 1930s, before becoming Premier of Ontario, Mitchell Hepburn stood on top of a manure spreader, apologizing to the crowd for speaking from a Tory platform, at which someone in the crowd shouted, "Well, wind 'er up Mitch, she's never carried a bigger load!" Legally, such conduct may constitute protected free speech. Strategically, coarse or belittling retorts to hecklers entail personal risk disproportionate to any gain. Some politicians, however, have been known to improvise a relevant and witty response despite these pitfalls. One acknowledged expert at this was Harold Wilson, British Prime Minister in the 1960s: Martin Luther King Jr.'s 1963 "I Have a Dream" speech was largely a response to supporter Mahalia Jackson interrupting his prepared speech to shout "Tell them about the dream, Martin". At that point, King stopped reading from his previously prepared speech and improvised the remainder of the speech—this improvised portion of the speech is the best-known part of the speech and frequently rated as one of the best of all time. During a campaign stop just before winning the Presidency in 1980, Ronald Reagan was heckled by an audience member who kept interrupting him during a speech. Reagan tried to go on with his speech three times, but after being interrupted yet again glared at the heckler and snapped "Aw, shut up!" The audience immediately gave him a standing ovation. In 1992, then-Presidential candidate Bill Clinton was interrupted by Bob Rafsky, a member of the AIDS activism group ACT UP, who accused him of "dying of ambition to be president" during a rally. After becoming visibly agitated, Clinton took the microphone off the stand, pointed to the heckler and directly responded to him by saying, "[...] I have treated you and all of the other people who have interrupted my rallies with a hell of a lot more respect than you treated me. And it's time to start thinking about that!" 
Clinton was then met with raucous applause. On 9 September 2009, Representative Joe Wilson (R-SC) shouted "You lie!" at President Barack Obama after President Obama stated that his health care plan would not subsidize coverage for undocumented noncitizens during a speech he was making to a joint session of Congress. Wilson later apologized for his outburst. On 25 November 2013, Ju Hong, a 24-year-old South Korean immigrant without legal documentation, shouted at Obama to use his executive power to stop deportation of unauthorized noncitizens. Obama said "If, in fact, I could solve all these problems without passing laws in Congress, then I would do so." "But we're also a nation of laws, that's part of our tradition," he continued. "And so the easy way out is to try to yell and pretend like I can do something by violating our laws. And what I'm proposing is the harder path, which is to use our democratic processes to achieve the same goal." Audience control One modern political approach to discourage heckling is to ensure that major events are given before a "tame" audience of sympathizers, or conducted to allow restrictions on who may remain on the premises (see also, astroturfing). The downside is this may make heckling incidents even more newsworthy. This happened to Tony Blair during a photo op visit to a hospital during the 2001 general election campaign, and again in 2003 during a speech. In 2004, American Vice President Dick Cheney was interrupted mid-speech by Perry Patterson, a middle-aged mother in a pre-screened rally audience. After various supportive outbursts that were permitted ("Four more years", "Go Bush!"), Patterson uttered "No, no, no, no" and was removed from the speech area and told to leave. She refused, and was arrested for criminal trespass. Later, in 2005, Cheney received some heckling that was broadcast during his trip to New Orleans, after Hurricane Katrina ravaged the city. The heckling occurred during a press conference in Gulfport, Mississippi, in an area that was cordoned off for public safety reasons, and then further secured for the press conference. Nevertheless, emergency room physician Ben Marble got close enough to the proceedings and could be heard yelling, "Go fuck yourself, Mr. Cheney". Cheney laughed it off and continued speaking. The heckle was a reference to Cheney's use of the phrase the previous year, when during a heated exchange with Senator Patrick Joseph Leahy, Vermont, he said "fuck yourself" on the floor of the senate. On 15 October 2005, The Scotsman reported "Iranian ambassador Dr Seyed Mohammed Hossein Adeli... speaking at the annual Campaign for Nuclear Disarmament conference... During his speech to the CND several people were told to leave the room following protests at Iran's human rights record. Several protesters shouted "Fascists" at the ambassador and the organisers of the conference. Walter Wolfgang, the 82-year-old peace campaigner who was forced out of the Labour Party conference last month, was in the audience." On Thursday, 20 April 2006, a heckler from the Falun Gong spiritual movement entered the US White House grounds as a reporter and interrupted a formal arrival ceremony for Chinese president Hu Jintao. Moments into Hu's speech at the event, Wang Wenyi, perched on the top tier of the stands reserved for the press, began screaming in English and Chinese: "President Bush stop him. Stop this visit. Stop the killing and torture." President Bush later apologised to his guest. 
Medea Benjamin of Code Pink repeatedly interrupted a major speech by President Barack Obama regarding United States policy in the War on Terror at the National Defense University on 23 May 2013. Sport Hecklers can also appear at sporting events, and usually (but not always) direct their taunts at a visiting team. Fans of the Philadelphia Eagles American football team are notorious for heckling; among the most infamous incidents were booing and subsequently throwing snowballs at a performer dressed as Santa Claus in a halftime show in 1968, and cheering at the career-ending injury of visiting team player Michael Irvin in 1999. Often, sports heckling will also involve throwing objects onto the field; this has led most sports stadiums to ban glass containers and bottlecaps. Another famous heckler is Robert Szasz, who regularly attends Tampa Bay Rays baseball games and is known for loudly heckling one opposing player per game or series. Former Yugoslav football star Dejan Savićević was involved in an infamous incident with a heckler in which during an interview, a man on the street was heard shouting off-camera: "You're a piece of shit!". Savićević berated the man, and went on to finish the interview, without missing a beat. In English and Scottish football, heckling and swearing from the stands, and football chants such as who ate all the pies? are common. Australian sporting audiences are known for creative heckling. Perhaps the most famous is Yabba who had a grandstand at the Sydney Cricket Ground named after him, and now a statue. The sport of cricket is particularly notorious for heckling between the teams themselves, which is known as sledging. At the NBA Drafts of recent years, many fans have gone with heckling ESPN NBA analyst Stephen A. Smith. Most notably, the Stephen A. Smith Heckling Society of Gentlemen heckles him with a sock puppet dubbed as Stephen A. himself. Tennis fans are also fairly noted for heckling. Some may call out during a service point to distract either player. Another common heckle from tennis fans is cheering after a service fault, which is considered to be rude and unsporting. In 2009, then Toronto Blue Jays outfielder Alex Ríos was a victim of a heckling incident outside after a fund-raising event. The incident occurred after Ríos declined to sign an autograph for a young fan, the same day he went 0 for 5 with 5 strikeouts in a game against the Los Angeles Angels. An older man yelled "The way you played today Alex, you should be lucky someone wants your autograph." Ríos then replied with "Who gives a fuck", repeating it until being ushered into a vehicle. Ríos did apologize the next day, but was eventually placed on waivers and claimed by the Chicago White Sox later that year. Music The heckling of Bob Dylan at the Manchester Free Trade Hall in 1966 is one of the most famous examples in music history. During a quiet moment in between songs, an audience member shouts very loudly and clearly, "Judas!" referencing Dylan's so-called betrayal of folk music by "going electric". Dylan replied: "I don't believe you, you're a liar!" before telling his band to "Play it fucking loud!" They played an acidic version of "Like a Rolling Stone". This incident was captured on tape and the full concert was released as volume four of Dylan's Live Bootleg Series. Stand-up comedy In stand-up comedy, a heckler is what separates the medium from theatre; at any time during the show (either indirectly or directly), a heckler may interrupt a comedian's set. 
Hecklers want the stand-up to break the fourth wall. Most sources claim that heckling is uncommon. Heckling is more likely to occur at open stage performances and performances where alcoholic beverages are being consumed; it is regarded as a sign of audience members becoming impatient with what they regard as a low-quality performance. New comics are often underprepared to handle hecklers properly. In addition, live comedy venues tend to discourage heckling via signage and admissions policy, but tend to tolerate it as it creates customer loyalty. The etiquette of exactly how much heckling is tolerated differs immensely from venue to venue, but heckling is generally more likely to be tolerated in blue-collar or working-class venues. Comedians generally dislike heckling. Hecklers may rarely threaten or physically assault comedians. Even more rarely, comedians may receive death threats. Countering Comedians counter hecklers by controlling the flow of conversation. A comedian cannot completely ignore a heckler without undermining the performance. Comedians devise a strategy for quashing such outbursts, usually by having a repertoire of comebacks for hecklers—known as savers, heckler lines, squelchers, or squelches—on hand; those who handle the moment in an off-the-cuff manner do so by giving the heckler "enough rope to hang themselves". Stewart Lee treats heckles as genuine inquiries. Jerry Seinfeld acts as a "Heckle Therapist", verbally sympathizing with the heckler to confuse the heckler and win the audience over. Some comedians will get hecklers to repeat themselves to take away the momentum and laughter from the heckle. Phyllis Diller would have her light technician shine a spotlight on hecklers to make them feel intimidated. Magician Teller established his persona of never speaking while performing to minimize heckling. Controversies Bill Burr's "Philadelphia Incident" took place at a show in Camden, New Jersey, where he reprimanded an audience of over ten thousand people. Michael Richards became upset with hecklers and called them the N-word several times. When a female audience member claimed that rape jokes are never funny, Daniel Tosh allegedly made an off-the-cuff retort that it would be funny if she were to be immediately raped. Movies and musicals Screenings of The Rocky Horror Picture Show (as well as performances of The Rocky Horror Show) are continuously heckled by the audience by use of so-called callbacks or call backs. The funny and bold comments are sometimes 'traditional', that is, developed over decades, including downloadable 'audience participation' heckling scripts. Many callbacks are spontaneous, regional and develop into heckling fights amongst the audience members or with either professional musical actors or audience members who perform the show in front of the screen. Other comedy mediums The comedy TV series The Muppet Show featured a pair of hecklers named Statler and Waldorf. These characters created a kind of meta-comedy act in which the show's official comedian, Fozzie Bear, acted as their usual foil, although they occasionally made jokes at other characters as well. Another notable use of heckling in comedy is in the cult favorite series Mystery Science Theater 3000. The series involves a man (either Joel Robinson, Mike Nelson or Jonah Heston) and two robots (Tom Servo and Crow T. Robot) sitting in a theater mocking bad B-movies. This style of comedy, coined as riffing, is continued with commentary-based series such as RiffTrax and Cinematic Titanic. 
In one of Rowan Atkinson's plays, The School Master, a heckler interrupted the performance by shouting "Here!" after Atkinson had read out an amusing name on his register. Atkinson incorporated it into his act by saying "I have a detention book..." See also Applause Audience participation Booing Internet troll Mystery Science Theater 3000, a TV show built on humorous heckling References Human communication Comedy Show business terms Stand-up comedy
Heckler
[ "Biology" ]
3,043
[ "Human communication", "Behavior", "Human behavior" ]
630,977
https://en.wikipedia.org/wiki/Portland%20Head%20Light
Portland Head Light is a historic lighthouse in Cape Elizabeth, Maine. The light station sits on a headland at the entrance of the primary shipping channel into Portland Harbor, which is within Casco Bay in the Gulf of Maine. Completed in 1791, it is the oldest lighthouse in Maine. The light station is automated, and the tower, beacon, and foghorn are maintained by the United States Coast Guard, while the former lighthouse keeper's house is a maritime museum within Fort Williams Park. History Construction began in 1787 at the directive of George Washington and was completed on January 10, 1791, using a fund of $1,500, established by him. Whale oil lamps were originally used for illumination. In 1855, following the formation of the Lighthouse Board, a fourth-order Fresnel lens was installed; that lens was replaced by a second-order Fresnel lens, which was replaced later by an aerobeacon in 1958. That lens was replaced with a DCB-224 aerobeacon in 1991. The DCB-224 aerobeacon is still in use. In 1787, while Maine was still part of the state of Massachusetts, George Washington engaged two masons from the town of Falmouth (modern-day Portland), Jonathan Bryant and John Nichols, and instructed them to take charge of the construction of a lighthouse on Portland Head. Washington reminded them that the early government was poor, and said that the materials used to build the lighthouse should be taken from the fields and shores, materials which could be handled nicely when hauled by oxen on a drag. The original plans called for the tower to be 58 feet tall. When the masons completed this task, they climbed to the top of the tower and realized that it would not be visible beyond the headlands to the south, so it was raised another 20 feet. The tower was built of rubblestone, and Washington gave the masons four years to build it. While it was under construction in 1789, the federal government was being formed, and for a while, it looked as though the lighthouse would not be finished. Following passage of their ninth law, the first congress made an appropriation and authorized the Secretary of the Treasury, Alexander Hamilton, to inform the mechanics that they could go on with the completion of the tower. On August 10, 1790, the second session of Congress appropriated a sum not to exceed $1500, and under the direction of the President, "to cause the said lighthouse to be finished and completed accordingly." The tower was completed in 1790 and first lit on January 10, 1791. During the American Civil War, raids on shipping in and out of Portland Harbor became commonplace, and because of the necessity for ships at sea to sight Portland Head Light as soon as possible, the tower was raised 20 more feet. The current keepers' house was built in 1891. When Halfway Rock Light was built, Portland Head Light was considered less important, and in 1883, the tower was shortened and a weaker fourth-order Fresnel lens was added. Following the mariners' complaints, the former height and second-order Fresnel lens were restored in 1885. The station has changed little except for rebuilding the whistle house in 1975 due to its having been badly damaged in a storm. Today, Portland Head Light stands above ground and above water, its white conical tower is connected to a dwelling. The grounds and keeper's house are owned by the town of Cape Elizabeth, while the beacon and fog signal is owned and maintained by the U.S. Coast Guard as a current aid to navigation. 
It was added to the National Register of Historic Places as Portland Head light (sic) on April 24, 1973, reference number 73000121. The lighthouse was designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 2002. In art and popular culture Edward Hopper painted the lighthouse in 1927. His watercolor resides at Boston's Museum of Fine Arts. A snowy Portland Head Light was featured in the 1999 drama Snow Falling on Cedars, which was filmed during the Ice storm of 1998. The lighthouse was featured in the fifth, sixth, and seventh seasons of Agents of S.H.I.E.L.D.. Described as being on the shores of Lake Ontario in the show, the building housed an underground facility used by S.H.I.E.L.D. as their covert base of operations. It was also featured on a postcard in the opening credits of National Lampoon's Vacation. See also Annie C. Maguire shipwreck Port of Portland National Register of Historic Places listings in Cumberland County, Maine References External links Portland Head Light - official site Portland Head Light - United States Lighthouses Lighthouses on the National Register of Historic Places in Maine Lighthouses completed in 1791 Houses completed in 1791 Towers completed in 1791 Lighthouse museums in Maine Museums in Cumberland County, Maine Historic Civil Engineering Landmarks Cape Elizabeth, Maine Lighthouses in Cumberland County, Maine National Register of Historic Places in Cumberland County, Maine 1791 establishments in Maine
Portland Head Light
[ "Engineering" ]
1,020
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
5,700,197
https://en.wikipedia.org/wiki/Strontium-90
Strontium-90 is a radioactive isotope of strontium produced by nuclear fission, with a half-life of 28.8 years. It undergoes β− decay into yttrium-90, with a decay energy of 0.546 MeV. Strontium-90 has applications in medicine and industry and is an isotope of concern in fallout from nuclear weapons, nuclear weapons testing, and nuclear accidents. Radioactivity Naturally occurring strontium is nonradioactive and nontoxic at levels normally found in the environment, but 90Sr is a radiation hazard. 90Sr undergoes β− decay with a half-life of 28.79 years and a decay energy of 0.546 MeV distributed to an electron, an antineutrino, and the yttrium isotope 90Y, which in turn undergoes β− decay with a half-life of 64 hours and a decay energy of 2.28 MeV distributed to an electron, an antineutrino, and 90Zr (zirconium), which is stable. Note that 90Sr/Y is almost a pure beta particle source; the gamma photon emission from the decay of 90Y is so infrequent that it can normally be ignored. 90Sr has a specific activity of 5.21 TBq/g. Fission product 90Sr is a product of nuclear fission. It is present in significant amounts in spent nuclear fuel, in radioactive waste from nuclear reactors and in nuclear fallout from nuclear tests. For thermal neutron fission as in today's nuclear power plants, the fission product yield from uranium-235 is 5.7%, from uranium-233 6.6%, but from plutonium-239 only 2.0%. Nuclear waste Strontium-90 is classified as high-level waste. Its 29-year half-life means that it can take hundreds of years to decay to negligible levels. Exposure from contaminated water and food may increase the risk of leukemia and bone cancer. Reportedly, thousands of capsules of radioactive strontium containing millions of curies are stored at Hanford Site's Waste Encapsulation and Storage Facility. Remediation Algae have shown selectivity for strontium in studies, whereas most plants used in bioremediation have not shown selectivity between calcium and strontium, often becoming saturated with calcium, which is greater in quantity and also present in nuclear waste. Researchers have looked at the bioaccumulation of strontium by Scenedesmus spinosus (algae) in simulated wastewater. The study claims a highly selective biosorption capacity for strontium of S. spinosus, suggesting that it may be appropriate for treating nuclear wastewater. A study of the pond alga Closterium moniliferum using stable strontium found that varying the ratio of barium to strontium in water improved strontium selectivity. Biological effects Biological activity Strontium-90 is a "bone seeker" that exhibits biochemical behavior similar to calcium, the next lighter group 2 element. After entering the organism, most often by ingestion with contaminated food or water, about 70–80% of the dose gets excreted. Virtually all of the strontium-90 that is retained is deposited in bones and bone marrow, with about 1% remaining in blood and soft tissues. Its presence in bones can cause bone cancer, cancer of nearby tissues, and leukemia. Exposure to 90Sr can be tested by a bioassay, most commonly by urinalysis. The biological half-life of strontium-90 in humans has variously been reported as from 14 to 600 days, 1000 days, 18 years, 30 years and, at an upper limit, 49 years. The wide-ranging published biological half-life figures are explained by strontium's complex metabolism within the body. However, by averaging all excretion paths, the overall biological half-life is estimated to be about 18 years. 
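The 28.8-year physical half-life quoted under Radioactivity and the roughly 18-year average biological half-life estimated above act in parallel on strontium-90 retained in the body, and combine reciprocally into a shorter effective half-life. The short calculation below is only an illustrative sketch tying together round figures quoted in this article; it is not a dosimetry model, and the helper names are invented for the example.

    PHYSICAL_HALF_LIFE_Y = 28.8     # radioactive decay (years)
    BIOLOGICAL_HALF_LIFE_Y = 18.0   # averaged elimination from the body (years)

    def effective_half_life(t_phys, t_bio):
        # Decay and excretion proceed in parallel, so their rate constants add.
        return 1.0 / (1.0 / t_phys + 1.0 / t_bio)

    def fraction_remaining(half_life, years):
        # Fraction of an initial amount left after the given time (single exponential).
        return 0.5 ** (years / half_life)

    print(f"Effective half-life in the body: "
          f"{effective_half_life(PHYSICAL_HALF_LIFE_Y, BIOLOGICAL_HALF_LIFE_Y):.1f} years")   # ~11 years

    print(f"Left in the environment after 100 years: {fraction_remaining(28.8, 100):.1%}")    # ~9%
    print(f"Left in the environment after 300 years: {fraction_remaining(28.8, 300):.2%}")    # ~0.07%

The last two figures simply restate the point made under Nuclear waste: with a half-life just under three decades, a given inventory of strontium-90 takes on the order of hundreds of years to decay to negligible levels.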
The elimination rate of strontium-90 is strongly affected by age and sex, due to differences in bone metabolism. Together with the caesium isotopes 134Cs and 137Cs, and the iodine isotope 131I, it was among the most important isotopes regarding health impacts after the Chernobyl disaster. As strontium has an affinity to the calcium-sensing receptor of parathyroid cells that is similar to that of calcium, the increased risk of liquidators of the Chernobyl power plant to suffer from primary hyperparathyroidism could be explained by binding of strontium-90. Uses Radioisotope thermoelectric generators (RTGs) The radioactive decay of strontium-90 generates a significant amount of heat, 0.95 W/g in the form of pure strontium metal or approximately 0.460 W/g as strontium titanate and is cheaper than the alternative 238Pu. It is used as a heat source in many Russian/Soviet radioisotope thermoelectric generators, usually in the form of strontium titanate. It was also used in the US "Sentinel" series of RTGs. Startup company Zeno Power is developing RTGs that use strontium-90 from the DOD, and is aiming to ship product by 2026. Industrial applications 90Sr finds use in industry as a radioactive source for thickness gauges. Medical applications 90Sr finds extensive use in medicine as a radioactive source for superficial radiotherapy of some cancers. Controlled amounts of 90Sr and 89Sr can be used in treatment of bone cancer, and to treat coronary restenosis via vascular brachytherapy. It is also used as a radioactive tracer in medicine and agriculture. Aerospace applications 90Sr is used as a blade inspection method in some helicopters with hollow blade spars to indicate if a crack has formed. Radiological warfare In April 1943, Enrico Fermi suggested to Robert Oppenheimer the possibility of using the radioactive byproducts from enrichment to contaminate the German food supply. The background was fear that the German atomic bomb project was already at an advanced stage, and Fermi was also skeptical at the time that an atomic bomb could be developed quickly enough. Oppenheimer discussed the proposal with Edward Teller, who suggested the use of strontium-90. James Bryant Conant and Leslie R. Groves were also briefed, but Oppenheimer wanted to proceed with the plan only if enough food could be contaminated with the weapon to kill half a million people. 90Sr contamination in the environment Strontium-90 is not quite as likely as caesium-137 to be released as a part of a nuclear reactor accident because it is much less volatile, but is probably the most dangerous component of the radioactive fallout from a nuclear weapon. A study of hundreds of thousands of deciduous teeth, collected by Dr. Louise Reiss and her colleagues as part of the Baby Tooth Survey, found a large increase in 90Sr levels through the 1950s and early 1960s. The study's final results showed that children born in St. Louis, Missouri, in 1963 had levels of 90Sr in their deciduous teeth that was 50 times higher than that found in children born in 1950, before the advent of large-scale atomic testing. Reviewers of the study predicted that the fallout would cause increased incidence of disease in those who absorbed strontium-90 into their bones. However, no follow up studies of the subjects have been performed, so the claim is untested. An article with the study's initial findings was circulated to U.S. President John F. 
Kennedy in 1961, and helped convince him to sign the Partial Nuclear Test Ban Treaty with the United Kingdom and Soviet Union, ending the above-ground nuclear weapons testing that placed the greatest amounts of nuclear fallout into the atmosphere. The Chernobyl disaster released roughly 10 PBq, or about 5% of the core inventory, of strontium-90 into the environment. The Kyshtym disaster released strontium-90 and other radioactive material into the environment. It is estimated to have released 20 MCi (800 PBq) of radioactivity. Between the accident and 2013, the Fukushima Daiichi disaster released 0.1 to 1 PBq of strontium-90 into the Pacific Ocean in the form of contaminated cooling water. See also Gray and Sievert Lia radiological accident Radiation poisoning References External links Fission products Strontium-090 Radioisotope fuels Radioactive contamination
Strontium-90
[ "Chemistry", "Technology" ]
1,733
[ "Nuclear fission", "Radioactive contamination", "Isotopes", "Fission products", "Isotopes of strontium", "Nuclear fallout", "Environmental impact of nuclear power" ]
5,700,626
https://en.wikipedia.org/wiki/Malignant%20narcissism
Malignant narcissism is a psychological syndrome comprising a mix of narcissism, antisocial behavior, sadism, and a paranoid outlook on life. Malignant narcissism is not a diagnostic category defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR). Rather, it is a subcategory of narcissistic personality disorder (NPD) which could also include traits of antisocial personality disorder or paranoid personality disorder. Malignant narcissists are grandiose and always ready to raise hostility levels, which undermines the families and organizations in which they are involved, and dehumanizes the people with whom they associate. History Early uses of the term The social psychologist Erich Fromm first coined the term "malignant narcissism" in 1964. He characterized the condition as a solipsistic form of narcissism, in which the individual takes pride in their own inherent traits rather than their achievements, and thus does not require a connection to other people or to reality. Edith Weigert (1967) saw malignant narcissism as a "regressive escape from frustration by distortion and denial of reality", while Herbert Rosenfeld (1971) described it as "a disturbing form of narcissistic personality where grandiosity is built around aggression and the destructive aspects of the self become idealized." Psychoanalyst George H. Pollock wrote in 1978: "The malignant narcissist is presented as pathologically grandiose, lacking in conscience and behavioral regulation with characteristic demonstrations of joyful cruelty and sadism". In 1983, M. Scott Peck used malignant narcissism as a way to explain evil. Proposal as a diagnosis Psychoanalyst Otto Kernberg first proposed malignant narcissism as a psychiatric diagnosis in 1984. He described malignant narcissism as a syndrome characterized by a narcissistic personality disorder (NPD), antisocial features, paranoid traits, and egosyntonic aggression. Other symptoms may include an absence of conscience, a psychological need for power, and a sense of importance (grandiosity). Kernberg believed that malignant narcissism should be considered part of a spectrum of severity of pathological narcissism. At the high end was "antisocial character" (as defined by psychiatrist Hervey M. Cleckley, now referred to as psychopathy or antisocial personality), through malignant narcissism, to narcissistic personality disorder at the low end. He distinguished malignant narcissism from psychopathy because of the malignant narcissist's capacity to internalize "both aggressive and idealized superego precursors, leading to the idealization of the aggressive, sadistic features of the pathological grandiose self". According to Kernberg, while the psychopath's paranoid stance against external influences makes him or her unwilling to internalize even the values of the "aggressor", malignant narcissists "have the capacity to admire powerful people, and can depend on sadistic and powerful but reliable parental images". Malignant narcissists, in contrast to psychopaths, are also said to be capable of developing "some identification with other powerful idealized figures as part of a cohesive 'gang'...which permits at least some loyalty and good object relations to be internalized... Some of them may present rationalized antisocial behavior – for example, as leaders of sadistic gangs or terrorist groups...with the capacity for loyalty to their own comrades." These definitions built upon his previous work. 
Kernberg had presented a paper on Factors in the psychoanalytic treatment of narcissistic personalities, from the work of the Psychotherapy Research Project of The Menninger Foundation on 11 May 1968 at the 55th Annual Meeting of the American Psychoanalytic Association. Kernberg's paper was first published on 1 January 1970 in the Journal of the American Psychoanalytic Association (JAPA). In his article, "malignant narcissism" and psychopathy are employed interchangeably. The word "malignant" does not appear once, while "pathological" or "pathologically" appears 25 times. Developing these ideas further, Kernberg pointed out that the antisocial personality was fundamentally narcissistic and without morality. Malignant narcissism includes a sadistic element creating, in essence, a sadistic psychopath. To date, malignant narcissism has not been accepted in any of the medical manuals, such as the International Classification of Diseases (ICD) or the Diagnostic and Statistical Manual of Mental Disorders (DSM). Relation to other concepts and diagnoses Sadism and cruelty Psychologist Keith Campbell has defined malignant narcissism specifically as the rare but dangerous combination of narcissism and sadism. Malignant narcissism is highlighted as a key area in the study of mass murder, sexual sadism, and serial murder. Narcissism The primary difference between narcissism and malignant narcissism is that malignant narcissism includes comorbid features of other personality disorders and thus consists of a broader range of symptoms than pathological narcissism (NPD). In the term "malignant narcissism", the word "malignant" is used in the sense described by the Merriam-Webster Dictionary as "passionately and relentlessly malevolent, aggressively malicious". In malignant narcissism, NPD is accompanied by additional symptoms of antisocial, paranoid and sadistic personality disorders. While a person with NPD will deliberately damage other people in pursuit of their own selfish desires, they may regret and will in some circumstances show remorse for doing so. Because traits of antisocial personality disorder are present in malignant narcissism, the "malignant narcissist" has a more pervasive lack of empathy than someone with NPD alone and will lack feelings of guilt or remorse for the damage they cause. Since sadism is often considered a feature of malignant narcissism, an individual with the syndrome may not only lack feelings of guilt or remorse for hurting others but may even derive pleasure from the gratuitous infliction of mental or physical pain on others. These traits were formerly codified in the DSM-III under sadistic personality disorder (SPD). Psychopathy The terms malignant narcissist and psychopath are sometimes used interchangeably because there is little to clinically separate the two. Individuals who have narcissistic personality disorder, malignant narcissism, and psychopathy all exhibit similar symptoms, as detailed in the Hare Psychopathy Checklist. The test consists of 20 items that are scored on a three-point scale, with a score of 0 indicating that it does not apply at all, 1 indicating a partial match or mixed information, and 2 indicating a reasonably good match. The cut-off for the label of psychopathy in the United States is 30 and in the United Kingdom is 25 out of a possible score of 40. High scores are associated with impulsivity and aggression, Machiavellianism, and persistent criminal behavior, but not with empathy and affiliation. 
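The checklist scoring just described is plain arithmetic and can be sketched in a few lines. The fragment below is purely illustrative bookkeeping — the item ratings are invented, and nothing here is a clinical instrument.

    PCL_R_ITEMS = 20                    # items, each rated 0, 1 or 2 (maximum total of 40)
    CUTOFFS = {"US": 30, "UK": 25}      # labelling thresholds quoted above

    def pcl_r_total(ratings):
        # Sum the 20 item ratings after checking each is 0 (no match), 1 (partial) or 2 (good match).
        if len(ratings) != PCL_R_ITEMS or any(r not in (0, 1, 2) for r in ratings):
            raise ValueError("expected 20 ratings, each 0, 1 or 2")
        return sum(ratings)

    # Hypothetical ratings, for illustration only.
    example = [2, 2, 1, 2, 0, 1, 2, 2, 1, 2, 1, 2, 0, 2, 1, 2, 2, 1, 0, 2]
    total = pcl_r_total(example)
    print(total, {region: total >= cutoff for region, cutoff in CUTOFFS.items()})
    # 28 -> above the UK cut-off of 25 but below the US cut-off of 30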
Paranoia The importance of malignant narcissism and of projection as a defense mechanism has been confirmed in paranoia, as well as "the patient's vulnerability to malignant narcissistic regression". Because a malignant narcissist's personality cannot tolerate any criticism, being mocked typically causes paranoia. Therapy Typically, in the analysis of the malignant narcissist, "the patient attempts to triumph over the analyst by destroying the analysis and himself or herself"; an extreme version of what Jacques Lacan described as "that resistance of the amour-propre... which is often expressed thus: 'I can't bear the thought of being freed by anyone other than myself'". Since malignant narcissism is a severe personality disorder that has far-reaching societal and familial effects, it requires attention from both the psychiatric community and the social science community. Treatment is recommended in a therapeutic community, as well as a psychoeducational preventative program aimed at both mental health professionals and the general public. See also Narcissistic perversion Borderline personality disorder Narcissistic leadership Narcissistic supply The Mask of Sanity References External links Narcissism and co-morbidity with other disorders Problem behavior Narcissism Anti-social behaviour
Malignant narcissism
[ "Biology" ]
1,776
[ "Behavior", "Anti-social behaviour", "Problem behavior", "Narcissism", "Human behavior" ]
5,701,074
https://en.wikipedia.org/wiki/Harmony%20Compiler
Harmony Compiler was written by Peter Samson at the Massachusetts Institute of Technology (MIT). The compiler was designed to encode music for the PDP-1 and built on an earlier program Samson wrote for the TX-0 computer. Jack Dennis had noticed, and mentioned to Samson, that simply switching the TX-0's speaker on and off could be enough to play music. They succeeded in building a WYSIWYG program for one voice by 1960. For the PDP-1, which arrived at MIT in September 1961, Samson designed the Harmony Compiler, which synthesizes four voices from input in a text-based notation. Although it created music in many genres, it was optimized for baroque music. PDP-1 music is merged from four channels and played back in stereo. Notes are on pitch and each has an undertone. The music does not stop for errors. Mistakes are greeted with a message from the typewriter's red ribbon, "To err is human, to forgive divine." Samson joined the PDP-1 restoration project at the Computer History Museum in 2004 to recreate the music player. References Samson's description begins at 1:20. Notes Audio programming languages History of software
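The underlying trick — music produced by nothing more than a speaker bit being switched on and off — is easy to demonstrate on modern hardware. The snippet below is a loose illustrative reconstruction of the idea, not Samson's program or notation: it toggles a one-bit "speaker" at each note's frequency and writes the result to a WAV file.

    import struct
    import wave

    RATE = 44100  # samples per second

    def one_bit_voice(freq_hz, seconds):
        # Square wave made by "switching the speaker on and off" at the note's frequency.
        samples = []
        period = RATE / freq_hz
        for n in range(int(RATE * seconds)):
            speaker_on = (n % period) < (period / 2)         # the one-bit state
            samples.append(20000 if speaker_on else -20000)  # mapped to a 16-bit sample
        return samples

    # A made-up three-note fragment: (frequency in Hz, duration in seconds).
    melody = [(440.0, 0.4), (494.0, 0.4), (523.3, 0.8)]

    with wave.open("one_bit_voice.wav", "w") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(RATE)
        for freq, dur in melody:
            samples = one_bit_voice(freq, dur)
            out.writeframes(struct.pack(f"<{len(samples)}h", *samples))

Playing several such voices at once, as the Harmony Compiler did for four, is then a matter of mixing the toggled waveforms together.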
Harmony Compiler
[ "Technology" ]
251
[ "History of software", "History of computing" ]
5,701,662
https://en.wikipedia.org/wiki/318%20%28number%29
318 is the natural number following 317 and preceding 319. In mathematics 318 is: a sphenic number a nontotient the number of posets with 6 unlabeled elements the sum of 12 consecutive primes, 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47. In religion In Genesis 14, Abraham takes 318 men to rescue his brother Lot. References Integers
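The arithmetic claims above are easy to verify mechanically. The following throwaway check (not part of the article's sources) confirms the sphenic factorisation 2 × 3 × 53 and the twelve-consecutive-prime sum.

    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    # Sphenic: the product of three distinct primes.
    assert 2 * 3 * 53 == 318 and all(is_prime(p) for p in (2, 3, 53))

    # Sum of the 12 consecutive primes from 7 to 47.
    primes = [p for p in range(7, 48) if is_prime(p)]
    assert len(primes) == 12 and sum(primes) == 318
    print("checks pass:", primes)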
318 (number)
[ "Mathematics" ]
90
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
5,702,698
https://en.wikipedia.org/wiki/Protein%E2%80%93ligand%20docking
Protein–ligand docking is a molecular modelling technique. The goal of protein–ligand docking is to predict the position and orientation of a ligand (a small molecule) when it is bound to a protein receptor or enzyme. Pharmaceutical research employs docking techniques for a variety of purposes, most notably in the virtual screening of large databases of available chemicals in order to select likely drug candidates. There has been rapid development in computational ability to determine protein structure with programs such as AlphaFold, and the demand for the corresponding protein-ligand docking predictions is driving implementation of software that can find accurate models. Once protein folding can be predicted accurately, along with how ligands of various structures will bind to the protein, drug development can progress at a much faster rate. History Computer-aided drug design (CADD) was introduced in the 1980s in order to screen for novel drugs. The underlying premise is that by parsing an extremely large data set for chemical compounds which may be viable to make a certain pharmaceutical, researchers could narrow the pool of novel candidate compounds without testing them all experimentally. The ability to accurately predict target binding sites is a more recent development, however, which expands on the ability to simply parse a data set of chemical compounds; now, due to increasing computational capability, it is possible to inspect the actual geometries of the protein–ligand binding site in silico. Hardware advancements in computation have made these structure-oriented methods of drug discovery the next frontier in 21st-century biopharma. To train the new algorithms to capture the geometry of protein–ligand binding accurately, experimentally determined structures obtained by techniques such as X-ray crystallography or NMR spectroscopy can be used. Available software Several protein–ligand docking software applications that calculate the site, geometry and energy of small molecules or peptides interacting with proteins are available, such as AutoDock and AutoDock Vina, rDock, FlexAID, Molecular Operating Environment, and Glide. Peptides are a highly flexible type of ligand that has proven to be a difficult type of structure for docking programs to predict. DockThor implements up to 40 rotatable bonds to help model these complex physicochemical bindings at the target site. Root-mean-square deviation (RMSD) is the standard measure for evaluating how well software reproduces the binding mode of a protein–ligand structure. Specifically, it is the root-mean-square deviation between the software-predicted docking pose of the ligand and the experimental binding mode. The RMSD measurement is computed for all of the computer-generated poses of the possible bindings between the protein and ligand. A program does not always rank the pose closest to the experimental one highest, so to evaluate the strength of a docking algorithm, the ranking of RMSD among the computer-generated candidates must be examined to determine whether the experimental pose was actually generated but not selected. Protein flexibility Computational capacity has increased dramatically over the last two decades, making possible the use of more sophisticated and computationally intensive methods in computer-assisted drug design. 
However, dealing with receptor flexibility in docking methodologies is still a thorny issue. The main reason behind this difficulty is the large number of degrees of freedom that have to be considered in this kind of calculation. However, in most cases, neglecting it leads to poor docking results in terms of binding pose prediction in real-world settings. Using coarse-grained protein models to overcome this problem seems to be a promising approach. Coarse-grained models are often implemented in the case of protein–peptide docking, as they frequently involve large-scale conformation transitions of the protein receptor. AutoDock is one of the computational tools frequently used to model the interactions between proteins and ligands during the drug discovery process. Although the classically used algorithms to search for effective poses often assume the receptor proteins to be rigid while the ligand is moderately flexible, newer approaches are implementing models with limited receptor flexibility as well. AutoDockFR is a newer model that is able to simulate this partial flexibility within the receptor protein by letting side-chains of the protein take various poses within their conformational space. This allows the algorithm to explore a vastly larger space of energetically relevant poses for each ligand tested. In order to simplify the complexity of the search space for prediction algorithms, various hypotheses have been tested. One such hypothesis is that side-chain conformational changes that involve more atoms and rotations of greater magnitude are actually less likely to occur than smaller rotations, due to the energy barriers that arise. The steric hindrance and rotational energy cost introduced by these larger changes make it less likely that they are part of the actual protein–ligand pose. Findings such as these can make it easier for scientists to develop heuristics that lower the complexity of the search space and improve the algorithms. Implementations The original method of testing the molecular models of various binding sites was introduced in the 1980s, where the receptor was estimated in a rough manner by spheres which occupied the surface clefts. The ligand was approximated by more spheres which would occupy the relevant volume. A search was then executed to maximize the steric overlap between the ligand spheres and the receptor spheres. Newer scoring functions for evaluating molecular dynamics and protein–ligand docking potential implement a supervised molecular dynamics approach. Essentially, the simulations are sequences of small time windows in which the distance between the centers of mass of the ligand and protein is computed. The distance values are recorded at regular intervals and then fitted by linear regression. When the slope is negative, the ligand is getting nearer to the binding site, and vice versa. When the ligand is departing from the binding site, the tree of possibilities is pruned at that moment so as to avoid unnecessary computation. The advantage of this method is speed, without the introduction of any energetic bias that could prevent the model from mapping accurately onto experimental results. See also Docking (molecular) Protein–protein docking Virtual screening List of protein-ligand docking software References External links BioLiP, a comprehensive ligand-protein interaction database DockThor Molecular modelling Computational chemistry Cheminformatics
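Two of the bookkeeping steps described in this article — the RMSD comparison used to evaluate generated poses, and the windowed distance-slope test used in the supervised molecular dynamics approach — are simple enough to sketch directly. The fragment below is illustrative only, not the implementation of any particular docking package; the function names and toy data are invented for the example.

    import numpy as np

    def rmsd(pose_a, pose_b):
        # RMSD between two (N, 3) arrays of matched ligand atom coordinates in the fixed receptor frame.
        diff = np.asarray(pose_a, float) - np.asarray(pose_b, float)
        return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

    def keep_window(times, com_distances):
        # Supervised-MD style check: fit the ligand-protein centre-of-mass distance linearly over one
        # time window and keep the trajectory only if the slope is negative (the ligand is approaching).
        slope, _intercept = np.polyfit(times, com_distances, 1)
        return slope < 0.0

    # Rank two generated poses against an experimental binding mode (coordinates in angstroms).
    experimental = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
    candidates = {
        "pose_1": experimental + 0.2,                         # uniformly shifted
        "pose_2": experimental + np.array([2.0, 0.0, 0.0]),   # translated 2 angstroms along x
    }
    for name in sorted(candidates, key=lambda k: rmsd(candidates[k], experimental)):
        print(name, round(rmsd(candidates[name], experimental), 2))

    # One 100 ps window sampled every 10 ps, centre-of-mass distance in nm.
    t = np.arange(0.0, 100.0, 10.0)
    print(keep_window(t, 2.0 - 0.005 * t))   # True: keep simulating this branch
    print(keep_window(t, 1.5 + 0.004 * t))   # False: prune it and restart from an earlier frame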
Protein–ligand docking
[ "Chemistry" ]
1,248
[ "Molecular physics", "Theoretical chemistry", "Molecular modelling", "Computational chemistry", "nan", "Cheminformatics" ]
5,702,901
https://en.wikipedia.org/wiki/AIDGAP%20series
AIDGAP is an acronym for Aid to Identification in Difficult Groups of Animals and Plants. The AIDGAP series is a set of books published by the Field Studies Council. They are intended to enable students and interested non-specialists to identify groups of taxa in Britain which are not covered by standard field guides. In general, they are less demanding in level than the Synopses of the British Fauna. All AIDGAP guides are initially produced as test versions, which are circulated widely to students, teaching staff and environmental professionals, with the feedback incorporated into the final published versions. In many cases the AIDGAP volume is the only non-technical work covering the group of taxa in question. History of the series The Field Studies Council recognised the widespread need for identification guides soon after its inception, and has since established a long tradition of publishing such material. Many of these were written by teaching staff writing their own keys to fill obvious gaps in the available literature. However, it became increasingly apparent that a change in approach was needed. Too few guides were available which were usable by those with little previous experience. Many groups of plants and animals appeared to be neglected. The FSC initiated the AIDGAP project in 1976, with input from an advisory panel which included a range of organisations such as the Linnean Society, teachers in secondary education and professional illustrators. The two main objectives adopted by the panel were first to identify those groups of organisms regarded as 'difficult' due to a lack of a suitable key, and second to investigate ways of alleviating the difficulties of identification for each group. The panel also decided to incorporate a 'testing' stage during which the identification guides could be revised and improved. In practice today, AIDGAP guides are produced as 'test versions', which are then circulated to at least 100 volunteer 'testers', drawn from a wide range of backgrounds. Titles in the AIDGAP series A large number of guides have been published in the last thirty years. 
A list of the works in the series is as follows (see link below for a list of the guides still in print): AIDGAP guides (books) Price & Bersweden (2013) Winter Trees: a photographic guide to common trees and shrubs Hubble (2012) Seed beetles Sherlock (2012) Earthworms Redfern & Shirley (2011) British plant galls (2nd edition) Macadam & Bennett (2010) A pictorial guide to British Ephemeroptera Barber (2009) Key to the identification of British centipedes Barnard & Ross (2008) Guide to the adult caddisflies or sedge flies (Trichoptera) Cameron & Riley (2008) Land Snails in the British Isles (2nd edition) Pryce, Macadam & Brooks (2007) Guide to the British Stonefly (Plecoptera) families: adults and larvae Merryweather & Hill (2007) The fern guide (3rd edition) Hopkin (2007) A key to the Springtails (Collembola) of Britain and Ireland Wallace (2006) Simple key to caddis larvae Hayward (2005) A new key to wild flowers Killeen, Aldridge & Oliver (2004) Freshwater Bivalves of Britain and Ireland Redfern, Shirley & Bloxham (2002) British Plant Galls: identification of galls on plants and fungi Unwin (2001) A key to families of British bugs (Insecta, Hemiptera) Kelly (2000) Identification of common benthic diatoms in rivers May & Panter (2000) A guide to the identification of deciduous broad-leaved trees and shrubs in winter Plant (1997) A key to the adults of British lacewings and their allies (Neuroptera, Megaloptera, Raphidiptera and Mecoptera) Wheeler (1997) A field key to the freshwater fishes and lampreys of the British Isles Crothers (1997) A key to the major groups of British marine invertebrates Wheeler (1994) Field Key to the Shore Fishes of the British Isles Hopkin (1991) A key to the woodlice of Britain and Ireland Vas (1991) A field guide to the Sharks of British Coastal Waters (freely downloadable pdf from ) Skidmore (1991) Insects of the British cow-dung community Wright (1990) British Sawflies (Hymenoptera: Symphyta): a key to adults of the genera occurring in Britain Savage (1990) A key to the adults of British Lesser Water Boatmen (Corixidae) (freely downloadable pdf from Jones-Walters (1989) Keys to families of British spiders Trudgil (1989) Soil types: a field identification guide Friday (1988) A key to the adults of British Water Beetles (freely downloadable pdf from ) Haslam et al. (1987) British water plants (revised edition) Tilling (1987) A key to the major groups of terrestrial invertebrates Hiscock (1986) A field guide to the British Red Seaweeds (Rhodophyta) King (1986) Sea Spiders. A revised key to the adults of littoral Pycnogonida in the British Isles Croft (1986) A key to the major groups of British freshwater invertebrates Willmer (1985) Bees, Ants and Wasps - the British Aculeates Unwin (1984) A key to the families of British Coleoptera (beetles) and Strepsiptera Crothers & Crothers (1983) A key to the crabs and crab-like animals of British inshore waters (revised edition, 1988) Unwin (1981) Key to the families of British Diptera (freely downloadable pdf from ) Sykes (1981) An illustrated guide to the diatoms of British coastal plankton (freely downloadable pdf from ) Hiscock (1979) A field key to the British Brown Seaweeds Identification guides (fold-out laminated charts) Shaw & Dallimore (2013) Springtails families AIDGAP Tested Field Guides (Polyclave punched-card keys) R.J. Pankhurst & J.M. 
Allinson (1985) BRITISH GRASSES a punched-card key to grasses in the vegetative state References External links Full list of AIDGAP guides in print Wild animals identification Biological literature Taxonomy (biology)
AIDGAP series
[ "Biology" ]
1,282
[ "Taxonomy (biology)", "Taxonomy (biology) books" ]
5,703,338
https://en.wikipedia.org/wiki/Delta%20robot
A delta robot is a type of parallel robot that consists of three arms connected to universal joints at the base. The key design feature is the use of parallelograms in the arms, which maintains the orientation of the end effector. In contrast, a Stewart platform can change the orientation of its end effector. Delta robots have popular usage in picking and packaging in factories because they can be quite fast, some executing up to 300 picks per minute. History The delta robot (a parallel arm robot) was invented in the early 1980s by a research team led by professor Reymond Clavel at the École Polytechnique Fédérale de Lausanne (EPFL, Switzerland). After a visit to a chocolate maker, a team member wanted to develop a robot to place pralines in their packages. The purpose of this new type of robot was to manipulate light and small objects at a very high speed, an industrial need at that time. In 1987, the Swiss company Demaurex purchased a license for the delta robot and started the production of delta robots for the packaging industry. In 1991, Reymond Clavel presented his doctoral thesis 'Conception d'un robot parallèle rapide à 4 degrés de liberté', and received the golden robot award in 1999 for his work and development of the delta robot. Also in 1999, ABB Flexible Automation started selling its delta robot, the FlexPicker. By the end of 1999, delta robots were also sold by Sigpack Systems. In 2017, researchers from Harvard's Microrobotics Lab miniaturized it with piezoelectric actuators to 0.43 grams for 15 mm x 15 mm x 20 mm, capable of moving a 1.3 g payload around a 7 cubic millimeter workspace with a 5 micrometers precision, reaching 0.45 m/s speeds with 215 m/s² accelerations and repeating patterns at 75 Hz. Design The delta robot is a parallel robot, i.e. it consists of multiple kinematic chains connecting the base with the end-effector. The robot can also be seen as a spatial generalisation of a four-bar linkage. The key concept of the delta robot is the use of parallelograms which restrict the movement of the end platform to pure translation, i.e. only movement in the X, Y or Z direction with no rotation. The robot's base is mounted above the workspace and all the actuators are located on it. From the base, three middle jointed arms extend. The ends of these arms are connected to a small triangular platform. Actuation of the input links will move the triangular platform along the X, Y or Z direction. Actuation can be done with linear or rotational actuators, with or without reductions (direct drive). Since the actuators are all located in the base, the arms can be made of a light composite material. As a result of this, the moving parts of the delta robot have a small inertia. This allows for very high speed and high accelerations. Having all the arms connected together to the end-effector increases the robot stiffness, but reduces its working volume. The version developed by Reymond Clavel has four degrees of freedom: three translations and one rotation. In this case a fourth leg extends from the base to the middle of the triangular platform giving to the end effector a fourth, rotational degree of freedom around the vertical axis. 
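Because the parallelogram forearms keep the platform orientation fixed, each actuated upper arm can be treated on its own: its joint angle must place the elbow at exactly one forearm length from the corresponding platform anchor. The sketch below solves that constraint numerically for all three arms of an idealised, radially symmetric delta. The dimensions and target point are made up for illustration; this is a geometric toy, not the control code of any real robot.

    import math

    # Illustrative dimensions in metres; real machines differ.
    R_BASE = 0.20    # base centre to each shoulder joint
    R_PLAT = 0.05    # platform centre to each forearm anchor
    L_UPPER = 0.30   # actuated upper-arm length
    L_FORE = 0.50    # forearm (parallelogram) length

    ARM_DIRECTIONS = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]   # arms 120 degrees apart

    def elbow(phi, theta):
        # Elbow position for one arm: the upper arm swings in the vertical plane through its
        # radial axis; theta is measured from horizontal, positive downward.
        r = R_BASE + L_UPPER * math.cos(theta)
        return (r * math.cos(phi), r * math.sin(phi), -L_UPPER * math.sin(theta))

    def residual(phi, theta, platform):
        # Signed error between the elbow-to-anchor distance and the forearm length. Because the
        # platform only translates, its anchor keeps the same radial offset as the arm.
        anchor = (platform[0] + R_PLAT * math.cos(phi),
                  platform[1] + R_PLAT * math.sin(phi),
                  platform[2])
        return math.dist(elbow(phi, theta), anchor) - L_FORE

    def solve_arm(phi, platform, lo=-0.5, hi=2.0, steps=200):
        # Scan theta for a sign change of the residual, then bisect down to the joint angle.
        prev_t, prev_r = lo, residual(phi, lo, platform)
        for i in range(1, steps + 1):
            t = lo + (hi - lo) * i / steps
            r = residual(phi, t, platform)
            if prev_r == 0.0 or prev_r * r < 0.0:
                a, b = prev_t, t
                for _ in range(60):
                    m = 0.5 * (a + b)
                    if residual(phi, a, platform) * residual(phi, m, platform) <= 0.0:
                        b = m
                    else:
                        a = m
                return 0.5 * (a + b)
            prev_t, prev_r = t, r
        raise ValueError("target point is outside this sketch's reachable workspace")

    target = (0.05, 0.02, -0.40)   # desired platform-centre position below the base
    print([round(math.degrees(solve_arm(phi, target)), 1) for phi in ARM_DIRECTIONS])

Because each arm is solved independently, the same routine also shows when a target leaves the workspace: one of the three scans simply fails to bracket a solution.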
Other versions of the delta robot have since been developed: Delta with 6 degrees of freedom: developed by the Fanuc company; in this robot a serial kinematic chain with 3 rotational degrees of freedom is placed on the end effector Delta with 4 degrees of freedom: developed by the Adept company; this robot has 4 parallelograms directly connected to the end platform instead of a fourth leg running to the middle of the end effector Pocket Delta: developed by the Swiss company Asyril SA, a 3-axis version of the delta robot adapted for flexible part feeding systems and other high-speed, high-precision applications. Delta direct drive: a delta robot with 3 degrees of freedom in which the motors are directly connected to the arms. Accelerations can be very high, from 30 up to 100 g. Delta cube: developed by the EPFL university laboratory LSRO, a delta robot built in a monolithic design with flexure-hinge joints. This robot is adapted for ultra-high-precision applications. Several "linear delta" arrangements have been developed where the motors drive linear actuators rather than rotating an arm. Such linear delta arrangements can have much larger working volumes than rotational delta arrangements. The majority of delta robots use rotary actuators. Vertical linear actuators have recently been used (using a linear delta design) to produce a novel design of 3D printer. These offer advantages over conventional leadscrew-based 3D printers of quicker access to a larger build volume for a comparable investment in hardware. Applications Industries that take advantage of the high speed of delta robots include the food, pharmaceutical and electronics industries. Because of its stiffness, the design is also used in surgery; in particular, the Surgiscope is a delta robot used as a microscope holder system. The structure of a delta robot can also be used to create haptic controllers. More recently, the technology has been adapted to 3D printers. References Robot kinematics Industrial robots Parallel robots
Delta robot
[ "Engineering" ]
1,105
[ "Industrial robots", "Robotics engineering", "Robot kinematics" ]
5,703,374
https://en.wikipedia.org/wiki/Rasagiline
Rasagiline, sold under the brand name Azilect among others, is a medication which is used in the treatment of Parkinson's disease. It is used as a monotherapy to treat symptoms in early Parkinson's disease or as an adjunct therapy in more advanced cases. The drug is taken by mouth. Side effects of rasagiline include insomnia and orthostatic hypotension, among others. Rasagiline acts as an inhibitor of the enzyme monoamine oxidase (MAO) and hence is a monoamine oxidase inhibitor (MAOI). More specifically, it is a selective inhibitor of monoamine oxidase B (MAO-B). The drug is thought to work by increasing levels of the monoamine neurotransmitter dopamine in the brain. Rasagiline shows pharmacological differences from the related drug selegiline, including having no amphetamine-like metabolites, monoamine-releasing activity, or monoaminergic activity enhancer actions, which may result in clinical differences between the medications. Rasagiline was approved for medical use in the European Union in 2005 and in the United States in 2006. Generic versions of rasagiline are available. Medical uses Parkinson's disease Rasagiline is used to treat symptoms of Parkinson's disease both alone and in combination with other drugs. It has shown efficacy in both early and advanced Parkinson's, and appears to be especially useful in dealing with non-motor symptoms like fatigue. Teva conducted clinical trials attempting to prove that rasagiline did not just treat symptoms, but was a disease-modifying drug—that it actually prevented the death of the dopaminergic neurons that characterize Parkinson's disease and slowed disease progression. They conducted two clinical trials, called TEMPO and ADAGIO, to try to prove this. The United States Food and Drug Administration (FDA) advisory committee rejected their claim in 2011, saying that the clinical trial results did not prove that rasagiline was neuroprotective. The main reason was that in one of the trials, the lower dose (1 mg) was effective at slowing progression, but the higher dose (2 mg) was not, contradicting standard dose-response pharmacology. MAO-B inhibitors like rasagiline may improve certain non-motor symptoms in Parkinson's disease. These may include depression, sleep disturbances, and pain (particularly related to motor fluctuations), but are unlikely to include cognitive or olfactory dysfunctions. The effects of MAO-B inhibitors like rasagiline on fatigue, autonomic dysfunctions, apathy, and impulse control disorders in people with Parkinson's disease remain unknown. Rasagiline has been reported to significantly improve quality of life in people with Parkinson's disease, but the effect sizes were trivial to small and may not be clinically meaningful. It showed a large effect size relative to placebo for depression in people with Parkinson's disease. In other studies, rasagiline appeared to reduce fatigue in people with Parkinson's disease. However, its effect sizes for this effect in a large trial were described as trivial. Available forms Rasagiline is available in the form of 0.5 and 1mg oral tablets. Contraindications Rasagiline has not been tested in pregnant women. Side effects The FDA label contains warnings that rasagiline may cause severe hypertension or hypotension, may make people sleepy, may make motor control worse in some people, may cause hallucinations and psychotic-like behavior, may cause impulse control disorders, may increase the risk of melanoma, and upon withdrawal, may cause high fever or confusion. 
Side effects when the drug is taken alone include flu-like symptoms, joint pain, depression, stomach upset, headache, dizziness, and insomnia. When taken with levodopa, side effects include increased movement problems, accidental injury, sudden drops in blood pressure, joint pain and swelling, dry mouth, rash, abnormal dreams and digestive problems including vomiting, loss of appetite, weight loss, abdominal pain, nausea, and constipation. When taken with Parkinson's drugs other than levodopa, side effects include peripheral edema, fall, joint pain, cough, and insomnia. In a 2013 meta-analysis, none of the most frequently reported side effects of rasagiline occurred significantly more often than with placebo. It was concluded that rasagiline is well-tolerated. Rasagiline has been found to produce orthostatic hypotension as a side effect. Rates of orthostatic hypotension in a selection of different clinical trials have been 1.2- to 5-fold higher than those of placebo, ranging from 3.1 to 44% with rasagiline and 0.6 to 33% with placebo. Orthostatic hypotension tends to be worst in the first 2months of treatment and then tends to decrease with time. Rasagiline can also cause hypotension while supine and unrelated to standing. In a clinical trial, the rate of hypotension was 3.2% with rasagiline versus 1.3% with placebo. Rarely, rasagiline has been reported to induce impulse control disorders, obsessive–compulsive symptoms, hypersexuality, and spontaneous orgasm or ejaculation. Other rare adverse effects associated with rasagiline include pleurothotonus (Pisa syndrome), livedo reticularis, tendon rupture, and hypoglycemia. Serotonin syndrome has been reported rarely with rasagiline both alone and in combination with selective serotonin reuptake inhibitors (SSRIs) like escitalopram, paroxetine, and sertraline and other MAOIs like linezolid. A withdrawal syndrome associated with rasagiline has been reported. Overdose Rasagiline has been studied at single doses of up to 20mg and at repeated doses of up to 10mg/day and was well-tolerated at these doses. However, in a dose-escalation study with concomitant levodopa therapy, a dosage of 10mg/day rasagiline was associated with cardiovascular side effects including hypertension and orthostatic hypotension in some people. The symptoms of rasagiline overdose may be similar to the case of non-selective MAOIs. Onset of symptoms may be delayed by 12hours and may not peak for 24hours. A variety of different symptoms may occur and the central nervous system and cardiovascular system are prominently involved. Death may result and immediate hospitalization is warranted. Serotonin syndrome has occurred with rasagiline overdose and body temperature should be monitored closely. There is no specific antidote for overdose and treatment is supportive and based on symptoms. Interactions Rasagiline is contraindicated with known serotonergic agents like selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants (TCAs), tetracyclic antidepressants (TeCAs), triazolopyridines or serotonin antagonists and reuptake inhibitors (SARIs) like trazodone, and other monoamine oxidase inhibitors (MAOIs), as well as meperidine (pethidine), tramadol, methadone, propoxyphene, dextromethorphan, St. John's wort, and cyclobenzaprine, due to potential risk of serotonin syndrome. 
However, the risk appears to be low, based on a large study of 1,504 people which looked for serotonin syndrome in people with Parkinson's disease who were treated with rasagiline plus antidepressants, rasagiline without antidepressants, or antidepressants plus Parkinson's drugs other than either rasagiline or selegiline, and in which no cases were identified. There is a risk of psychosis or bizarre behavior if rasagiline is used with dextromethorphan. There is a risk of non-selective MAO inhibition and hypertensive crisis if rasagiline is used with other MAOIs. Rasagiline may have a risk of hypertensive crisis in combination with sympathomimetic agents such as amphetamines, ephedrine, epinephrine, isometheptene, and pseudoephedrine. However, based on widespread clinical experience with the related selective MAO-B inhibitor selegiline, occasional use of over-the-counter sympathomimetics like pseudoephedrine appears to pose minimal risk of hypertensive crisis. In any case, the combination of sympathomimetics with MAO-B inhibitors like rasagiline and selegiline should be undertaken with caution. Pharmacology Pharmacodynamics Monoamine oxidase inhibitor Parkinson's disease is characterized by the death of cells that produce dopamine, a neurotransmitter. An enzyme called monoamine oxidase (MAO) breaks down neurotransmitters. MAO has two forms, MAO-A and MAO-B. MAO-B is involved in the metabolism of dopamine. Rasagiline prevents the breakdown of dopamine by irreversibly binding to MAO-B. Dopamine is therefore more available, somewhat compensating for the diminished quantities made in the brains of people with Parkinson's disease. Rasagiline acts as a selective and potent irreversible inhibitor of the monoamine oxidase (MAO) enzymes monoamine oxidase B (MAO-B) and monoamine oxidase A (MAO-A). It is selective for inhibition of MAO-B over MAO-A, but can also inhibit MAO-A at high doses or concentrations. MAO-B is involved in the metabolism of the monoamine neurotransmitter dopamine in the body and brain. By inhibiting MAO-B, rasagiline is thought to increase dopamine levels. In the case of Parkinson's disease, increased dopamine levels in the striatum are thought to be responsible for rasagiline's therapeutic effectiveness in treating the condition. Rasagiline inhibits platelet MAO-B activity with single doses by 35% one hour after 1 mg, 55% after 2 mg, 79% after 5 mg, and 99% after 10 mg in healthy young people. With all dose levels, maximum inhibition is maintained for at least 48 hours after the dose. With repeated doses, rasagiline reaches greater than 99% platelet MAO-B inhibition after 6 days of 2 mg/day, 3 days of 5 mg/day, and 2 days of 10 mg/day. Similarly, repeated administration of 0.5, 1, and 2 mg/day rasagiline resulted in complete MAO-B inhibition. Clinically relevant inhibition of MAO-B is thought to require 80% inhibition and above. Following the last dose, platelet MAO-B levels remain significantly inhibited for 7 days and return to baseline after 2 weeks. In people with Parkinson's disease, rasagiline at a dose of 1 mg/day achieved near-complete inhibition of platelet MAO-B after 3 days of dosing. The recommended dosing schedule of rasagiline in Parkinson's disease (1 mg/day) has been described as somewhat questionable and potentially excessive from a pharmacological standpoint. The half-time for recovery of brain MAO-B following discontinuation of an MAO-B inhibitor (specifically selegiline) has been found to be approximately 40 days. 
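The roughly 40-day recovery half-time quoted above can be turned into a rough estimate of how slowly MAO-B activity returns once an irreversible inhibitor is stopped. The short Python sketch below is illustrative only: the first-order enzyme-resynthesis model and the 40-day half-time are assumptions taken from the preceding sentence, not a description of rasagiline's product labeling.

import math

# Assumption: after an irreversible MAO-B inhibitor is stopped, enzyme
# activity recovers by de novo synthesis with first-order kinetics,
# activity(t) = 1 - exp(-k * t), using a recovery half-time of ~40 days.
HALF_TIME_DAYS = 40.0
k = math.log(2) / HALF_TIME_DAYS

for day in (7, 14, 42, 90):
    recovered = 1 - math.exp(-k * day)
    print("day %3d: ~%2.0f%% of MAO-B activity recovered" % (day, recovered * 100))

Under these assumptions, only about half of brain MAO-B activity has returned six weeks (42 days) after stopping, which is consistent with the multi-week persistence of inhibition and of clinical effect described in the surrounding text.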
Similarly, recovery of brain MAO-B following rasagiline discontinuation was gradual and occurred over 6weeks. The clinical effectiveness of rasagiline in Parkinson's disease has been found to persist during a 6-week washout phase with discontinuation of the medication. Rasagiline is about 30 to 100times more potent in inhibiting MAO-B than MAO-A in vitro and is about 17 to 65times more potent in inhibiting MAO-B over MAO-A in vivo in rodents. Rasagiline does not importantly potentiate the pressor effects of tyramine challenge in humans, indicating that it is selective for MAO-B inhibition and does not meaningfully inhibit MAO-A. It is expected that at sufficiently high doses rasagiline would eventually become non-selective and additionally inhibit MAO-A in humans. However, it is unknown what dose threshold would be required for this to occur. Rasagiline is the R(+)-enantiomer of AGN-1135, a racemic mixture of rasagiline (TVP-1012) and the S(–)-enantiomer (TVP-1022). Virtually all of the MAO-inhibiting activity of AGN-1135 lies in the R(+)-enantiomer rasagiline, with this enantiomer having 1,000-fold greater inhibitory potency of MAO-B than the S(–)-enantiomer. In addition, the S(–)-enantiomer is poorly selective for MAO-B over MAO-A. As a result, the purified R(+)-enantiomer rasagiline was the form of the compound advanced for clinical development. Selegiline was the first selective MAO-B inhibitor. Selegiline and rasagiline have similar selectivity for inhibition of MAO-B over MAO-A. However, rasagiline is 5- to 10-fold more potent than selegiline at inhibiting MAO-B, which results in the former being used at lower doses clinically than the latter (1mg/day versus 5–10mg/day, respectively). In addition, selegiline is metabolized into levomethamphetamine and levoamphetamine. These metabolites induce the release of norepinephrine and dopamine, have sympathomimetic and psychostimulant effects, and may contribute to the effects and side effects of selegiline. In contrast to selegiline, rasagiline does not convert into metabolites with amphetamine-like effects. The amphetamine metabolites of selegiline may contribute to significant clinical differences between selegiline and rasagiline. Rasagiline metabolizes into (R)-1-aminoindan which has no amphetamine-like effects and shows neuroprotective properties in cells and in animal models. Selective MAO-B inhibitors including rasagiline and selegiline have been found to increase dopamine levels in the striatum in rats in vivo. It has been theorized that this might be due to strong inhibition of the metabolism of β-phenylethylamine, which is an endogenous MAO-B substrate that has monoaminergic activity enhancer and norepinephrine–dopamine releasing agent actions. β-Phenylethylamine has been described as "endogenous amphetamine" and its brain levels are dramatically increased (10- to 30-fold) by MAO-B inhibitors like selegiline. Elevation of β-phenylethylamine may be involved in the effects of MAO-B inhibitors in the treatment of Parkinson's disease. In 2021, it was discovered that MAO-A is solely or almost entirely responsible for striatal dopamine catabolism in the rodent brain and that MAO-B is not importantly involved. In contrast, MAO-B appears to mediate tonic γ-aminobutyric acid (GABA) synthesis from putrescine in the striatum, a minor and alternative metabolic pathway of GABA synthesis, and this synthesized GABA in turn inhibits dopaminergic neurons in this brain area. 
MAO-B specifically mediates the transformations of putrescine into γ-aminobutyraldehyde (GABAL or GABA aldehyde) and N-acetylputrescine into N-acetyl-γ-aminobutyraldehyde (N-acetyl-GABAL or N-acetyl-GABA aldehyde), metabolic products that can then be converted into GABA via aldehyde dehydrogenase (ALDH) (and an unknown deacetylase enzyme in the case of N-acetyl-GABAL). These findings may warrant a rethinking of the pharmacological actions of MAO-B inhibitors like selegiline and rasagiline in the treatment of Parkinson's disease. Other actions Rasagiline is selective for inhibition of MAOs over interactions with other proteins, including α-adrenergic receptors, β-adrenergic receptors, muscarinic acetylcholine receptors, and other targets. The major metabolite of rasagiline, (R)-1-aminoindan, is either devoid of MAO inhibition or shows only weak inhibition of MAO-B. It also has no amphetamine-like activity. However, 1-aminoindan is not lacking in pharmacological activity. Like rasagiline, 1-aminoindan shows neuroprotective activity in some experimental models. In addition, 1-aminoindan has been found to enhance striatal dopaminergic neurotransmission and improve motor function independent of MAO inhibition in animal models of Parkinson's disease. 2-Aminoindan, a closely related positional isomer of 1-aminoindan, is known to inhibit the reuptake and induce the release of dopamine and norepinephrine and to produce psychostimulant-like effects in rodents, albeit with lower potency than amphetamine, but rasagiline does not metabolize into this compound. 1-Aminoindan has been found to inhibit the reuptake of norepinephrine 28-fold less potently than 2-aminoindan and to inhibit the reuptake of dopamine 300-fold less potently than 2-aminoindan, with values for dopamine reuptake inhibition in one study of 0.4μM for amphetamine, 3.3μM for 2-aminoindan, and 1mM for 1-aminoindan. In contrast to 2-aminoindan, which increased locomotor activity in rodents (+49%), 1-aminoindan suppressed locomotor activity (–69%). On the other hand, 1-aminoindan has been found to enhance the psychostimulant-like effects of amphetamine in rodents. Whereas selegiline is a catecholaminergic activity enhancer, which may be mediated by agonism of the TAAR1, rasagiline does not possess this action. Instead, rasagiline actually antagonizes selegiline's effects as a catecholaminergic activity enhancer, which may be mediated by TAAR1 antagonism. Rasagiline has been reported to directly bind to and inhibit glyceraldehyde-3-phosphate dehydrogenase (GAPDH). This might play a modulating role in its clinical effectiveness for Parkinson's disease. Selegiline also binds to and inhibits GAPDH. Rasagiline has been found to bind reversibly to α-synuclein, a major protein involved in the pathophysiology of Parkinson's disease, and this action might be neuroprotective. Pharmacokinetics Absorption Rasagiline is rapidly absorbed from the gastrointestinal tract with oral administration and has approximately 36% absolute bioavailability. The peak and area-under-the-curve levels of rasagiline are linear and dose-proportional over a dose range of 0.5 to 10mg. The time to peak levels of rasagiline is 0.5 to 0.7hours and steady-state peak levels are on average 8.5ng/mL. At steady-state, the time to peak levels of rasagiline's major metabolite (R)-1-aminoindan is 2.1hours, its peak levels are 2.6ng/mL, and its area-under-the-curve levels are 10.1ng/h/mL. 
Taking rasagiline with food (as a high-fat meal) increases peak levels by approximately 60% and area-under-the-curve levels by approximately 20%, whereas time to peak levels is unchanged. Because exposure to rasagiline is not substantially modified, rasagiline can be taken with or without food. Distribution The mean volume of distribution of rasagiline is 87L or 182 to 243L depending on the source. It readily crosses the blood–brain barrier and enters the central nervous system. The plasma protein binding of rasagiline is 60 to 70% or 88 to 94% depending on the source. In the case of the latter range, 61 to 63% of binding was to albumin. Metabolism Rasagiline is extensively metabolized in the liver. It is metabolized primarily by hepatic N-dealkylation via the cytochrome P450 enzyme CYP1A2 which forms the major metabolite (R)-1-aminoindan. It is also metabolized by hydroxylation via cytochrome P450 enzymes to form 3-hydroxy-N-propargyl-1-aminoindan (3-OH-PAI) and 3-hydroxy-1-aminoindan (3-OH-AI). Rasagiline and its metabolites also undergo conjugation via glucuronidation. Use of rasagiline should be monitored carefully in people taking other drugs that inhibit or induce CYP1A2. Variants in CYP1A2 have been found to modify exposure to rasagiline in some studies but not others. Tobacco smoking, a known inhibitor of CYP1A2, did not modify rasagiline exposure. Drug transporters may be more important in influencing the pharmacokinetics of rasagiline than metabolizing enzymes. Exposure to rasagiline is increased in people with hepatic impairment. In those with mild hepatic impairment, peak levels of rasagiline are increased by 38% and area-under-the-curve levels by 80%, whereas in people with moderate hepatic impairment, peak levels are increased by 83% and area-under-the-curve levels by 568%. As a result, the dosage of rasagiline should be halved to 0.5mg/day in people with mild hepatic impairment and rasagiline is considered to be contraindicated in people with moderate to severe hepatic impairment. Elimination Rasagiline is eliminated primarily in urine (62%) and to a much lesser extent in feces (7%). Rasagiline is excreted unchanged in urine at an amount of less than 1%. Hence, it is almost completely metabolized prior to excretion. The elimination half-life of rasagiline is 1.34hours. At steady-state, its half-life is 3hours. As rasagiline acts as an irreversible inhibitor of MAO-B, its actions and duration of effect are not dependent on its half-life or sustained concentrations in the body. The oral clearance of rasagiline is 94.3L/h and is similar to normal liver blood flow (90L/h). This indicates that non-hepatic mechanisms are not significantly involved in the elimination of rasagiline. Moderate renal impairment did not modify exposure to rasagiline, whereas that of (R)-1-aminoindan was increased by 1.5-fold. Since (R)-1-aminoindan is not an MAO inhibitor, mild to moderate renal impairment does not require dosage adjustment of rasagiline. No data are available in the case of severe or end-stage renal impairment. Chemistry Rasagiline, also known as (R)-N-propargyl-1-aminoindan and by its former developmental code name TVP-1012, is a secondary cyclic benzylamine propargylamine. It is the R(+)-enantiomer of the chiral racemic compound AGN-1135 (N-propargyl-1-aminoindan), whereas the S(–)-enantiomer is TVP-1022 ((S)-N-propargyl-1-aminoindan). Rasagiline is a potent and selective MAO-B inhibitor, whereas TVP-1022 is a very weak and poorly selective MAO inhibitor. 
Both the hydrochloride and mesylate salts of rasagiline were studied and were found to have similar pharmacological, pharmacokinetic, and toxicological profiles. However, the mesylate salt of rasagiline was ultimately selected for use as a pharmaceutical drug due to favorable chemical stability. The propargyl moiety is essential in the pharmacodynamics of rasagiline. It binds covalently and irreversibly with the flavin adenine dinucleotide (FAD) moiety of the MAO enzyme. The selectivity of rasagiline for MAO-B over MAO-A depends on the maintenance of a distance of no more than two carbon atoms between the aromatic ring and the N-propargyl group. The propargyl group of rasagiline is also essential for its neuroprotective and antiapoptotic actions, which are independent of its MAO inhibition. Rasagiline is closely structurally related to selegiline (R(–)-N-propargylmethamphetamine). However, in contrast to selegiline, rasagiline is not a substituted amphetamine and is instead a 1-aminoindan derivative. The chemical structures of the amphetamines and aminoindans are very similar. However, whereas selegiline metabolizes into levomethamphetamine and levoamphetamine and can produce amphetamine-like effects, rasagiline does not do so. Instead, it metabolizes into (R)-1-aminoindan (TVP-136) and has no such actions. SU-11739 (AGN-1133; N-methyl-N-propargyl-1-aminoindan), the N-methylated analogue of rasagiline, is also an MAO-B-preferring MAOI. However, it is less selective for inhibition of MAO-B over MAO-A than rasagiline. Another structurally related selective MAO-B inhibitor, ladostigil (N-propargyl-(3R)-aminoindan-5-yl-N-propylcarbamate; TV-3326), was developed from structural modification of rasagiline and additionally acts as an acetylcholinesterase inhibitor due to its carbamate moiety. Rasagiline and its metabolite (R)-1-aminoindan are structurally related to 2-aminoindan and derivatives like 5,6-methylenedioxy-2-aminoindane (MDAI), 5,6-methylenedioxy-N-methyl-2-aminoindane (MDMAI), and 5-iodo-2-aminoindane (5-IAI). History AGN-1135, the racemic form of the drug, was invented by Aspro Nicholas in the early 1970s. Moussa B. H. Youdim identified it as a potential drug for Parkinson's disease, and working with collaborators at Technion – Israel Institute of Technology in Israel and the drug company Teva Pharmaceuticals, identified the R-isomer as the active form of the drug. Teva brought it to market in partnership with Lundbeck in the European Union and Eisai in the United States and elsewhere. Prior to the discovery of rasagiline, a closely related analogue called SU-11739 (AGN-1133; J-508; N-methyl-N-propargyl-1-aminoindan) was patented in 1965. At first, the N-methyl was necessary for the agent to be considered a ring cyclized analogue of pargyline with about 20 times the potency. However, the N-methyl compound was a non-selective MAOI. In addition, SU-11739 has been reported to have strong catecholamine-releasing actions. Racemic rasagiline was discovered and patented by Aspro Nicholas in the 1970s as a drug candidate for treatment of hypertension. Moussa B. H. Youdim was involved in developing selegiline as a drug for Parkinson's, in collaboration with Peter Reiderer. He called the compound AGN 1135. 
In 1996 Youdim, in collaboration with scientists from Technion and the US National Institutes of Health, and using compounds developed with Teva Pharmaceuticals, published a paper in which the authors wrote that they were inspired by the racemic nature of deprenyl and the greater activity of one of its stereoisomers, L-deprenyl (which became selegiline), to explore the qualities of the isomers of the Aspro compound. They found that the R-isomer had almost all the activity; this is the compound that became rasagiline. They called the mesylate salt of the R-isomer TVP-1012 and the hydrochloride salt TVP-101. Teva and Technion filed patent applications for this racemically pure compound, methods to make it, and methods to use it to treat Parkinson's disease and other disorders, and Technion eventually assigned its rights to Teva. Teva began development of rasagiline and by 1999 was in Phase III trials; it entered into a partnership with Lundbeck in which Lundbeck agreed to share the costs and obtained the joint right to market the drug in the European Union. In 2003, Teva partnered with Eisai, giving Eisai the right to jointly market the drug for Parkinson's in the US, and to co-develop and co-market the drug for Alzheimer's disease and other neurological diseases. It was approved by the European Medicines Agency for Parkinson's in 2005 and in the United States in 2006. Following its approval, rasagiline was described by some authors as a "me-too drug" that offered nothing new in terms of effectiveness and tolerability compared to selegiline. However, others have contended that rasagiline shows significant differences from and improvements over selegiline, like its lack of amphetamine metabolites and associated monoamine releasing agent effects, which may improve tolerability and safety. Conversely, others have maintained that rasagiline may be less efficacious than selegiline due to its lack of catecholaminergic activity enhancer actions. Society and culture Names Rasagiline is the generic name of the drug and its international nonproprietary name (INN). It is also known by its former developmental code name TVP-1012. Rasagiline is marketed under the brand name Azilect, among others. Generic forms Lower-cost generic versions of rasagiline are available. Research Neurodegenerative diseases Rasagiline was under development for the treatment of Alzheimer's disease. However, development was discontinued. Rasagiline was tested for efficacy in people with multiple system atrophy in a large randomized, placebo-controlled, double-blind disease-modification trial; the drug failed. Rasagiline has been reported to improve symptoms in people with freezing of gait. Rasagiline has been studied in the treatment of amyotrophic lateral sclerosis (ALS; Lou Gehrig's disease). Psychiatric disorders Rasagiline has been described as an emerging potential antidepressant. MAO-B inhibitors have been found to reduce depressive symptoms in people with Parkinson's disease with a small effect size. However, rasagiline does not appear to have been studied in the treatment of depression in people without Parkinson's disease, and it has not been developed or approved for the treatment of depression. In an animal study, selegiline was effective in models of antidepressant-like activity, whereas rasagiline was ineffective. The antidepressant effects of selegiline in animals appear to be independent of monoamine oxidase inhibition and may be related to its catecholaminergic activity enhancer (CAE) activity, which rasagiline lacks. 
Rasagiline has not been studied in the treatment of psychostimulant addiction as of 2015. Other conditions Rasagiline has been reported to improve restless legs syndrome (RLS). Notes References 1-Aminoindanes Antidepressants Antiparkinsonian agents Drugs with unknown mechanisms of action Enantiopure drugs Monoamine oxidase inhibitors Monoaminergic activity enhancer antagonists Neuroprotective agents Propargyl compounds
Rasagiline
[ "Chemistry" ]
6,997
[ "Stereochemistry", "Enantiopure drugs" ]
5,703,563
https://en.wikipedia.org/wiki/Booster%20dose
A booster dose is an extra administration of a vaccine after an earlier (primer) dose. After initial immunization, a booster provides a re-exposure to the immunizing antigen. It is intended to increase immunity against that antigen back to protective levels after memory against that antigen has declined through time. For example, tetanus shot boosters are often recommended every 10 years, by which point memory cells specific against tetanus lose their function or undergo apoptosis. The need for a booster dose following a primary vaccination is evaluated in several ways. One way is to measure the level of antibodies specific against a disease a few years after the primary dose is given. Anamnestic response, the rapid production of antibodies after a stimulus of an antigen, is a typical way to measure the need for a booster dose of a certain vaccine. If the anamnestic response is high after receiving a primary vaccine many years ago, there is most likely little to no need for a booster dose. People can also measure the active B and T cell activity against that antigen after a certain amount of time that the primary vaccine was administered or determine the prevalence of the disease in vaccinated populations. If a patient receives a booster dose but already has a high level of antibody, then a reaction called an Arthus reaction could develop, a localized form of Type III hypersensitivity induced by high levels of IgG antibodies causing inflammation. The inflammation is often self-resolved over the course of a few days but could be avoided altogether by increasing the length of time between the primary vaccine and the booster dose. It is not yet fully clear why some vaccines such as hepatitis A and B are effective for life, and some such as tetanus need boosters. The prevailing theory is that if the immune system responds to a primary vaccine rapidly, the body does not have time to sufficiently develop immunological memory against the disease, and memory cells will not persist in high numbers for the lifetime of the human. After a primary response of the immune system against a vaccination, memory T helper cells and B cells persist at a fairly constant level in germinal centers, undergoing cell division at a slow to nonexistent rate. While these cells are long-lived, they do not typically undergo mitosis, and eventually, the rate of loss of these cells will be greater than the rate of gain. In these cases, a booster dose is required to "boost" the memory B and T cell count back up again. Polio booster doses In the case of the polio vaccine, the memory B and T cells produced in response to the vaccine persist only six months after consumption of the oral polio vaccine (OPV). Booster doses of the OPV were found ineffective, as they, too, resulted in decreased immune response every six months after consumption. However, when the inactive polio vaccine (IPV) was used as a booster dose, it was found to increase the test subjects' antibody count by 39–75%. Often in developing countries, OPV is used over IPV, because IPV is expensive and hard to transport. Also, IPVs in tropical countries are hard to store due to the climate. However, in places where polio is still present, following up an OPV primary dose with an IPV booster may help eradicate the disease. In the United States, only the IPV is used. In rare cases (about 1 in 2.7 million), the OPV has reverted to a strengthened form of the illness, and caused paralysis in the recipients of the vaccine. 
For this reason, the US only administers IPV, which is given in four increments (three within the first year and a half after birth, then one booster dose between the ages of 4 and 6). Hepatitis B booster doses The need for a booster dose for hepatitis B has long been debated. Studies in the early 2000s that measured memory cell counts of vaccinated individuals showed that fully vaccinated adults (those that received all three rounds of vaccination at the suggested time sequence during infancy) do not require a booster dose later in life. Both the United States Centers for Disease Control (CDC) and the Canadian National Advisory Committee on Immunization (NACI) supported these recommendations by publicly advising against the need for a hepatitis B booster dose. However, immunocompromised individuals are advised to seek further screening to evaluate their immune response to hepatitis B, and potentially receive a booster dose if their B and T cell counts against hepatitis B decrease below a certain level. Tetanus booster dose Immunity against tetanus requires a booster dose every 10 years, or in some circumstances immediately following infection with tetanus. Td is the name of the booster for adults, and differs from the primary dose in that it does not include immunization against pertussis (whooping cough). While the US recommends a booster for tetanus every 10 years, other countries, such as the UK, suggest just two booster shots within the first 20 years of life, but no booster after the third decade. Neonatal tetanus is a concern during pregnancy for some women, and mothers are recommended a booster against tetanus during their pregnancy in order to protect their child against the disease. Whooping cough booster dose Whooping cough, also called pertussis, is a contagious disease that affects the respiratory tract. The infection is caused by a bacterium that sticks to the cilia of the upper respiratory tract and can be very contagious. Pertussis can be especially dangerous for babies, whose immune systems are not yet fully developed, and can develop into pneumonia or result in the baby having trouble breathing. DTaP is the primary vaccine given against pertussis, and children typically receive five doses before the age of seven. Tdap is the booster for pertussis, and is advised in the US to be administered every ten years, and during every pregnancy for mothers. Tdap can also be used as a booster against tetanus. Upon its invention in the 1950s, the pertussis vaccine was whole-cell (it contained the entire inactivated bacterium), and could cause fever and local reactions in people who received the vaccine. In the 1990s, people in the US started using acellular vaccines (containing small portions of the bacterium), which had fewer side effects but were also less effective at triggering an immunological memory response, due to the antigen presented to the immune system being less complete. This less effective but safer vaccine led to the development of the booster Tdap. COVID-19 booster dose Protection against severe disease remained high at 6 months after vaccination, despite lower efficacy in protection from COVID-19 infection. 
An international panel of scientists affiliated with the FDA, WHO, and several universities and healthcare institutions, concluded that there was insufficient data to determine the long-term protective benefits of a booster dose (only short-term protective effects were observed), and recommended instead that existing vaccine stock would save most lives if made available to people who had not received any vaccine. Israel first rolled out booster doses of the Pfizer–BioNTech COVID-19 vaccine for at-risk populations in July 2021. In August this was expanded for the rest of the Israeli population. Effectiveness against severe disease in Israel was lower among people vaccinated either in January or April than in those vaccinated in February or March. During the first 3 weeks of August 2021, just after booster doses were approved and began to be deployed widely, a short-term protective effect of a third dose (relative to two doses) was suggested. In the United States, the CDC rolled out booster shots to immunocompromised individuals during the summer of 2021 and originally planned to allow adults to receive a third dose of the COVID-19 vaccine starting in September 2021, with individuals becoming eligible starting 8 months after their second dose (for those who received a two-dose vaccine). After further data about long-term vaccine efficacy and the delta variant came to light, the CDC ultimately made recipients eligible for boosters 6 months after the second shot, in late October. Subsequently, vaccinations in the country surged. In September 2021, the UK's Joint Committee on Vaccination and Immunisation recommended a booster shot for the over-50s and at-risk groups, preferably the Pfizer–BioNTech vaccine, meaning about 30 million adults should receive a third dose. The UK's booster rollout was extended to over-40s in November 2021. Russia's Sputnik V COVID-19 vaccine, using similar technology to AstraZeneca's COVID-19 vaccine, in November 2021 introduced a COVID-19 booster called Sputnik Light, which according to a study by the Gamaleya Research Institute of Epidemiology and Microbiology has an effectiveness of 70% against the delta variant. It can be combined with all other vaccines and may be more effective with mRNA vaccines than mRNA boosters. Booster shots can also be used after infections. In this regard, the UK's National Health Service recommends people to wait 28 days after testing positive for COVID-19 before getting their booster shots. Evidence shows that getting a vaccine after recovery from a COVID-19 infection provides added protection to the immune system. References Vaccination
Booster dose
[ "Biology" ]
1,917
[ "Vaccination" ]
5,703,575
https://en.wikipedia.org/wiki/Virtual%20screening
Virtual screening (VS) is a computational technique used in drug discovery to search libraries of small molecules in order to identify those structures which are most likely to bind to a drug target, typically a protein receptor or enzyme. Virtual screening has been defined as "automatically evaluating very large libraries of compounds" using computer programs. As this definition suggests, VS has largely been a numbers game focusing on how the enormous chemical space of over 10^60 conceivable compounds can be filtered to a manageable number that can be synthesized, purchased, and tested. Although searching the entire chemical universe may be a theoretically interesting problem, more practical VS scenarios focus on designing and optimizing targeted combinatorial libraries and enriching libraries of available compounds from in-house compound repositories or vendor offerings. As the accuracy of the method has increased, virtual screening has become an integral part of the drug discovery process. Virtual screening can be used to select in-house database compounds for screening, to choose compounds that can be purchased externally, and to choose which compound should be synthesized next. Methods There are two broad categories of screening techniques: ligand-based and structure-based. The remainder of this article describes these two approaches and how they are applied. Ligand-based methods Given a set of structurally diverse ligands that bind to a receptor, a model of the receptor can be built by exploiting the collective information contained in such a set of ligands. Different computational techniques explore the structural, electronic, molecular shape, and physicochemical similarities of different ligands that could imply their mode of action against a specific molecular receptor or cell lines. A candidate ligand can then be compared to the pharmacophore model to determine whether it is compatible with it and therefore likely to bind. Different 2D chemical similarity analysis methods have been used to scan databases to find active ligands. Another popular approach used in ligand-based virtual screening consists of searching for molecules with a shape similar to that of known actives, as such molecules will fit the target's binding site and hence will be likely to bind the target. There are a number of prospective applications of this class of techniques in the literature. Pharmacophoric extensions of these 3D methods are also freely available as webservers. Shape-based virtual screening has also gained significant popularity. Structure-based methods The structure-based virtual screening approach includes different computational techniques that consider the structure of the receptor that is the molecular target of the investigated active ligands. Some of these techniques include molecular docking, structure-based pharmacophore prediction, and molecular dynamics simulations. Molecular docking is the most used structure-based technique, and it applies a scoring function to estimate the fitness of each ligand against the binding site of the macromolecular receptor, helping to choose the ligands with the highest affinity. Currently, there are some webservers oriented to prospective virtual screening. Hybrid methods Hybrid methods that rely on structural and ligand similarity were also developed to overcome the limitations of traditional VLS approaches. 
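Both the ligand-based 2D similarity searches just described and the hybrid approaches discussed next commonly boil down to comparing molecular fingerprints with the Tanimoto coefficient. The following minimal sketch assumes the open-source RDKit toolkit; the SMILES strings and the ranking loop are arbitrary illustrations, not values or code taken from this article.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical query (a known active) and a tiny library to screen.
query_smiles = "CC(=O)Oc1ccccc1C(=O)O"    # aspirin, used as a stand-in query
library_smiles = [
    "CC(=O)Nc1ccc(O)cc1",                 # paracetamol
    "OC(=O)c1ccccc1O",                    # salicylic acid
    "c1ccccc1",                           # benzene
]

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    # 2D Morgan (ECFP-like) fingerprint folded to 2048 bits.
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

query_fp = fingerprint(query_smiles)
ranked = []
for smiles in library_smiles:
    similarity = DataStructs.TanimotoSimilarity(query_fp, fingerprint(smiles))
    ranked.append((similarity, smiles))

# Rank the library by 2D similarity to the query, highest first; a real
# screen would keep only compounds above some chosen cutoff.
for similarity, smiles in sorted(ranked, reverse=True):
    print("%.2f  %s" % (similarity, smiles))

In practice the same ranking loop is simply run over millions of library fingerprints, which is one reason ligand-based screens of this kind can often be completed on modest hardware.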
These methodologies utilize evolution-based ligand-binding information to predict small-molecule binders and can employ both global structural similarity and pocket similarity. A global structural similarity based approach employs either an experimental structure or a predicted protein model to find structural similarity with proteins in the PDB holo-template library. Upon detecting significant structural similarity, a 2D fingerprint based Tanimoto coefficient metric is applied to screen for small molecules that are similar to ligands extracted from the selected holo PDB templates. The predictions from this method have been experimentally assessed and show good enrichment in identifying active small molecules. The method specified above depends on global structural similarity and is not capable of a priori selecting a particular ligand-binding site in the protein of interest. Further, since the methods rely on 2D similarity assessment for ligands, they are not capable of recognizing stereochemical similarity of small molecules that are substantially different but demonstrate geometric shape similarity. To address these concerns, a new pocket-centric approach, PoLi, capable of targeting specific binding pockets in holo-protein templates, was developed and experimentally assessed. Computing Infrastructure The computation of pair-wise interactions between atoms, which is a prerequisite for the operation of many virtual screening programs, scales as O(N^2), where N is the number of atoms in the system. Due to the quadratic scaling, the computational costs increase quickly. Ligand-based Approach Ligand-based methods typically require a fraction of a second for a single structure comparison operation. Sometimes a single CPU is enough to perform a large screening within hours. However, several comparisons can be made in parallel in order to expedite the processing of a large database of compounds. Structure-based Approach The size of the task requires a parallel computing infrastructure, such as a cluster of Linux systems, running a batch queue processor to handle the work, such as Sun Grid Engine or Torque PBS. A means of handling the input from large compound libraries is needed. This requires a form of compound database that can be queried by the parallel cluster, delivering compounds in parallel to the various compute nodes. Commercial database engines may be too ponderous, and a high speed indexing engine, such as Berkeley DB, may be a better choice. Furthermore, it may not be efficient to run one comparison per job, because the ramp-up time of the cluster nodes could easily outstrip the amount of useful work. To work around this, it is necessary to process batches of compounds in each cluster job, aggregating the results into some kind of log file. A secondary process, to mine the log files and extract high scoring candidates, can then be run after the whole experiment has been run. Accuracy The aim of virtual screening is to identify molecules of novel chemical structure that bind to the macromolecular target of interest. Thus, success of a virtual screen is defined in terms of finding interesting new scaffolds rather than the total number of hits. Interpretations of virtual screening accuracy should, therefore, be considered with caution. Low hit rates of interesting scaffolds are clearly preferable over high hit rates of already known scaffolds. Most tests of virtual screening studies in the literature are retrospective. 
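Retrospective tests of this kind are usually summarized with an enrichment factor: how much more concentrated the known actives are at the top of the ranked list than they would be under random selection. The self-contained Python sketch below shows the calculation on made-up scores and labels; the data, the score model, and the 1% cutoff are illustrative assumptions only.

import random

def enrichment_factor(scores, labels, fraction=0.01):
    # Enrichment factor at a given fraction of the ranked database.
    # scores: predicted score per compound (higher = predicted more active)
    # labels: 1 for a known active, 0 for a decoy/assumed inactive
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    n_top = max(1, int(round(fraction * len(ranked))))
    actives_in_top = sum(label for _, label in ranked[:n_top])
    hit_rate_top = actives_in_top / n_top
    hit_rate_overall = sum(labels) / len(labels)
    return hit_rate_top / hit_rate_overall

# Made-up benchmark: 1,000 compounds, 20 known actives, the rest decoys.
random.seed(0)
labels = [1] * 20 + [0] * 980
# Pretend the screening method tends to score actives higher than decoys.
scores = [random.gauss(1.0, 1.0) if y else random.gauss(0.0, 1.0) for y in labels]

print("Enrichment factor at 1%%: %.1f" % enrichment_factor(scores, labels))

An enrichment factor of 1 means the screen does no better than random picking, while the theoretical maximum for this made-up example is 50 (the entire top 1% being actives when only 2% of the library is active).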
In these studies, the performance of a VS technique is measured by its ability to retrieve a small set of previously known molecules with affinity to the target of interest (active molecules, or just actives) from a library containing a much higher proportion of assumed inactives or decoys. There are several distinct ways to select decoys by matching the properties of the corresponding active molecules, and more recently decoys have also been selected in a property-unmatched manner. The actual impact of decoy selection, either for training or testing purposes, has also been discussed. By contrast, in prospective applications of virtual screening, the resulting hits are subjected to experimental confirmation (e.g., IC50 measurements). There is consensus that retrospective benchmarks are not good predictors of prospective performance, and consequently only prospective studies constitute conclusive proof of the suitability of a technique for a particular target. Application to drug discovery Virtual screening is a very useful technique for identifying hit molecules as a starting point for medicinal chemistry. As the virtual screening approach has become a more vital and substantial technique within the medicinal chemistry industry, its use has increased rapidly. Ligand-based methods Without knowing the structure of the receptor, these methods try to predict how the ligands will bind to it. Using pharmacophore features, donors and acceptors are identified for each ligand; matching features are then overlaid, although it is unlikely that there is a single correct solution. Pharmacophore models This technique is used when merging the results of searches that use different reference compounds, the same descriptors, and the same coefficient, but different active compounds. This technique is beneficial because it is more efficient than just using a single reference structure, and it gives the most accurate performance when the actives are diverse. A pharmacophore is an ensemble of steric and electronic features that are needed for an optimal supramolecular interaction or interactions with a biological target structure in order to precipitate its biological response. A representative set of actives is chosen, and most methods will look for similar bindings. It is preferred to have multiple rigid molecules, and the ligands should be diversified; in other words, they should have different features that do not occur during the binding phase. Shape-Based Virtual Screening Shape-based molecular similarity approaches have been established as important and popular virtual screening techniques. At present, the highly optimized screening platform ROCS (Rapid Overlay of Chemical Structures) is considered the de facto industry standard for shape-based, ligand-centric virtual screening. It uses a Gaussian function to define molecular volumes of small organic molecules (a simplified numerical sketch of this Gaussian-overlap idea is given below). The selection of the query conformation is less important, which makes shape-based screening well suited to ligand-based modeling: the availability of a bioactive conformation for the query is not the limiting factor for screening; rather, it is the selection of the query compound(s) that is decisive for screening performance. Other shape-based molecular similarity methods such as Autodock-SS have also been developed. Field-Based Virtual Screening As an improvement on shape-based similarity methods, field-based methods try to take into account all the fields that influence a ligand-receptor interaction while remaining agnostic of the chemical structure used as a query. 
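As referenced above, the Gaussian-volume idea behind shape-based screening can be illustrated with a small, self-contained calculation. The sketch below is a deliberately simplified toy (one fixed Gaussian width per atom, no alignment or optimization step, no distinction between element types); production tools such as ROCS optimize the overlay and use calibrated atomic parameters, so this is not a reimplementation of any particular package.

import math

def gaussian_overlap_volume(coords_a, coords_b, alpha=0.8):
    # Approximate shape-overlap volume of two rigid, pre-aligned atom sets.
    # Each atom is modeled as exp(-alpha * |r - R|^2); the integral of the
    # product of two such Gaussians centered at A and B is
    #   (pi / (2 * alpha))**1.5 * exp(-(alpha / 2) * |A - B|^2).
    # alpha is an assumed width parameter, not a calibrated value.
    prefactor = (math.pi / (2.0 * alpha)) ** 1.5
    total = 0.0
    for ax, ay, az in coords_a:
        for bx, by, bz in coords_b:
            d2 = (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
            total += prefactor * math.exp(-0.5 * alpha * d2)
    return total

def shape_tanimoto(coords_a, coords_b):
    # Tanimoto-style shape similarity: V_AB / (V_AA + V_BB - V_AB).
    v_ab = gaussian_overlap_volume(coords_a, coords_b)
    v_aa = gaussian_overlap_volume(coords_a, coords_a)
    v_bb = gaussian_overlap_volume(coords_b, coords_b)
    return v_ab / (v_aa + v_bb - v_ab)

# Two made-up, pre-aligned three-atom "molecules" (coordinates in angstroms).
mol_a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
mol_b = [(0.2, 0.1, 0.0), (1.6, 0.0, 0.0), (3.1, -0.1, 0.0)]
print("shape Tanimoto: %.2f" % shape_tanimoto(mol_a, mol_b))

A similarity of 1.0 would mean the two shapes overlap perfectly; in a real shape-based screen this kind of score, computed after an optimized overlay, is what the library is ranked by.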
Various other fields are used in these methods, such as electrostatic or hydrophobic fields. Quantitative Structure–Activity Relationship Quantitative structure–activity relationship (QSAR) models are predictive models built from information extracted from a set of known active and known inactive compounds. SARs (structure–activity relationships) treat the data qualitatively and can be used with structural classes and more than one binding mode. The models are used to prioritize compounds for lead discovery. Machine learning algorithms Machine learning algorithms have been widely used in virtual screening approaches. Supervised learning techniques use training and test datasets composed of known active and known inactive compounds. Different ML algorithms have been applied with success in virtual screening strategies, such as recursive partitioning, support vector machines, random forest, k-nearest neighbors and neural networks. These models estimate the probability that a compound is active and then rank each compound based on that probability. Substructural analysis in Machine Learning The first machine learning model used on large datasets was substructural analysis, created in 1973. Each fragment substructure makes a continuous contribution to an activity of a specific type. Substructural analysis is a method that overcomes the difficulty of massive dimensionality when it comes to analyzing structures in drug design. An efficient substructure analysis is used for structures that have similarities to a multi-level building or tower. Geometry is used for numbering boundary joints for a given structure from the onset towards the climax. When special static condensation and substitution routines are developed, this method has proved to be more productive than previous substructure analysis models. Recursive partitioning Recursive partitioning is a method that creates a decision tree using qualitative data. It works by finding rules that break the classes up with a low error of misclassification, repeating each step until no sensible splits can be found. However, recursive partitioning can have poor prediction ability, potentially creating fine models at the same rate. Structure-based methods: known protein–ligand docking A ligand can be docked into an active site within a protein by using a docking search algorithm and a scoring function in order to identify the most likely binding pose for an individual ligand and to assign it a priority order. See also Grid computing High-throughput screening Docking (molecular) Retro screening Scoring functions ZINC database References Further reading External links VLS3D – list of over 2000 databases, online and standalone in silico tools Bioinformatics Drug discovery Cheminformatics Alternatives to animal testing
Virtual screening
[ "Chemistry", "Engineering", "Biology" ]
2,410
[ "Animal testing", "Biological engineering", "Life sciences industry", "Drug discovery", "Bioinformatics", "Alternatives to animal testing", "Computational chemistry", "nan", "Cheminformatics", "Medicinal chemistry" ]
5,703,638
https://en.wikipedia.org/wiki/BBGKY%20hierarchy
In statistical physics, the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) hierarchy (sometimes called Bogoliubov hierarchy) is a set of equations describing the dynamics of a system of a large number of interacting particles. The equation for an s-particle distribution function (probability density function) in the BBGKY hierarchy includes the (s + 1)-particle distribution function, thus forming a coupled chain of equations. This formal theoretic result is named after Nikolay Bogolyubov, Max Born, Herbert S. Green, John Gamble Kirkwood, and Jacques Yvon. Formulation The evolution of an N-particle system in absence of quantum fluctuations is given by the Liouville equation for the probability density function $f_N = f_N(\mathbf{q}_1, \dots, \mathbf{q}_N, \mathbf{p}_1, \dots, \mathbf{p}_N, t)$ in 6N-dimensional phase space (3 space and 3 momentum coordinates per particle): $$\frac{\partial f_N}{\partial t} + \sum_{i=1}^{N} \frac{\mathbf{p}_i}{m_i} \cdot \frac{\partial f_N}{\partial \mathbf{q}_i} + \sum_{i=1}^{N} \mathbf{F}_i \cdot \frac{\partial f_N}{\partial \mathbf{p}_i} = 0,$$ where $\mathbf{q}_i$ and $\mathbf{p}_i$ are the position and momentum for the $i$-th particle with mass $m_i$, and the net force acting on the $i$-th particle is $$\mathbf{F}_i = -\sum_{j \neq i} \frac{\partial \Phi_{ij}}{\partial \mathbf{q}_i} - \frac{\partial \Phi_i^{\mathrm{ext}}}{\partial \mathbf{q}_i},$$ where $\Phi_{ij} = \Phi(|\mathbf{q}_i - \mathbf{q}_j|)$ is the pair potential for interaction between particles, and $\Phi^{\mathrm{ext}}$ is the external-field potential. By integration over part of the variables, the Liouville equation can be transformed into a chain of equations where the first equation connects the evolution of the one-particle probability density function with the two-particle probability density function, the second equation connects the two-particle probability density function with the three-particle probability density function, and generally the s-th equation connects the s-particle probability density function $f_s$ with the (s + 1)-particle probability density function $f_{s+1}$: $$\frac{\partial f_s}{\partial t} + \sum_{i=1}^{s} \frac{\mathbf{p}_i}{m_i} \cdot \frac{\partial f_s}{\partial \mathbf{q}_i} + \sum_{i=1}^{s} \left( -\frac{\partial \Phi_i^{\mathrm{ext}}}{\partial \mathbf{q}_i} - \sum_{j \neq i}^{s} \frac{\partial \Phi_{ij}}{\partial \mathbf{q}_i} \right) \cdot \frac{\partial f_s}{\partial \mathbf{p}_i} = (N - s) \sum_{i=1}^{s} \int \frac{\partial \Phi_{i\,s+1}}{\partial \mathbf{q}_i} \cdot \frac{\partial f_{s+1}}{\partial \mathbf{p}_i} \, d\mathbf{q}_{s+1} \, d\mathbf{p}_{s+1}.$$ The equation above for the s-particle distribution function is obtained by integration of the Liouville equation over the variables $\mathbf{q}_{s+1}, \dots, \mathbf{q}_N, \mathbf{p}_{s+1}, \dots, \mathbf{p}_N$. The problem with the above equation is that it is not closed. To solve $f_s$, one has to know $f_{s+1}$, which in turn demands solving $f_{s+2}$, and so on all the way back to the full Liouville equation. However, one can solve $f_s$ if $f_{s+1}$ can be modeled. One such case is the Boltzmann equation for $f_1(\mathbf{q}_1, \mathbf{p}_1, t)$, where $f_2$ is modeled based on the molecular chaos hypothesis (Stosszahlansatz). In fact, the term containing $f_2$ in the Boltzmann equation is the collision integral. This limiting process of obtaining the Boltzmann equation from the Liouville equation is known as the Boltzmann–Grad limit. Physical interpretation and applications Schematically, the Liouville equation gives us the time evolution for the whole N-particle system in the form of a vanishing convective derivative of $f_N$ in phase space, which expresses an incompressible flow of the probability density in phase space. We then define the reduced distribution functions incrementally by integrating out another particle's degrees of freedom, $f_s \sim \int f_{s+1} \, d\mathbf{q}_{s+1} \, d\mathbf{p}_{s+1}$. An equation in the BBGKY hierarchy tells us that the time evolution for such an $f_s$ is consequently given by a Liouville-like equation, but with a correction term that represents the force-influence of the $N - s$ suppressed particles. The problem of solving the BBGKY hierarchy of equations is as hard as solving the original Liouville equation, but approximations for the BBGKY hierarchy (which allow truncation of the chain into a finite system of equations) can readily be made. The merit of these equations is that the higher distribution functions $f_{s+2}, f_{s+3}, \dots$ affect the time evolution of $f_s$ only implicitly via $f_{s+1}$. Truncation of the BBGKY chain is a common starting point for many applications of kinetic theory that can be used for derivation of classical or quantum kinetic equations. In particular, truncation at the first equation or the first two equations can be used to derive classical and quantum Boltzmann equations and the first order corrections to the Boltzmann equations. 
Other approximations, such as the assumption that the density probability function depends only on the relative distance between the particles or the assumption of the hydrodynamic regime, can also render the BBGKY chain accessible to solution. Bibliography s-particle distribution functions were introduced in classical statistical mechanics by J. Yvon in 1935. The BBGKY hierarchy of equations for s-particle distribution functions was written out and applied to the derivation of kinetic equations by Bogoliubov in the article received in July 1945 and published in 1946 in Russian and in English. The kinetic transport theory was considered by Kirkwood in the article received in October 1945 and published in March 1946, and in the subsequent articles. The first article by Born and Green considered a general kinetic theory of liquids and was received in February 1946 and published on 31 December 1946. See also Fokker–Planck equation Vlasov equation Cluster-expansion approach References Statistical mechanics Non-equilibrium thermodynamics Max Born
BBGKY hierarchy
[ "Physics", "Mathematics" ]
888
[ "Non-equilibrium thermodynamics", "Statistical mechanics", "Dynamical systems" ]
5,704,580
https://en.wikipedia.org/wiki/TI-Nspire%20series
The TI-Nspire is a graphing calculator line made by Texas Instruments, with the first version released on 25 September 2007. The calculators feature a non-QWERTY keyboard and a different key-by-key layout than Texas Instruments's previous flagship calculators such as the TI-89 series. Development The original TI-Nspire was developed out of the TI PLT SHH1 prototype calculator, the TI-92 series of calculators released in 1995, and the TI-89 series of calculators released in 1998. In 2011, Texas Instruments released the CX line of their TI-Nspire calculators which effectively replaced the previous generation. The updates included improvements to the original's keyboard layout, an addition of a rechargeable lithium-ion battery, 3D graphing capabilities and reduced form factor. TI got rid of the removable keypad with this generation and therefore, the TI-84 compatibility mode. In 2019, the TI-Nspire CX II was added, with a boost in clock speed and changes to the existing operating system. Versions The TI-Nspire series uses a different operating system compared to Texas Instruments' other calculators. The TI-Nspire includes a file manager that lets users create and edit documents. As a result of being developed from PDA-esque devices, the TI-Nspire retains many of the same functional similarities to a computer. TI-Nspire The standard TI-Nspire calculator is comparable to the TI-84 Plus in features and functionality. It features a TI-84 mode by way of a replaceable snap-in keypad and contains a TI-84 Plus emulator. The likely target of this is secondary schools that make use of the TI-84 Plus currently or have textbooks that cover the TI-83 (Plus) and TI-84 Plus lines, and to allow them to transition to the TI-Nspire line more easily. The TI-Nspire started development in 2004. It uses a proprietary SoC of the ARM9 variant for its CPU. The TI-Nspire and TI-Nspire CAS (Computer algebra system) calculators have 32 MB of NAND Flash, 32 MB of SDRAM, and 512 KB of NOR Flash. However, only 20 MB and 16 MB are user-accessible respectively. The TI-Nspire released in two models; a numeric and CAS version. The numeric is similar in features to the TI-84, except with a bigger and higher resolution screen and a full keyboard. The feature that the numeric lacks is the ability to solve algebraic equations such as indefinite integrals and derivatives. To fill in the gap of needing an algebraic calculator, Texas Instruments introduced the second model with the name: TI-Nspire CAS. The CAS is designed for college and university students, giving them the feature of calculating many algebraic equations like the Voyage 200 and TI-89 (which the TI-Nspire was intended to replace). However, the TI-Nspire does lack part of the ability of programming and installing additional apps that the previous models had, although a limited version of TI-BASIC is supported, along with Lua in later versions. C and assembly are only possible by Ndless. Because the TI-Nspire lacks a QWERTY keyboard, it is acceptable for use on the PSAT, SAT, SAT II, ACT, AP, and IB Exams. TI-Nspire CAS The TI-Nspire CAS calculator is capable of displaying and evaluating values symbolically, not just as floating-point numbers. It includes algebraic functions such as a symbolic differential equation solver: deSolve(...), the complex eigenvectors of a matrix: eigVc(...), as well as calculus based functions, including limits, derivatives, and integrals. 
For this reason, the TI-Nspire CAS is more comparable to the TI-89 Titanium and Voyage 200 than to other calculators. Unlike the TI-Nspire, it is not compatible with the snap-in TI-84 Plus keypad. It is accepted in the SAT and AP exams (without a QWERTY keyboard) but not in the ACT, IB or British GCSE and A level. The body color is grey. TI-Nspire Touchpad On 8 March 2010, Texas Instruments announced new models of the TI-Nspire Touchpad and TI-Nspire CAS Touchpad graphing calculators. In the United States the new calculator was listed on the TI website as a complement to the TI-Nspire with Clickpad, though it was introduced as a successor to the previous model in other countries. The calculators were released alongside the OS 2.0 update, which featured a number of updates to the user interface and new functions. The keyboards on the touchpad keypads featured a different and less crowded key layout along with the touchpad, which is used for navigation. The touchpad keypads were also compatible with older calculators that are running OS 2.0 or newer. New calculators that were shipped with touchpad keypads supported an optional rechargeable battery. The second generation was available in two models, the TI-Nspire Touchpad and TI-Nspire CAS Touchpad; each model has maintained the color of itself, with the base model being white and black while the CAS is black and gray. To reduce theft of school-owned TI-Nspire calculators, Texas Instruments also introduced the EZ-Spot Teacher Packs with a bright, easy-to-spot, "school bus yellow" frame and slide case. The hardware of both versions are the same, with the only differences being cosmetic. The TI-Nspire calculators that were released after the touchpad TI-Nspires also have EZ-Spot versions. TI-Nspire CX and TI-Nspire CX CAS In 2011, the TI-Nspire CX and CX CAS were announced as updates to TI-Nspire series. They have a thinner design, with a thickness of 1.57 cm (almost half of the TI-89), a 1,200 mA·h (1,060 mAh before 2013) rechargeable battery (wall adapter is included in the American retail package), a 320 by 240 pixel full color backlit display (3.2" diagonal), and OS 3.0 which includes features such as 3D graphing. The CX series were released in the same time frame as the Casio Prizm (fx-CG10/20), Casio's color screen graphing calculator with similar features. The TI-Nspire CX series differ from all previous TI graphing calculator models in that the CX series are the first to use a rechargeable 1,060 mAh lithium-ion battery (upgraded to 1,200 mAh in the 2013 revision). The device is charged via a USB cable. TI claims that the battery requires four hours to charge, that a full charge powers the device for up to two weeks under normal daily use, and that the battery should last up to 3 years before it requires replacement. The battery is user-replaceable. With the exception of interchangeable TI-84 keypads, the CX series retain all features of the previous TI-Nspire models. The colors of the calculator are still the same as those of the TI-Nspire models; the CX is white and dark blue, while the CX CAS is gray and black. The external connectors have changed slightly. The mini-USB port, located at the center on the top of the TI-Nspire series, has moved to the right on the top on the CX series. On the CX series, TI added a second port immediately left of the mini-USB port, for a new wireless module. 
The new wireless TI-Nspire Navigator adapter, which allows teachers to monitor students and send files, is not compatible with the previous TI-Nspire models. The third port, located at the bottom of the handheld, is for the TI Charging Dock and Lab Cradle. The keypad layout is very similar to that of the TI-Nspire Touchpad. Both models have 100 MB of user memory and 64 MB of RAM. The retail package comes in a plastic blister case and does not have the full manual, while the teacher's edition comes in a box with a TI-Nspire CX poster for classrooms and the full manual (in English and French in the US). Both devices ship with the student/teacher software for Windows/Mac OS X. According to Texas Instruments, the CX is accepted in SAT, IB, AP, ACT and British GCSE and A level exams. The CX CAS is only accepted on SAT and AP. Chinese market Four models aimed at the Chinese market were launched, with specialized features. All four models have Chinese-labeled keyboards. The CX-C and CX-C CAS models are similar to the CX and CX CAS, but include a concise Chinese-English dictionary. The CM-C and CM-C CAS are cheaper and feature a more streamlined design, but have only 32 MB of RAM and no port for the wireless module. The systems of the Chinese versions are not interchangeable with those of the international models. TI-Nspire CX II and TI-Nspire CX II CAS In 2019, Texas Instruments introduced the TI-Nspire CX II and TI-Nspire CX II CAS. They feature a slightly different operating system with several enhancements and slightly improved hardware, including Python integration. The non-CAS version lacks the exact math mode which is included in the CAS version as well as all the models dedicated to the European/China market (T and C versions). European market Like China, the continent of Europe also has models aimed at its market. These calculators include a "-T" after the CX. The CX II-T and CX II-T CAS both have different body color designs from their North American counterparts. One of the main feature differences in the European versions is the inclusion of an exact math engine in both the CAS and the non-CAS version. European models also omit the WiFi adapter port from the top of the calculator. Software Texas Instruments offers several different versions of software for their calculators. They offer CAS and non-CAS versions of their student and teacher software. This software allows users to share results with classmates and teachers and gives the user an emulated version of the TI-Nspire. TI also offers computer link software for connecting the handheld to a computer to transfer documents. The software allows for the syncing of documents to and from the calculator or computer. This software requires a license in order to be used. Programming languages Besides the TI-BASIC functionality, the Nspire calculators provide the ability to run scripts written in two additional programming languages with the standard TI firmware. With the release of OS 3.0, the Lua scripting language is supported, allowing 3rd party programs to be run without the need of exploits. There are currently more than 100 third-party programs and functions for the Nspire that introduce new functionality, like Laplace transforms, Fourier transforms, and 3rd and 4th degree differential equations, which are not included by default. The Lua version is 5.1 as of OS version 5.2 (September 2020). 
Lab Cradle

The TI-Nspire Lab Cradle belongs to Texas Instruments' Calculator-Based Laboratory (CBL) line of portable data collection devices for science classes; the original CBL system was introduced in 1994 and replaced in 1999 by the CBL 2.

The TI-Nspire Lab Cradle has three analog and two digital inputs with a sampling rate of up to 100,000 readings per second, and 32 MB of storage space for sensor data. The Lab Cradle allows the TI-Nspire series to communicate with the older Calculator-Based Laboratory systems that previous TI calculators used (TI-73 series, TI-82, TI-83 series, TI-85, and TI-86). It uses the same rechargeable battery as the TI-Nspire and supports three charging options: a wall adapter, a USB cable connected to a computer, and the TI-Nspire Cradle Charging Bay. The Lab Cradle is marketed by Texas Instruments and developed as part of an ongoing business venture between TI and Vernier Software & Technology of Portland, Oregon.

Navigator system

The Navigator system allows teachers to connect multiple TI-Nspire calculators to a computer through the TI-Nspire Access Point and TI-Nspire Navigator Wireless Cradles. The system includes the TI-Nspire Cradle Charging Bay and a main unit that resembles a wireless router. The Navigator system was first available when the first-generation Nspires were launched; when the TI-Nspire CX and CX CAS were released, a smaller wireless adapter was announced, which is not compatible with the TI-Nspire and TI-Nspire Touchpad.

Press-to-Test

Press-to-Test is a feature that restricts access to the user's documents and to certain features of the calculator for a limited time. Its intended purpose is to prevent cheating on tests and exams. Press-to-Test is enabled by pressing a certain button combination while turning on the calculator. The features that are blocked (for example, 3D graphs and drag-and-drop for graphs) can be selectively enabled, but access to existing documents is always prohibited. While the handheld is in Press-to-Test mode, an LED on top of it blinks to indicate that the mode is enabled. Press-to-Test can only be disabled by connecting to another calculator or to a computer with TI-Nspire compatible software installed; removing the batteries or pressing the reset button will not disable it.

Ndless

Ndless (alternatively stylized Ndl3ss) is a third-party jailbreak for the TI-Nspire calculators that allows native programs, such as those written in C, C++, and ARM assembly, to run. Ndless was developed initially by Olivier Armand and Geoffrey Anneheim and released in February 2010 for the Clickpad handheld. Organizations such as Omnimaga and TI-Planet promoted Ndless and built a community around it and around Ndless programs. With Ndless, low-level operations become possible, such as overclocking to make the handhelds run faster, and downgrade prevention can be defeated. In addition, Game Boy, Game Boy Advance, and Nintendo Entertainment System emulators exist for the handhelds with Ndless, and major Ndless-powered programs include a port of the game Doom. Unlike Lua scripts, which are supported by Texas Instruments, Ndless is actively counteracted by TI; each subsequent OS attempts to block it from operating.
Technical specifications

Texas Instruments developed its own proprietary system-on-chip designs based on 32-bit ARM9 processors. The first generation of the TI-Nspire is based on LSI Corporation's (now Broadcom Inc.) "Zevio" design, while the CX and CX II generations are built on Toshiba application-specific integrated circuit designs.

Most Texas Instruments calculators contain only a non-volatile NAND flash memory and a volatile synchronous dynamic random-access memory (SDRAM). The NAND flash cannot be executed in place but contains parts of the operating system; the TI-Nspire therefore also uses a NOR ROM to store boot instructions for the operating system. Texas Instruments most likely did this to free up the NAND flash and SDRAM for use by the user and the operating system; both are used to store user and operating system documents.

Previous Texas Instruments calculators had a backup button-cell battery that preserved user information, system information, and the time and date between battery changes, allowing a user to keep their data when the main batteries were removed. Because the TI-Nspire lacks this backup battery, the SDRAM content is lost whenever the user swaps the batteries, so the calculator must reload the operating system and file structure from the NAND flash into SDRAM, which lengthens the loading time.

Despite the overall performance increase between versions of the TI-Nspire, performance differences do exist: the TI-Nspire CX II has more than 10 MB less storage space than its predecessor. The TI-Nspire CM-C and CM-C CAS (the Chinese versions of the CX and CX CAS) are cheaper and have an updated design, but have only 32 MB of RAM and no port for the wireless module.

Operating system versions

The TI-Nspire CX/CX CAS calculators currently run operating system (OS) version 4.5.5.79, released in August 2021, while the TI-Nspire CX II/CX II CAS run version 6.0.3.374, released in January 2023. The operating system has been updated frequently since 2007, one year after the platform's release in 2006, partly because of bugs and missing functions and partly to patch jailbreak exploits. Versions 2.0, 3.0, 4.0, and 5.0 were major upgrades.

Added features in OS 2.0

Starting with OS 2.0, additional features were added to increase the usability and usefulness of the TI-Nspire; these features have remained in the Nspire series to date.

Added features in OS 3.0

Images can be included in TI-Nspire documents using the computer software. They can then be displayed on the Nspire calculators, in full color on the Nspire CX calculators, and graphs can be drawn on top of the images. A data collection application is included with the OS for use with the Lab Cradle. 3D graphing is supported, as well as differential equations, and other features were also added, including improvements to statistics-related functions. OS 3.0 also adds the ability to run programs written in Lua. OS 3.0.1 introduced a number of bugs, but most of these have been fixed as of 3.0.2. In OS 3.2, conic equations in standard formats can be graphed, and a new chemistry feature, Chem Box, allows users to write chemical notation. OS 3.2 also saw the inclusion of the Chipmunk physics engine for use in Lua programs. In OS 3.9, the area between curves can be calculated from the graph bar.
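The "area between curves" calculation corresponds to the definite integral of the absolute difference of two functions over an interval. As an illustration of that underlying formula only, and not of the calculator's own implementation, a midpoint-rule approximation in Python might look like this:

```python
# Illustrative only: numerically approximate the area between f and g on [a, b]
# with a midpoint rule; this is not TI's implementation.
def area_between(f, g, a, b, n=1000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h          # midpoint of the i-th subinterval
        total += abs(f(x) - g(x)) * h  # |f - g| handles curves that cross
    return total

# Example: the area between y = x and y = x^2 on [0, 1] is 1/6
print(area_between(lambda x: x, lambda x: x * x, 0.0, 1.0))
```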
Added features in OS 4.0

An indicator now displays the angle mode (degrees, radians or gradians) in effect for the current application. In graph window settings, exact inputs such as 7/3 or 2*π can now be used for custom window values.

Added features in OS 5.0

OS 5.0 is currently exclusive to the CX II/CX II CAS and their -T counterparts. These features were added in this release:
Animated path plot
Modernized user interface
Dynamic coefficient values
Points by coordinates
Tick-mark labels
Various TI-Basic programming enhancements
Simplified Disable CAS (CAS model exclusive)
DeSolve wizard (CAS model exclusive)

Added features in OS 5.2

OS 5.2 is currently exclusive to the CX II/CX II CAS and their -T counterparts. This release added:
Python programming language support in the host software and on the calculator

Added features in OS 5.3

OS 5.3 is currently exclusive to the CX II/CX II CAS and their -T counterparts. These features were added in this release:
Exam support
Quick set-up code to enter Press-to-Test
Python programming improvements
Six TI-authored modules that provide additional libraries for Python: TI Draw, TI Plotlib, TI Hub, TI Rover, TI Image and TI System

See also

Comparison of Texas Instruments graphing calculators
Comparison of computer algebra systems

References

External links

TI-Nspire series official site
Texas Instruments Calculators & Education Technology
Datamath Calculator Museum
Lua programming on the TI-Nspire
TI-Nspire series Google Discussion Forum
TI-Nspire series user programs
TI-Nspire series BASIC
TI-Nspire series program collection