id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
17,511,951 | https://en.wikipedia.org/wiki/MobileHCI | The Conference on Mobile Human-Computer Interaction (MobileHCI) is a leading series of academic conferences in Human–computer interaction and is sponsored by ACM SIGCHI, the Special Interest Group on Computer-Human Interaction. MobileHCI has been held annually since 1998 and has been an ACM SIGCHI sponsored conference since 2012. The conference is very competitive, with the acceptance rate falling from 25% in 2006 and 21.6% in 2009 to below 20% in 2017. MobileHCI 2011 was held in Stockholm, Sweden, and MobileHCI 2012, which was sponsored by SIGCHI, was held in San Francisco, USA.
History
The MobileHCI series started in 1998 as a stand-alone Workshop on Human Computer Interaction with Mobile Devices organized by Chris Johnson and held at the University of Glasgow. In the following year the workshop was held in conjunction with the Interact conference and was organized by Stephen Brewster and Mark Dunlop. In 2001 MobileHCI was again organized by Brewster and Dunlop in association with a major conference. This was in conjunction with IHM-HCI in Lille, France.
In 2002, MobileHCI was held independently from an associated conference as a stand-alone symposium in Pisa, Italy, organized by Fabio Paternò. In 2003 the conference was organized by Luca Chittaro in Udine, Italy. In 2004 it was again organized by Brewster and Dunlop, this time at the University of Strathclyde. In the following years the conference took place in Austria, Finland, and Singapore. MobileHCI 2008 was organized by Henri Ter Hofte from the Telematica Instituut in the Netherlands.
For 2008 the conference's steering committee agreed to award a prize for the most influential paper published at MobileHCI ten years earlier. The prize recognises the longevity of the impact that papers from the first MobileHCI have had on the research community. The 2008 prize was awarded to Keith Cheverst for the paper Exploiting Context in HCI Design for Mobile Systems, written together with Tom Rodden, Nigel Davies, and Alan Dix.
MobileHCI 2009 was organised by Fraunhofer FIT and University of Siegen, in cooperation with ACM SIGCHI and ACM SIGMOBILE. The general chair was Prof. Dr. Reinhard Oppermann from Fraunhofer Society FIT, and the program chairs were Dr. Markus Eisenhauer, Prof. Dr. Matthias Jarke, and Prof. Dr. Volker Wulf. The 2009 prize for the most influential paper from ten years ago was awarded to Albrecht Schmidt for his paper Implicit human-computer interaction through context. The acceptance rate was 24.2% for full papers and 18.5% for short papers.
The 12th MobileHCI took place in Lisboa, Portugal, from September 7–10, 2010. The conference's general chairs were Marco de Sá and Luís Carriço from the University of Lisboa. The theme of the conference was a mobile world for all. The acceptance rate was 20% for full papers and 22% overall.
MobileHCI 2011 took place in Stockholm, Sweden from 30 August to 2 September 2011. The 13th in the series was chaired by Markus Bylund (Swedish Institute of Computer Science) and Maria Holm (Mobile Life Centre), with Oskar Juhlin and Ylva Fernaeus, also from the Mobile Life Centre, as programme chairs. The full paper acceptance rate was 27%, with an overall rate of 23%. The prize for the most influential paper from MobileHCI 2001 was awarded to Simon Holland for his paper AudioGPS: Spatial Audio Navigation with a Minimal Attention Interface.
In 2018, the conference's steering committee agreed to award a prize for the most impactful paper published at MobileHCI in the 20 years conference series ("Impact Award") to the paper by Matthias Böhmer, Brent Hecht, Johannes Schöning, Antonio Krüger and Gernot Bauer on mobile application usage. In 2021, the same paper was honoured with the "Most Influential Paper Award" for the recent 10 years of the conference series.
In 2020, the conference's steering committee agreed to change the name of MobileHCI from Conference on Human-Computer Interaction with Mobile Devices and Services to Conference on Mobile Human-Computer Interaction to reflect the societal and technological transition where mobility has become pervasive and prime to our lives.
Topics
In its early years, the conference had a limited number of broadly defined topics. The list of topics grew over the years.
Topics considered relevant to date include, for example, audio and speech interaction, input and output techniques for mobile technologies, evaluation of mobile devices and services, and multimodal interaction. Examples of topics that have emerged in recent years are wearable computing, mobile social networks, and studies on the use of mobile devices by special target groups (e.g. seniors).
Workshops
Since 2002 workshops have been held prior to the main conference. Workshops focus on specific topics related to the conference's main theme. To participate in a workshop it is often necessary to submit a paper and present it during the workshop. Usually around 20 persons participate in a workshop. Besides the presentations there is typically more room for discussions than during the main conference. Successful workshops are often repeated in the following years. Some examples are the workshops on HCI in Mobile Guides, Mobile Interaction with the Real World (MIRW), and Speech in Mobile and Pervasive Environments (SiMPE).
Tutorials
Tutorial days were held at Mobile HCI 2008 and 2009. After more than 10 years of Mobile HCI, providing an overview of the state of the art had become increasingly challenging. During the tutorial days, a number of well-known researchers in Mobile HCI gave overviews of the state of the art and covered many of the relevant topics. The tutorials also introduced the "must read" papers in the domain. The audience varied and included new students starting a PhD in Mobile HCI, practitioners wanting a quick survey of the state of the art, and educators wishing to get an overview of Mobile HCI for their own teaching.
External links
Website of the MobileHCI conference series
Website of the MobileHCI 2013 conference
Website of the MobileHCI 2012 conference
Website of the MobileHCI 2011 conference
Website of the MobileHCI 2010 conference
Website of the workshop HCI in Mobile Guides 2005
Website of the workshop Mobile Interaction with the Real World 2009
Mobile HCI 2009 tutorial day slides
Mobile HCI 2008 tutorial day slides
Website of the workshop Speech in Mobile and Pervasive Environments
Notes and references
Computer science conferences
Human–computer interaction
Association for Computing Machinery | MobileHCI | [
"Technology",
"Engineering"
] | 1,351 | [
"Human–computer interaction",
"Computer science",
"Computer science conferences",
"Human–machine interaction"
] |
17,512,141 | https://en.wikipedia.org/wiki/VLAN%20hopping | VLAN hopping is a computer security exploit, a method of attacking networked resources on a virtual LAN (VLAN). The basic concept behind all VLAN hopping attacks is for an attacking host on a VLAN to gain access to traffic on other VLANs that would normally not be accessible. There are two primary methods of VLAN hopping: switch spoofing and double tagging. Both attack vectors can be mitigated with proper switch port configuration.
Switch spoofing
In a switch spoofing attack, an attacking host imitates a trunking switch by speaking the tagging and trunking protocols (e.g. Multiple VLAN Registration Protocol, IEEE 802.1Q, Dynamic Trunking Protocol) used in maintaining a VLAN. Traffic for multiple VLANs is then accessible to the attacking host.
Mitigation
Switch spoofing can only be exploited when interfaces are set to negotiate a trunk. To prevent this attack on Cisco IOS, use one of the following methods:
1. Ensure that ports are not set to negotiate trunks automatically by disabling DTP:
Switch (config-if)# switchport nonegotiate
2. Ensure that ports that are not meant to be trunks are explicitly configured as access ports:
Switch (config-if)# switchport mode access
Double tagging
In a double tagging attack, an attacker connected to an 802.1Q-enabled port prepends two VLAN tags to a frame that it transmits. The frame (externally tagged with the VLAN ID of which the attacker's port is really a member) is forwarded without the first tag, because that VLAN is the native VLAN of the trunk interface. The second tag is then visible to the second switch that the frame encounters. This second VLAN tag indicates that the frame is destined for a target host on a second switch. The frame is then sent to the target host as though it originated on the target VLAN, effectively bypassing the network mechanisms that logically isolate VLANs from one another.
However, possible replies are not forwarded to the attacking host (unidirectional flow).
Mitigation
Double tagging can only be exploited on switch ports configured to use native VLANs. Trunk ports configured with a native VLAN do not apply a VLAN tag when sending frames that belong to that native VLAN. This allows an attacker's inner VLAN tag to be read by the next switch.
Double tagging can be mitigated by any of the following actions (incl. IOS example):
Simply do not put any hosts on VLAN 1 (the default VLAN). i.e., assign an access VLAN other than VLAN 1 to every access port
Switch (config-if)# switchport access vlan 2
Change the native VLAN on all trunk ports to an unused VLAN ID.
Switch (config-if)# switchport trunk native vlan 999
Explicit tagging of the native VLAN on all trunk ports. This must be configured on all switches in the network.
Switch(config)# vlan dot1q tag native
Example
As an example of a double tagging attack, consider a secure web server on a VLAN called VLAN2. Hosts on VLAN2 are allowed access to the web server; hosts from outside VLAN2 are blocked by layer 3 filters. An attacking host on a separate VLAN, called VLAN1(Native), creates a specially formed packet to attack the web server. It places a header tagging the packet as belonging to VLAN2 under the header tagging the packet as belonging to VLAN1. When the packet is sent, the switch sees the default VLAN1 header and removes it and forwards the packet. The next switch sees the VLAN2 header and puts the packet in VLAN2. The packet thus arrives at the target server as though it were sent from another host on VLAN2, ignoring any layer 3 filtering that might be in place.
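The frame construction described above can be sketched with a packet-crafting library such as Scapy; the interface name and target address below are illustrative placeholders rather than values taken from this example.

```python
# Hedged sketch of crafting a double-tagged (802.1Q-in-802.1Q) frame with Scapy.
# Requires root privileges; "eth0" and 10.0.2.10 are placeholder assumptions.
from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

frame = (
    Ether(dst="ff:ff:ff:ff:ff:ff")
    / Dot1Q(vlan=1)   # outer tag: the attacker's (native) VLAN, stripped by the first switch
    / Dot1Q(vlan=2)   # inner tag: the target VLAN, honoured by the second switch
    / IP(dst="10.0.2.10")
    / ICMP()
)

sendp(frame, iface="eth0")
```

Because the flow is unidirectional, as noted above, any reply from the target never reaches the attacking host this way.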
See also
Private VLAN
References
Computer network security
Ethernet | VLAN hopping | [
"Engineering"
] | 830 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
17,514,050 | https://en.wikipedia.org/wiki/Krascheninnikovia%20lanata | Krascheninnikovia lanata is a species of flowering plant currently placed in the family Amaranthaceae (previously, Chenopodiaceae), known by the common names winterfat, white sage, and wintersage. It is native to much of western North America: from central Western Canada; through the Western United States; to northern Mexico.
The genus was named for Stepan Krasheninnikov—the early 18th-century Russian botanist and explorer of Siberia and Kamchatka.
Distribution and habitat
Winterfat grows in a great variety of habitats at in elevation—from grassland plains and xeric scrublands to rain shadow faces of montane locations.
Winterfat is a halophyte that thrives in salty soils such as those on alkali flats, including those of the Great Basin, Central Valley, Great Plains, and Mojave Desert.
Description
Krascheninnikovia lanata is a small shrub sending erect stem branches to heights between 0. It produces flat lance-shaped leaves up to 3 centimeters long. The stems and gray foliage are covered in woolly white hairs that age to a reddish color. The woolly hairs start development in the late fall and gradually diminish through the winter season.
The tops of the stem branches are occupied by plentiful spike inflorescences from March to June. The shrub is generally monoecious, with each upright inflorescence holding mostly staminate flowers with a few pistillate flowers clustered near the bottom. The staminate flowers have large, woolly leaflike bracts.
The pistillate flowers have smaller bracts and develop tiny white fruits. The silky hairs on the fruits allow for wind dispersal.
Cultivation
Krascheninnikovia lanata is cultivated in the specialty plant nursery trade as an ornamental plant for xeriscape and wildlife gardens, and native plant natural landscapes. The light gray foliage can be a distinctive feature in garden designs. The plants are very long-lived.
Uses
Winterfat is an important winter forage for livestock and wildlife because its evergreen leaves are high in protein, hence its common name.
Cultivation
Winterfat is sometimes grown in xeriscape or native plant gardens for its striking whitish wool. It is especially valued for the fall and winter interest it provides in gardens. Small plants are easily transplanted.
Native American use
Winter fat was a traditional medicinal plant used by many Native American tribes that lived within its large North American range. These tribes used traditional plants to treat a wide variety of ailments and for other benefits. The Zuni people use a poultice of ground root bound with a cotton cloth to treat burns.
References
External links
Jepson Manual Treatment - Krascheninnikovia lanata (Winterfat)
USDA: Plants Profile of Krascheninnikovia lanata - with numerous Related Web Sites links.
U.S. Forest Service: Krascheninnikovia lanata Ecology
Native American Ethnobotany - 'Winterfat' - (University of Michigan - Dearborn)
Krascheninnikovia lanata (Winterfat) - U.C. Photo gallery
Chenopodioideae
Halophytes
Flora of Northwestern Mexico
Flora of the Southwestern United States
Flora of the Northwestern United States
Flora of Western Canada
Flora of Yukon
Flora of the Rocky Mountains
Flora of the Great Basin
Flora of the Sierra Nevada (United States)
Flora of the California desert regions
Forages
Plants used in traditional Native American medicine
Garden plants of North America
Drought-tolerant plants
Flora without expected TNC conservation status | Krascheninnikovia lanata | [
"Chemistry"
] | 721 | [
"Halophytes",
"Salts"
] |
17,514,197 | https://en.wikipedia.org/wiki/S%2A | S* (pronounced "S Star") is the diminutive for the S* Life Science Informatics Alliance, a collaboration between seven universities and the Karolinska Institutet of Sweden, and its course, the S-Star Bioinformatics Online course. The goal is to provide course material for training in bioinformatics and genomics.
Member institutions
The following institutions are members of the S* Life Science Informatics Alliance:
Macquarie University, Sydney, Australia
University of Sydney (School of Molecular Bioscience), Australia (as of 2001)
Karolinska Institutet, Sweden (as of 2001)
University of Uppsala, Sweden (as of 2001)
National University of Singapore, Singapore (as of 2001)
University of the Western Cape, South Africa (as of 2001)
Stanford University, United States (as of 2001)
University of California, San Diego, United States, via the San Diego Supercomputer Center (as of 2002)
References
Further reading
https://www.learntechlib.org/p/100842
Bioinformatics organizations | S* | [
"Chemistry",
"Biology"
] | 222 | [
"Bioinformatics stubs",
"Bioinformatics organizations",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics"
] |
17,515,602 | https://en.wikipedia.org/wiki/Test%20stub | A test stub is a test double that provides static values to the software under test.
A test stub provides canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test.
A stub may be coded by hand or generated via a tool.
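As an illustration only (the class and function names are hypothetical and not tied to any particular framework), a hand-coded stub in Python might look like this:

```python
class ExchangeRateServiceStub:
    """Hand-coded test stub: returns a canned, static rate for every call."""

    def get_rate(self, currency: str) -> float:
        return 1.25  # static value, regardless of the argument


def convert(amount: float, currency: str, service) -> float:
    """Code under test: relies on a collaborator that the stub stands in for."""
    return amount * service.get_rate(currency)


def test_convert_uses_rate_from_service():
    stub = ExchangeRateServiceStub()
    assert convert(100.0, "EUR", stub) == 125.0
```

The stub keeps the test deterministic by taking the real service, and any network calls it would make, out of the picture.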
See also
Mock object
Method stub
Software testing
Test Double
Stub (distributed computing)
References
External links
Test Stub at XUnitPatterns.com
Software testing | Test stub | [
"Engineering"
] | 98 | [
"Software engineering",
"Software testing"
] |
17,517,121 | https://en.wikipedia.org/wiki/Kweichow%20Moutai | Kweichow Moutai Co. Ltd. (), commonly referred to as Kweichow Moutai (), is a Chinese company specializing in the production, sale, and distribution of Maotai liquor, a particular style of jiangxiang () baijiu.
Since the establishment of the company in its modern form in 1951, Kweichow Moutai has become the most famous brand of baijiu both within China and abroad, gaining renown among politicians and businessmen. The spirit is often presented at large diplomatic events with foreign dignitaries, such as the welcome dinners for US President Nixon's 1972 visit to China and Xi Jinping's and Barack Obama's 2013 bilateral meeting in California. Famously, at a state dinner with Deng Xiaoping, US diplomat Henry Kissinger was quoted as saying, "I think if we drink enough Moutai, we can solve anything."
Kweichow Moutai's position as a cultural icon has brought it broad market success as well. Sitting at 181 on Fortune 500 China, the distillery is the largest non-technology company in China and the most valuable spirits brand worldwide, having surpassed the British multinational spirits conglomerate Diageo in 2017.
History
The company's A-shares were listed in the Shanghai Stock Exchange in 2001. Since then it was one of the first Chinese listed companies whose share price had exceeded CNY100. The price reached CNY803.5 in 2018 and over CNY1000 in 2019.
Kweichow Moutai and Camus started the cooperation in 2004, and Camus became the worldwide exclusive distributor of Moutai products for the duty-free market.
References
External links
Kweichow Moutai Overview at World Indicium
Companies in the CSI 100 Index
Companies in the FTSE China A50 Index
Companies listed on the Shanghai Stock Exchange
Companies based in Guizhou
Companies owned by the provincial government of China
Drink companies of China
Food and drink companies established in 1999
Chinese beer brands
Distilleries
Chinese companies established in 1999
Renhuai | Kweichow Moutai | [
"Chemistry"
] | 423 | [
"Distilleries",
"Distillation"
] |
17,517,168 | https://en.wikipedia.org/wiki/Mathematical%20principles%20of%20reinforcement | The mathematical principles of reinforcement (MPR) constitute a set of mathematical equations set forth by Peter Killeen and his colleagues that attempt to describe and predict the most fundamental aspects of behavior (Killeen & Sitomer, 2003).
The three key principles of MPR, arousal, constraint, and coupling, describe how incentives motivate responding, how time constrains it, and how reinforcers become associated with specific responses, respectively. Mathematical models are provided for these basic principles in order to articulate the necessary detail of actual data.
First principle: arousal
The first basic principle of MPR is arousal. Arousal refers to the activation of behavior by the presentation of incentives. An increase in activity level following repeated presentations of incentives is a fundamental aspect of conditioning. Killeen, Hanson, and Osborne (1978) proposed that adjunctive (or schedule induced) behaviors are normally occurring parts of an organism's repertoire. Delivery of incentives increases the rate of adjunctive behaviors by generating a heightened level of general activity, or arousal, in organisms.
Killeen & Hanson (1978) exposed pigeons to a single daily presentation of food in the experimental chamber and measured general activity for 15 minutes after a feeding. They showed that activity level increased slightly directly following a feeding and then decreased slowly over time. The rate of decay can be described by a decaying exponential of the form $b(t) = b_0 e^{-t/\tau}$, where $b_0$ is the y-intercept (responses per minute), $t$ is the time in seconds since feeding, $\tau$ is the time constant, and $e$ is the base of the natural logarithm.
The time course of the entire theoretical model of general activity combines three components: arousal, temporal inhibition, and competing behaviors.
To better conceptualize this model, imagine how rate of responding would appear with each of these processes individually. In the absence of temporal inhibition or competing responses, arousal level would remain high and response rate would be depicted as an almost horizontal line with a very small negative slope. Directly following food presentation, temporal inhibition is at its maximum level. It decreases quickly as time elapses, and response rate would be expected to increase up to the level of arousal in a short time. Competing behaviors such as goal tracking or hopper inspection are at a minimum directly after food presentation. These behaviors increase as the interval elapses, so the measure of general activity would slowly decrease. Subtracting these two curves results in the predicted level of general activity.
Killeen et al. (1978) then increased the frequency of feeding, delivering food on fixed-time schedules rather than once daily. They showed that general activity level increased substantially from the level of daily presentation. Response rate asymptotes were highest for the highest rates of reinforcement. These experiments indicate that arousal level is proportional to rate of incitement, and the asymptotic level increases with repeated presentations of incentives. The increase in activity level with repeated presentation of incentives is called cumulation of arousal. The first principle of MPR states that arousal level is proportional to rate of reinforcement, $A = ar$, where $A$ is the arousal level, $a$ is the specific activation, and $r$ is the rate of reinforcement (Killeen & Sitomer, 2003).
Second principle: constraint
An obvious but often overlooked factor when analyzing response distributions is that responses are not instantaneous, but take some amount of time to emit (Killeen, 1994). These ceilings on response rate are often accounted for by competition from other responses, but less often by the fact that responses cannot always be emitted at the same rate at which they are elicited (Killeen & Sitomer, 2003). This limiting factor must be taken into account in order to correctly characterize what responding could be theoretically, and what it will be empirically.
An organism may receive impulses to respond at a certain rate. At low rates of reinforcement, the elicited rate and the emitted rate will approximate each other. At high rates of reinforcement, however, the elicited rate is curtailed by the amount of time it takes to emit a response. Response rate is typically measured as the number of responses occurring in an epoch divided by the duration of the epoch. Its reciprocal gives the typical measure of the interresponse time (IRT), the average time from the start of one response to the start of the next (Killeen & Sitomer, 2003). This is actually the cycle time rather than the time between responses. According to Killeen & Sitomer (2003), the IRT consists of two subintervals: the time required to emit a response plus the time between responses. Response rate can therefore be measured either as the number of responses divided by the cycle time, or as the number of responses divided by the actual time between responses. The latter, instantaneous rate may be the best measure to use, as the nature of the operandum may change arbitrarily within an experiment (Killeen & Sitomer, 2003).
Killeen, Hall, Reilly, and Kettle (2002) showed that if the instantaneous rate of responding is proportional to the rate of reinforcement, then a fundamental equation for MPR results; Killeen & Sitomer (2003) derived the measured response rate by combining this proportionality with the time taken up by each response and rearranging. While responses may be elicited at a rate proportional to the rate of reinforcement, they can only be emitted at a lower rate because of the time each response occupies. The second principle of MPR states that the time required to emit a response constrains response rate (Killeen & Sitomer, 2003).
Third principle: coupling
Coupling is the final concept of MPR that ties all of the processes together and allows specific predictions of behavior under different schedules of reinforcement. Coupling refers to the association between responses and reinforcers. The target response is the response of interest to the experimenter, but any response can become associated with a reinforcer. Contingencies of reinforcement refer to how a reinforcer is scheduled with respect to the target response (Killeen & Sitomer, 2003), and the specific schedules of reinforcement in effect determine how responses are coupled to the reinforcer. The third principle of MPR states that the degree of coupling between a response and reinforcer decreases with the distance between them (Killeen & Sitomer, 2003). Coupling coefficients, denoted $C$, are given for the different schedules of reinforcement. When the coupling coefficients are inserted into the activation-constraint model, complete models of conditioning are derived.
This combined expression is the fundamental equation of MPR. A dot in the subscript of the coupling coefficient serves as a placeholder for the specific contingencies of reinforcement under study (Killeen & Sitomer, 2003).
Fixed-ratio reinforcement schedules
The rate of reinforcement for fixed-ratio schedules is easy to calculate, as reinforcement rate is directly proportional to response rate and inversely proportional to the ratio requirement (Killeen, 1994). The schedule feedback function is therefore:
$r = \frac{b}{n}$,
where $b$ is the response rate and $n$ is the ratio requirement. Substituting this function into the complete model gives the equation of motion for ratio schedules (Killeen & Sitomer, 2003). Killeen (1994, 2003) showed that the most recent response in a sequence of responses is weighted most heavily: it is given a weight of $\beta$, leaving $1-\beta$ for the remaining responses. The penultimate response receives $\beta(1-\beta)$, the third back receives $\beta(1-\beta)^2$, and the $n$th response back is given a weight of $\beta(1-\beta)^{n-1}$.
The sum of this series is the coupling coefficient for fixed-ratio schedules:
$C_{FR} = 1 - (1-\beta)^n$.
The continuous approximation of this is:
$C_{FR} \approx 1 - e^{-\lambda n}$,
where $\lambda$ is the intrinsic rate of memory decay. Inserting the reinforcement rate and coupling coefficient into the activation-constraint model gives the predicted response rates for FR schedules.
This equation predicts low response rates at low ratio requirements due to the displacement of memory by consummatory behavior. However, these low rates are not always found. Coupling of responses may extend back beyond the preceding reinforcer, and an extra parameter is added to account for this. Killeen & Sitomer (2003) showed that, in this extended form of the FR coupling coefficient, one parameter gives the number of responses preceding the prior reinforcer that contribute to response strength, and another, ranging from 0 to 1, gives the degree of erasure of the target response from memory with the delivery of a reinforcer. If erasure is complete, the simpler FR equation can be used.
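As a numerical sanity check of the geometric series reconstructed above (the formula and the parameter values here are assumptions of this sketch rather than figures from the cited papers), summing the per-response weights reproduces the closed form:

```python
def fr_coupling_closed_form(beta: float, n: int) -> float:
    """Closed-form coupling coefficient for a fixed-ratio n schedule: 1 - (1 - beta)**n."""
    return 1.0 - (1.0 - beta) ** n


def fr_coupling_series(beta: float, n: int) -> float:
    """Sum of the weights beta * (1 - beta)**(j - 1) over the n responses before reinforcement."""
    return sum(beta * (1.0 - beta) ** (j - 1) for j in range(1, n + 1))


beta, n = 0.25, 10  # illustrative values only
assert abs(fr_coupling_closed_form(beta, n) - fr_coupling_series(beta, n)) < 1e-12
print(round(fr_coupling_closed_form(beta, n), 4))  # 0.9437
```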
Variable-ratio reinforcement schedules
According to Killeen & Sitomer (2003), the duration of a response can affect the rate of memory decay. When response durations vary, either within or between organisms, a more complete model is needed in which the decay rate is rescaled by the response duration.
Idealized variable-ratio schedules with a mean response requirement of $n$ have a constant probability of $1/n$ of a response ending in reinforcement (Bizo, Kettle, & Killeen, 2001). The last response, which ends in reinforcement, must always occur and receives a strengthening of $\beta$. The penultimate response occurs with probability $(n-1)/n$ and receives a strengthening of $\beta(1-\beta)$. Summing this process up to infinity gives the coupling coefficient for VR schedules (Killeen 2001, Appendix), which is then multiplied by the degree of erasure of memory. The resulting coupling coefficient can be inserted into the activation-constraint model, just as for FR schedules, to yield predicted response rates under VR schedules.
In interval schedules, the schedule feedback function is
$r = \frac{1}{t + 1/b}$,
where $t$ is the minimum average time between reinforcers (Killeen, 1994). Coupling in interval schedules is weaker than in ratio schedules, as interval schedules strengthen all responses preceding the target equally rather than just the target response. Only some proportion of memory is strengthened. The final, target response must receive a strengthening of $\beta$. All preceding responses, target or non-target, share in the remaining strengthening.
Fixed-time schedules are the simplest time dependent schedules in which organisms must simply wait t seconds for an incentive. Killeen (1994) reinterpreted temporal requirements as response requirements and integrated the contents of memory from one incentive to the next. This gives the contents of memory to be:
$M_N = \lambda \int_0^N e^{-\lambda n}\, dn$
This is the degree of saturation in memory of all responses, both target and non-target, elicited in the context (Killeen, 1994). Solving this equation gives the coupling coefficient for fixed-time schedules:
$C = \rho\left(1 - e^{-\lambda b t}\right)$,
where $\rho$ is the proportion of target responses in the response trajectory. Expanding the exponential into a power series gives the following approximation:
$C \approx \frac{\rho \lambda b t}{1 + \lambda b t}$
This equation predicts serious instability for non-contingent schedules of reinforcement.
Fixed-interval schedules are guaranteed a strengthening of the target response, $\beta = w_1$, as reinforcement is contingent on this final, contiguous response (Killeen, 1994). This coupling is equivalent to the coupling on FR 1 schedules:
$w_1 = \beta = 1 - e^{-\lambda}.$
The remainder of coupling is due to the memory of preceding behavior. The coupling coefficient for FI schedules is:
$C = \beta + \rho\left(1 - \beta - e^{-\lambda b t}\right).$
Variable-time schedules are similar to random ratio schedules in that there is a constant probability of reinforcement, but these reinforcers are set up in time rather than responses. The probability of no reinforcement occurring before some time t’ is an exponential function of that time with the time constant t being the average IRI of the schedule (Killeen, 1994). To derive the coupling coefficient, the probability of the schedule not having ended, weighted by the contents of memory, must be integrated.
$M = \lambda \int_0^{\infty} e^{-t'/t}\, e^{-\lambda n'}\, dn'$
In this equation, $t' = n'/b$ is the time occupied by the $n'$ response cycles preceding reinforcement. Killeen (1994) explains that the first exponential term is the reinforcement distribution, whereas the second term is the weighting of this distribution in memory. Solving this integral and multiplying by the coupling constant $\rho$ gives the extent to which memory is filled on VT schedules:
$C = \frac{\rho \lambda b t}{1 + \lambda b t}$
This is the same coupling coefficient as an FT schedule, except it is an exact solution for VT schedules rather than an approximation. Once again, the feedback function on these non-contingent schedules predicts serious instability in responding.
As with FI schedules, variable-interval schedules are guaranteed a target response coupling of $\beta$. Simply adding $\beta$ to the VT equation gives:
$M = \beta + \lambda \int_1^{\infty} e^{-t'/t}\, e^{-\lambda n'}\, dn'$
Solving the integral and multiplying by r gives the coupling coefficient for VI schedules:
$C = \beta + (1-\beta)\,\frac{\rho \lambda b t}{1 + \lambda b t}$
The coupling coefficients for all of the schedules are inserted into the activation-constraint model to yield the predicted, overall response rate. The third principle of MPR states that the coupling between a response and a reinforcer decreases with increased time between them (Killeen & Sitomer, 2003).
Mathematical principles of reinforcement describe how incentives fuel behavior, how time constrains it, and how contingencies direct it. It is a general theory of reinforcement that combines both contiguity and correlation as explanatory processes of behavior. Many responses preceding reinforcement may become correlated with the reinforcer, but the final response receives the greatest weight in memory. Specific models are provided for the three basic principles to articulate predicted response patterns in many different situations and under different schedules of reinforcement. Coupling coefficients for each reinforcement schedule are derived and inserted into the fundamental equation to yield overall predicted response rates.
References
Sources
Bizo, L. A., Kettle, L. C. & Killeen, P. R. (2001). "Animals don't always respond faster for more food: The paradoxical incentive effect." Animal Learning & Behavior, 29, 66-78.
Killeen, P.R. (1994). "Mathematical principles of reinforcement." Behavioral and Brain Sciences, 17, 105-172.
Killeen, P. R., Hall, S. S., Reilly, M. P., & Kettle, L. C. (2002). "Molecular analyses of the principal components of response strength." Journal of the Experimental Analysis of Behavior, 78, 127-160.
Killeen, P. R., Hanson, S. J., & Osborne, S. R. (1978). "Arousal: Its genesis and manifestation as response rate." Psychological Review, 85, 571-581.
Killeen, P. R. & Sitomer, M. T. (2003). "MPR." Behavioural Processes, 62, 49-64
Behavioral concepts
Quantitative analysis of behavior | Mathematical principles of reinforcement | [
"Biology"
] | 2,924 | [
"Behavior",
"Behavioral concepts",
"Behaviorism",
"Quantitative analysis of behavior"
] |
17,517,331 | https://en.wikipedia.org/wiki/Centre%20of%20Excellence%20for%20Biosecurity%20Risk%20Analysis | The Centre of Excellence for Biosecurity Risk Analysis (CEBRA), formerly Australian Centre of Excellence for Risk Analysis (ACERA), is a research institute within the School of Biosciences at the University of Melbourne in Melbourne, Victoria, Australia. It conducts research on a wide range of topics in risk, with an initial focus on biosecurity risks.
History
ACERA was founded in 2006 to honour a Federal Government election commitment on biosecurity risk, with a grant administered through the Bureau of Rural Sciences of the Federal Department of Agriculture, Fisheries and Forestry and the University of Melbourne. The Centre received a second phase of funding commencing in July 2009 and ending June 2013. The Centre was based in the then School of Botany at the University from March 2006 until 30 June 2013.
ACERA was funded by the now Department of Agriculture, Water and the Environment (DAWE) and the University of Melbourne. The first Director was Mark Burgman.
CEBRA was established in July 2013, in the 2013–2021 funding round.
Organisation and description
CEBRA is funded by the Department of Agriculture, Water and the Environment, New Zealand’s Ministry for Primary Industries and the University of Melbourne. The Centre is within the School of Biosciences at the University.
The CEO is Andrew Robinson, with Susie Hester, Tom Kompas, Richard Bradhurst, and James Camac as chief investigators.
CEBRA "ensures that Australian biosecurity regulatory standards, procedures and tools are underpinned by world-class research and understanding of the issues, risks and response mechanisms". Its main objective is "to deliver practical solutions and advice for assessing and managing biosecurity risks that inform the risk management role of the department and ministry".
References
External links
ACERA home page (archived)
Research institutes in Australia
Biological research institutes in Australia
University of Melbourne
Biosecurity
2006 establishments in Australia | Centre of Excellence for Biosecurity Risk Analysis | [
"Environmental_science"
] | 389 | [
"Toxicology",
"Biosecurity"
] |
8,853,271 | https://en.wikipedia.org/wiki/In-glaze%20decoration | In-glaze or inglaze is a method of decorating pottery, where the materials used allow painted decoration to be applied on the surface of the glaze before the glost firing so that it fuses into the glaze in the course of firing.
It contrasts with the other main methods of adding painted colours to pottery. These are underglaze painting, where the paint is applied before the glaze, which then seals it, and overglaze decoration where the painting is done in enamels after the glazed vessel has been fired, before a second lighter firing to fuse it to the glaze. There is also the use of coloured glazes, which often carry painted designs.
As with underglaze, in-glaze requires pigments that can withstand the high temperatures of the main firing without discolouring. Historically this was a small group. Inglaze works well with tin-glazed pottery, as unlike lead glaze the glaze does not become runny in the course of firing.
Faience
The very wide range of types of European tin-glazed earthenware or "faience" all began using in-glaze or underglaze painting, with overglaze enamels only developing in the 18th century. In French faience, the in-glaze technique is known as grand feu ("big fire") and the one using enamels as petit feu ("little fire"). Most styles in this group, such as Delftware, mostly used blue and white pottery decoration, but Italian maiolica was fully polychrome, using the range of in- and underglaze colours available.
References
Lane, Arthur, French Faïence, 1948, Faber & Faber
Savage, George, and Newman, Harold, An Illustrated Dictionary of Ceramics, 1985, Thames & Hudson,
Ceramic glazes
Types of pottery decoration | In-glaze decoration | [
"Chemistry"
] | 386 | [
"Ceramic glazes",
"Coatings"
] |
8,853,302 | https://en.wikipedia.org/wiki/Steen%20Rasmussen%20%28physicist%29 | Steen Rasmussen (born 7 July 1955) is a Danish physicist mainly working in the areas of artificial life and complex systems. He is currently a professor in physics and a center director at University of Southern Denmark as well as an external research professor at the Santa Fe Institute. His formal training was at the Technical University of Denmark (1985 PhD in physics of complex systems) and University of Copenhagen (philosophy). He spent 20 years as a researcher at Los Alamos National Laboratory (1988-2007) the last five years as a leader of the Self-Organized Systems team. He has been part of the Santa Fe Institute since 1988.
The main scientific effort of Rasmussen since 2001 has been to explore, understand and construct a transition from nonliving to living materials. Bridging this gap requires an interdisciplinary scientific effort, which is why he has assembled, sponsored and led research teams in the US, across Europe and in Denmark. He became a scientific team leader in 2002 at Los Alamos National Laboratory, USA. He has since held research leadership positions at the Santa Fe Institute, University of Copenhagen and University of Southern Denmark. In 2004 he represented Los Alamos National Laboratory scientifically in cofounding, together with primarily European scientific institutions, the European Centre for Living Technology in Venice, Italy, where he later served as Chairman of the Science Board. Since late 2007 he has been the director of the Center for Fundamental Living Technology at University of Southern Denmark.
Rasmussen has for many years been actively engaged in the public discourse regarding science and society and on this background he founded The Initiative for Science, Society and Policy (ISSP) in 2009. ISSP is currently funded by two Danish universities, has a Director, five Science Focus Leaders and a Science Board. In 2018 he received the Lifetime Achievement Award from the International Society of Artificial Life (ISAL).
References
External links
Steen Rasmussen at SDU
Steen Rasmussen at SFI
1955 births
Living people
21st-century Danish physicists
Researchers of artificial life
Digital organisms
Santa Fe Institute people | Steen Rasmussen (physicist) | [
"Biology"
] | 430 | [
"Digital organisms"
] |
8,853,472 | https://en.wikipedia.org/wiki/Ponderomotive%20energy | In strong-field laser physics, ponderomotive energy is the cycle-averaged quiver energy of a free electron in an electromagnetic field.
Equation
The ponderomotive energy is given by
$U_p = \frac{e^2 E_a^2}{4 m \omega_0^2}$,
where $e$ is the electron charge, $E_a$ is the linearly polarised electric field amplitude, $\omega_0$ is the laser carrier frequency and $m$ is the electron mass.
In terms of the laser intensity $I$, using $I = \tfrac{1}{2} c \varepsilon_0 E_a^2$, it reads less simply:
$U_p = \frac{e^2 I}{2 c \varepsilon_0 m \omega_0^2}$,
where $\varepsilon_0$ is the vacuum permittivity.
For typical orders of magnitude involved in laser physics, this becomes:
$U_p\,[\mathrm{eV}] \approx 9.33 \times 10^{-14}\; I\,[\mathrm{W/cm^2}]\; \lambda^2\,[\mathrm{\mu m^2}]$,
where the laser wavelength is $\lambda = 2\pi c / \omega_0$, and $c$ is the speed of light. The units are electronvolts (eV), watts (W), centimeters (cm) and micrometers (μm).
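The rule-of-thumb coefficient above can be checked against the SI expression; the intensity and wavelength in the sketch below are chosen purely as an illustration.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19       # elementary charge, C
m_e = 9.1093837015e-31    # electron mass, kg
c = 2.99792458e8          # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m


def ponderomotive_energy_eV(intensity_W_cm2: float, wavelength_um: float) -> float:
    """U_p = e^2 I / (2 c eps0 m omega0^2), returned in electronvolts."""
    I = intensity_W_cm2 * 1e4          # W/cm^2 -> W/m^2
    wl = wavelength_um * 1e-6          # um -> m
    omega0 = 2 * math.pi * c / wl      # laser carrier frequency, rad/s
    U_p_joule = e**2 * I / (2 * c * eps0 * m_e * omega0**2)
    return U_p_joule / e               # J -> eV


# Illustrative case: 1e14 W/cm^2 at 0.8 um gives roughly 6 eV,
# consistent with U_p[eV] ~ 9.33e-14 * I[W/cm^2] * lambda^2[um^2].
print(ponderomotive_energy_eV(1e14, 0.8))
```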
Atomic units
In atomic units, $e = m = 1$ and $\varepsilon_0 = 1/(4\pi)$, so that $\alpha c = 1$, where $\alpha$ is the fine-structure constant. If one uses the atomic unit of electric field, then the ponderomotive energy is just
$U_p = \frac{E_a^2}{4 \omega_0^2}.$
Derivation
The formula for the ponderomotive energy can be easily derived. A free particle of charge $q$ interacts with an electric field $E \cos(\omega t)$. The force on the charged particle is
$F = qE \cos(\omega t)$.
The acceleration of the particle is
$a = \frac{qE}{m} \cos(\omega t)$.
Because the electron executes harmonic motion, the particle's position is
$x = -\frac{qE}{m\omega^2} \cos(\omega t)$.
For a particle experiencing harmonic motion, the time-averaged energy is
$\langle E \rangle = \tfrac{1}{2} m \omega^2 \langle x^2 \rangle = \frac{q^2 E^2}{4 m \omega^2}$.
In laser physics, this is called the ponderomotive energy $U_p$.
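The time average in the last step can be verified symbolically; this minimal sketch assumes SymPy is available.

```python
import sympy as sp

q, E, m, w, t = sp.symbols("q E m omega t", positive=True)

x = -q * E / (m * w**2) * sp.cos(w * t)   # position in the oscillating field
v = sp.diff(x, t)                          # velocity
kinetic = sp.Rational(1, 2) * m * v**2     # instantaneous kinetic energy

period = 2 * sp.pi / w
average = sp.simplify(sp.integrate(kinetic, (t, 0, period)) / period)
print(average)  # E**2*q**2/(4*m*omega**2)
```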
See also
Ponderomotive force
Electric constant
Harmonic generation
List of laser articles
References and notes
Laser science
Energy (physics) | Ponderomotive energy | [
"Physics",
"Mathematics"
] | 279 | [
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities"
] |
8,853,878 | https://en.wikipedia.org/wiki/IUCLID | IUCLID (; International Uniform Chemical Information Database) is a software application to capture, store, maintain and exchange data on intrinsic and hazard properties of chemical substances. Distributed free of charge, the software is especially useful to chemical industry companies and to government authorities. It is the key tool for chemical industry to fulfill data submission obligations under REACH, the most important European Union legal document covering the production and use of chemical substances. The software is maintained by the European Chemicals Agency, ECHA. The latest version, version 6, was made available on 29 April 2016.
History
IUCLID versions 1 to 4
1993: First version of IUCLID for the European Existing Substances Regulation 793/93/EEC.
1999: IUCLID becomes the recommended tool for the OECD HPV Programme.
2000: IUCLID is the software prescribed in the EU Biocides legislation to notify existing active substances (Art. 4 of Commission Regulation (EC) No 1896/2000).
IUCLID 4 was used worldwide by about 500 organizations. These included chemical industry companies, EU Member State Competent Authorities, the OECD Secretariat, the US EPA, the Japan METI, and third-party service providers.
IUCLID 5
In 2003, when it became clear that the REACH proposal would be adopted by the European Union, the European Commission decided to completely overhaul IUCLID 4 and to create a new version, IUCLID 5, which would be used by chemical industry companies to fulfill their data submission obligations under REACH. Migration of data in the IUCLID 4 format was supported by IUCLID 5.1. IUCLID 5.1 became available on 13 June 2007.
IUCLID is also mentioned in article 111 of the REACH legislation as the format to be used for data collection and submission dossier preparation.
The following IUCLID 5 major versions have been released:
IUCLID 5.0: 12 June 2007
IUCLID 5.1: 16 January 2009
IUCLID 5.2: 15 February 2010
IUCLID 5.3: 24 February 2011
IUCLID 5.4: 5 June 2012
IUCLID 5.5: 2 April 2013
IUCLID 5.6: 16 April 2014
IUCLID 5
Data Format and Exchange
Data that can be stored and maintained with IUCLID encompass information about:
The party running IUCLID (production sites, contact persons etc.)
The chemical substances managed by the company, namely their
identity, composition, and supporting analytical data
reference information like CAS number and other identifiers,
classification and labelling
physical/chemical properties,
toxicological properties,
eco-toxicological properties.
OECD and the European Commission have agreed on a standard XML format (OECD Harmonized Templates) in which these data are stored for easy data sharing. IUCLID 5 will be the first application fully implementing this international reporting standard, which has been accepted by many national and international regulatory authorities.
Numerous parties were involved in the creation and the review of the OECD Harmonized Templates, among them the Business and Industry Advisory Committee (BIAC) to the OECD, the European Chemical Industry Council (CEFIC) and other bodies and authorities.
IUCLID5 can be used to enter robust study summaries summarising toxicologically-relevant endpoints. A Klimisch score is assigned within robust study summary as one field.
Possible IUCLID 5 uses
Anyone can use a local IUCLID 5 installation to collect, store, maintain and exchange relevant data on chemical substances.
In addition to dossier creation for REACH, IUCLID 5 data can be (re-)used for a large number of other purposes, due to the compatibility of IUCLID 5 data with the OECD Harmonized Templates. The European Commission IUCLID project team and international authorities are currently in deliberation in order to further promote acceptance of IUCLID5 data in non-REACH jurisdictions. Legislations and programmes under which IUCLID 5 data are certainly accepted are:
OECD Chemical Assessment Programme
US HPV Challenge Programme
Japan HPV Challenge Programme (provided OECD guidance for SIDS dossiers is followed there).
The IUCLID 5 data model also features Biocides/Pesticides elements. A dataset prepared for a substance under REACH can therefore be quickly complemented with data about possible biocidal or pesticidal properties and be re-used for data reporting obligations under the EU Biocides regulation.
The data are available and can be searched through the OECD eChemPortal.
Technology
IUCLID development and deployment
IUCLID 5 is a Java-based application, using the Hibernate framework for persistence. It features a Java Swing graphical user interface (GUI) and can be deployed on both single workstation and distributed environments.
IUCLID 5 offers the possibility to be deployed in:
either a 100% open source system environment, using Tomcat as web container and PostgreSQL as Database Management System (DBMS),
or a commercial system environment, using the Oracle WebLogic Server from Oracle Corporation as application server and/or Oracle as Database Management System (DBMS).
.i5z files
IUCLID 5 exports and imports files in the I5Z format. Files may be swapped between different IUCLID 5 installations and dossiers may be uploaded to ECHA via REACH-IT. I5Z stands for "IUCLID 5 Zip", as the file uses Zip file compression.
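Because the format is plain Zip data, the contents of an export can be listed with standard tooling; the file name in this minimal sketch is a hypothetical placeholder.

```python
import zipfile

# "substance_dossier.i5z" is a placeholder name for any IUCLID 5 export file.
with zipfile.ZipFile("substance_dossier.i5z") as archive:
    for name in archive.namelist():
        print(name)  # the XML documents and attachments bundled in the export
```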
IUCLID 5 system requirements
IUCLID can be deployed on any current PC. For optimal performance, RAM should not be less than 1 GB.
IUCLID 6
IUCLID 6 was made available on 24 June 2015 as a beta version so that large companies and other organisations could begin preparing their IT systems for the full release of IUCLID 6 in 2016. However, individual users and SMEs could also download the beta version to get a preview, and to become familiar with the user-interface.
The first official version of IUCLID 6 was published on 29 April 2016.
See also
European Chemicals Agency
OECD
European Commission
REACH
Institute for Health and Consumer Protection
Joint Research Centre
References
External links
IUCLID 5 website
IUCLID 6 website
European Chemicals Agency
Institute for Health and Consumer Protection On-Line
Joint Research Centre On-Line
REACH Legislation Full text
The REACH CD-ROM, a practical guide for REACH
REACH Services Overview & Official REACH Texts
eChemPortal, a global portal to information on Chemical Substances
Data entry accelerator for IUCLID
Cheminformatics
Government databases of the European Union
Java platform software
Regulation of chemicals in the European Union | IUCLID | [
"Chemistry"
] | 1,361 | [
"Regulation of chemicals in the European Union",
"Regulation of chemicals",
"Computational chemistry",
"nan",
"Cheminformatics"
] |
8,854,005 | https://en.wikipedia.org/wiki/Ground%20conductivity | Ground conductivity refers to the electrical conductivity of the subsurface of the earth. In the International System of Units (SI) it is measured in millisiemens per meter (mS/m).
Radio propagation
Ground conductivity is an extremely important factor in determining the field strength and propagation of surface wave (ground wave) radio transmissions. Low frequency (30–300 kHz) and medium frequency (300–3000 kHz) radio transmissions are particularly reliant on good ground conductivity as their primary propagation is by surface wave. It also affects the real world radiation pattern of high frequency (3-30 MHz) antennas, as the so-called "takeoff angle" is not an inherent property of the antenna but a result of a ground reflection. For this reason ITU publishes an extensive world atlas of ground conductivities.
Other uses
Ground conductivity is sometimes used in determining the efficiency of a septic tank, using electromagnetic induction, so that contaminants do not reach the surface or nearby water supplies.
References
External links
Ground conductivity maps in the United States (provided by the Federal Communications Commission and includes large scale map)
Measurement of the ground conductivity and relative permittivity with high frequency using an open wire line (OWL) (Practical example with network analyzer and mathematics for the conversion)
Broadcast engineering | Ground conductivity | [
"Engineering"
] | 269 | [
"Broadcast engineering",
"Electronic engineering"
] |
8,854,508 | https://en.wikipedia.org/wiki/Ostriker%E2%80%93Peebles%20criterion | In astronomy, the Ostriker–Peebles criterion, named after its discoverers Jeremiah Ostriker and Jim Peebles, describes the formation of barred galaxies.
The rotating disc of a spiral galaxy, consisting of stars and solar systems, may become unstable in such a way that stars in the outer parts of the "arms" are released from the galaxy system, with the remaining stars collapsing into a bar-shaped galaxy. This occurs in approximately 1/3 of the known spiral galaxies.
Based on the kinetic energy of rotation $T$ and the total gravitational energy $W$, a galaxy will become barred when the ratio $T/|W|$ exceeds approximately 0.14.
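A trivial sketch of applying the criterion as stated above (the threshold value is the commonly quoted approximation and should be treated as an assumption of this sketch):

```python
def bar_unstable(t_rot: float, w_grav: float, threshold: float = 0.14) -> bool:
    """Return True when the ratio T/|W| exceeds the (approximate) Ostriker-Peebles threshold."""
    return t_rot / abs(w_grav) > threshold


# Illustrative numbers only, in arbitrary but consistent energy units.
print(bar_unstable(t_rot=0.20, w_grav=-1.0))  # True: disc is bar-unstable
print(bar_unstable(t_rot=0.10, w_grav=-1.0))  # False
```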
References
External links
About barred galaxies
Extragalactic astronomy | Ostriker–Peebles criterion | [
"Astronomy"
] | 135 | [
"Extragalactic astronomy",
"Astronomical sub-disciplines"
] |
8,854,662 | https://en.wikipedia.org/wiki/Pore-forming%20toxin | Pore-forming proteins (PFTs, also known as pore-forming toxins) are usually produced by bacteria and include a number of protein exotoxins, but they may also be produced by other organisms such as apple snails, which produce perivitellin-2, and earthworms, which produce lysenin. They are frequently cytotoxic (i.e., they kill cells), as they create unregulated pores in the membrane of targeted cells.
Types
PFTs can be divided into two categories, depending on the alpha-helical or beta-barrel architecture of their transmembrane channel:
Alpha-pore-forming toxins
e.g., Haemolysin E family, actinoporins, Corynebacterial porin B, Cytolysin A of E. coli.
Beta-barrel pore-forming toxins
e.g. α-Hemolysin (Fig 1), PVL – Panton-Valentine leukocidin, various insecticidal toxins.
Other categories:
Large beta-barrel pore-forming toxins
MACPF and Cholesterol-dependent cytolysins (CDCs), gasdermin
Binary toxins
e.g., Anthrax toxin, Pleurotolysin
Small pore-forming toxins
e.g., Gramicidin A
According to TCDB, there are following families of pore-forming toxins:
1.C.3 α-Hemolysin (αHL) family:
1.C.4 Aerolysin family
1.C.5 ε-Toxin family
1.C.11 RTX-toxin superfamily
1.C.12 Membrane attack complex/perforin superfamily
1.C.13 Leukocidin family
1.C.14 Cytohemolysin (CHL) family
1.C.39 Thiol-activated cholesterol-dependent cytolysin family
1.C.43 Lysenin family
1.C.56 Pseudomonas syringae HrpZ cation channel family
1.C.57 Clostridial cytotoxin family
1.C.74 Snake cytotoxin (SCT) family
1.C.97 Pleurotolysin pore-forming family
Beta-pore-forming toxins
β-PFTs are so-named because of their structural characteristics: they are composed mostly of β-strand-based domains. They have divergent sequences, and are classified by Pfam into a number of families including Leukocidins, Etx-Mtx2, Toxin-10, and aegerolysin. X-ray crystallographic structures have revealed some commonalities: α-hemolysin and Panton-Valentine leukocidin S are structurally related. Similarly, aerolysin and clostridial epsilon-toxin. and Mtx2 are linked in the Etx/Mtx2 family.
The ß-PFTs include a number of toxins of commercial interest for the control of pest insects. These toxins are potent but also highly specific to a limited range of target insects, making them safe biological control agents.
Insecticidal members of the Etx/Mtx2 family include Mtx2 and Mtx3 from Lysinibacillus sphaericus that can control mosquito vectors of human diseases and also Cry15, Cry23, Cry33, Cry38, Cry45, Cry51, Cry60, Cry64 and Cry74 from Bacillus thuringiensis that control a range of insect pests that can cause great losses to agriculture.
Insecticidal toxins in the Toxin_10 family show an overall similarity to the aerolysin and Etx/Mtx2 toxin structures but differ in two notable features. While all of these toxins feature a head domain and a larger, extended beta-sheet tail domain, in the Toxin_10 family, the head is formed exclusively from the N-terminal region of the primary amino acid sequence whereas regions from throughout the protein sequence contribute to the head domain in Etx/Mtx2 toxins. In addition, the head domains of the Toxin_10 proteins show lectin-like features of carbohydrate binding domains. The only reported natural targets of Toxin_10 proteins are insects. With the exception of Cry36 and Cry78, the Toxin_10 toxins appear to act as two-part, binary toxins. The partner proteins in these combinations may belong to different structural groups, depending on the individual toxin: two Toxin_10 proteins (BinA and BinB) act together in the Bin mosquitocidal toxin of Lysinibacillus sphaericus; the Toxin_10 Cry49 is co-dependent on the 3-domain toxin family member Cry48 for its activity against Culex mosquito larvae; and the Bacillus thuringiensis Toxin_10 protein Cry35 interacts with the aegerolysin family Cry34 to kill Western Corn Rootworm. This toxin pair has been included in insect resistant plants such as SmartStax corn.
Mode of action
β-PFTs are dimorphic proteins that exist as soluble monomers and then assemble to form multimeric assemblies that constitute the pore. Figure 1 shows the pore-form of α-hemolysin, the first crystal structure of a β-PFT in its pore-form. 7 α-hemolysin monomers come together to create the mushroom-shaped pore. The 'cap' of the mushroom sits on the surface of the cell, and the 'stalk' of the mushroom penetrates the cell membrane, rendering it permeable (see later). The 'stalk' is composed of a 14-strand β-barrel, with two strands donated from each monomer.
A structure of the Vibrio cholerae cytolysin in the pore form is also heptameric; however, Staphylococcus aureus gamma-hemolysin reveals an octameric pore, consequently with a 16-strand 'stalk'. The Panton-Valentine leucocidin S structure shows a highly related structure, but in its soluble monomeric state. This shows that the strands involved in forming the 'stalk' are in a very different conformation – shown in Fig 2.
While the Bin toxin of Lysinibacillus sphaericus is able to form pores in artificial membranes and mosquito cells in culture, it also causes a series of other cellular changes including the uptake of toxin in recycling endosomes and the production of large, autophagic vesicles and the ultimate cause of cell death may be apoptotic. Similar effects on cell biology are also seen with other Toxin_10 activities but the roles of these events in toxicity remain to be established.
Assembly
The transition between soluble monomer and membrane-associated protomer to oligomer is not a trivial one: it is believed that β-PFTs follow a similar assembly pathway to the CDCs (see later), in that they must first assemble on the cell surface (in a receptor-mediated fashion in some cases) in a pre-pore state. Following this, a large-scale conformational change occurs in which the membrane-spanning section is formed and inserted into the membrane. The portion entering the membrane, referred to as the head, is usually apolar and hydrophobic, which makes insertion of the pore-forming toxin energetically favorable.
Specificity
Some β-PFTs such as clostridial ε-toxin and Clostridium perfringens enterotoxin (CPE) bind to the cell membrane via specific receptors – possibly certain claudins for CPE, possibly GPI anchors or other sugars for ε-toxin – these receptors help raise the local concentration of the toxins, allowing oligomerisation and pore formation.
The BinB Toxin_10 component of the Lysinibacillus sphaericus Bin toxin specifically recognises a GPI anchored alpha glycosidase in the midgut of Culex and Anopheles mosquitoes but not the related protein found in Aedes mosquitoes, hence conferring specificity on the toxin.
The cyto-lethal effects of the pore
When the pore is formed, the tight regulation of what can and cannot enter/leave a cell is disrupted. Ions and small molecules, such as amino acids and nucleotides within the cell, flow out, and water from the surrounding tissue enters. The loss of important small molecules can disrupt protein synthesis and other crucial cellular reactions. The loss of ions, especially calcium, can cause cell signaling pathways to be spuriously activated or deactivated. The uncontrolled entry of water can cause the cell to swell up uncontrollably: this causes a process called blebbing, wherein large parts of the cell membrane are distorted and give way under the mounting internal pressure. In the end, this can cause the cell to burst. In particular, anucleate erythrocytes under the influence of alpha-staphylotoxin undergo hemolysis with the loss of the large protein hemoglobin.
Binary toxins
There are many different types of binary toxins. The term binary toxin simply implies a two part toxin where both components are necessary for toxic activity. Several β-PFTs form binary toxins.
As discussed above, the majority of the Toxin_10 family proteins act as part of binary toxins with partner proteins that may belong to the Toxin_10 or other structural families. The interplay of the individual components has not been well studied to date. Other beta sheet toxins of commercial importance are also binary. These include the Cry23/Cry37 toxin from Bacillus thuringiensis. These toxins have some structural similarity to the Cry34/Cry35 binary toxin but neither component shows a match to established Pfam families and the features of the larger Cry23 protein have more in common with the Etx/Mtx2 family than the Toxin_10 family to which Cry35 belongs.
Enzymatic binary toxins
Some binary toxins are composed of an enzymatic component and a component that is involved in membrane interactions and entry of the enzymatic component into the cell. The membrane-interacting component may have structural domains that are rich in beta sheets. Binary toxins, such as anthrax lethal and edema toxins (Main article: Anthrax toxin), C. perfringens iota toxin and C. difficile cyto-lethal toxins, consist of two components (hence binary):
an enzymatic component – A
a membrane-altering component – B
In these enzymatic binary toxins, the B component facilitates the entry of the enzymatic 'payload' (A subunit) into the target cell by forming homooligomeric pores, as shown above for β-PFTs. The A component then enters the cytosol and inhibits normal cell functions by one of the following means:
ADP-ribosylation
ADP-ribosylation is a common enzymatic method used by different bacterial toxins from various species. Toxins such as C. perfringens iota toxin and C. botulinum C2 toxin attach an ADP-ribosyl moiety to the surface arginine residue 177 of G-actin. This prevents G-actin from assembling to form F-actin; the cytoskeleton thus breaks down, resulting in cell death. Insecticidal members of the ADP-ribosyltransferase family of toxins include the Mtx1 toxin of Lysinibacillus sphaericus, the Vip1/Vip2 toxin of Bacillus thuringiensis and some members of the toxin complex (Tc) toxins from gram-negative bacteria such as Photorhabdus and Xenorhabdus species. The beta sheet-rich regions of the Mtx1 protein are lectin-like sequences that may be involved in glycolipid interactions.
Proteolysis of mitogen-activated protein kinase kinases (MAPKK)
The A component of anthrax lethal toxin is a zinc metalloprotease, which shows specificity for a conserved family of mitogen-activated protein kinase kinases. The loss of these proteins results in a breakdown of cell signaling, which, in turn, renders the cell insensitive to outside stimuli – therefore no immune response is triggered.
Increasing intracellular levels of cAMP
Anthrax edema toxin triggers a calcium ion influx into the target cell. This subsequently elevates intracellular cAMP levels, which can profoundly alter any sort of immune response by inhibiting leucocyte proliferation, phagocytosis, and proinflammatory cytokine release.
Cholesterol-dependent cytolysins
CDCs, such as pneumolysin, from S. pneumoniae, form pores as large as 260 Å (26 nm), containing between 30 and 44 monomer units. Electron microscopy studies of pneumolysin show that it assembles into large multimeric peripheral membrane complexes before undergoing a conformational change in which a group of α-helices in each monomer change into extended, amphipathic β-hairpins that span the membrane, in a manner reminiscent of α-haemolysin, albeit on a much larger scale (Fig 3). CDCs are homologous to the MACPF family of pore-forming toxins, and it is suggested that both families use a common mechanism (Fig 4). Eukaryote MACPF proteins function in immune defence and are found in proteins such as perforin and complement C9 though perivitellin-2 is a MACPF attached to a delivery lectin that has enterotoxic and neurotoxic properties toward mice.
A family of highly conserved cholesterol-dependent cytolysins closely related to perfringolysin from Clostridium perfringens is produced by bacteria from across the order Bacillales and includes anthrolysin, alveolysin and sphaericolysin. Sphaericolysin has been shown to exhibit toxicity to a limited range of insects injected with the purified protein.
Biological function
Bacteria may invest much time and energy in making these toxins: CPE can account for up to 15% of the dry mass of C. perfringens at the time of sporulation. The purpose of toxins is thought to be one of the following:
Defense against phagocytosis, e.g., by a macrophage.
Inside a host, provoking a response that is beneficial for the proliferation of the bacteria, for example in cholera, or, in the case of insecticidal bacteria, killing the insect to provide a rich source of nutrients in the cadaver for bacterial growth.
Food: After the target cell has ruptured and released its contents, the bacteria can scavenge the remains for nutrients or, as above, bacteria can colonise insect cadavers.
Environment: The mammalian immune response helps create the anaerobic environment that anaerobic bacteria require.
See also
Exotoxin
References
Further reading
A deadly toxin with a romantic name: Panton-Valentine Leukocidin complex. PDBe Quips
External links
Protein toxins
Peripheral membrane proteins | Pore-forming toxin | [
"Chemistry"
] | 3,237 | [
"Protein toxins",
"Toxins by chemical classification"
] |
8,854,753 | https://en.wikipedia.org/wiki/Calcium%20sparks | A calcium spark is the microscopic release of calcium (Ca2+) from a store known as the sarcoplasmic reticulum (SR), located within muscle cells. This release occurs through an ion channel within the membrane of the SR, known as a ryanodine receptor (RyR), which opens upon activation. This process is important as it helps to maintain Ca2+ concentration within the cell. It also initiates muscle contraction in skeletal and cardiac muscles and muscle relaxation in smooth muscles. Ca2+ sparks are important in physiology as they show how Ca2+ can be used at a subcellular level, to signal both local changes, known as local control, as well as whole cell changes.
Activation
As mentioned above, Ca2+ sparks depend on the opening of ryanodine receptors, of which there are three types:
Type 1 – found mainly in skeletal muscle
Type 2 – found mainly in the heart
Type 3 – found mainly in the brain
Opening of the channel allows Ca2+ to pass from the SR into the cell. This increases the local Ca2+ concentration around the RyR by a factor of 10. Calcium sparks can either be evoked or spontaneous, as described below.
Evoked
Electrical impulses, known as action potentials, travel along the cell membrane (sarcolemma) of muscle cells. Located in the sarcolemma of smooth muscle cells are receptors called dihydropyridine receptors (DHPRs). In skeletal and cardiac muscle cells, however, these receptors are located within structures known as T-tubules, which are extensions of the plasma membrane that penetrate deep into the cell (see figure 1). The DHPRs sit directly opposite the ryanodine receptors on the sarcoplasmic reticulum, and activation by the action potential causes the DHPRs to change shape.
In cardiac and smooth muscle, activation of the DHPR results in it forming an ion channel. This allows Ca2+ to pass into the cell, increasing the local Ca2+ concentration around the RyR. When four Ca2+ ions bind to the RyR, it opens, resulting in a larger release of Ca2+ from the SR. This process of using Ca2+ to activate release of Ca2+ from the SR is known as calcium-induced calcium release.
However, in skeletal muscle the DHPR touches the RyR. Therefore, the shape change of the DHPR activates the RyR directly, without the need for Ca2+ to flood into the cell first. This causes the RyR to open, allowing Ca2+ to be released from the SR.
Spontaneous
Ca2+ sparks can also occur in cells at rest (i.e. cells that have not been stimulated by an action potential). This occurs roughly 100 times every second in each cell and is a result of the Ca2+ concentration within the SR becoming too high. Ca2+ within the SR is thought to bind to Ca2+-sensitive sites on the inside of the RyR, causing the channel to open. In addition, a protein called calsequestrin (found within the SR) detaches from the RyR when the calcium concentration is too high, again allowing the channel to open (see sarcoplasmic reticulum for more details). Similarly, a decrease in Ca2+ concentration within the SR has also been shown to lower RyR sensitivity. This is thought to be due to calsequestrin binding more strongly to the RyR, preventing it from opening and decreasing the likelihood of a spontaneous spark.
Calcium after release
There are roughly 10,000 clusters of ryanodine receptors within a single cardiac cell, with each cluster containing around 100 ryanodine receptors. During a single spontaneous spark, when Ca2+ is released from the SR, the Ca2+ diffuses throughout the cell. As the RyRs in the heart are activated by Ca2+, the Ca2+ released during a spontaneous spark can activate other neighbouring RyRs within the same cluster. However, there is usually not enough Ca2+ present in a single spark to reach a neighbouring cluster of receptors. The calcium can, however, signal back to the DHPR, causing it to close and preventing further influx of calcium. This is known as negative feedback.
An increase in Ca2+ concentration within the cell, or the production of a larger spark, can release enough calcium for a neighbouring cluster to be activated by the first. This is known as spark-induced spark activation and can lead to a wave of Ca2+ release spreading across the cell.
During evoked Ca2+ sparks, all clusters of ryanodine receptors throughout the cell are activated at almost exactly the same time. This produces an increase in Ca2+ concentration across the whole cell (not just locally) and is known as a whole-cell Ca2+ transient. This Ca2+ then binds to a protein called troponin, initiating contraction through a group of proteins known as myofilaments.
In smooth muscle cells, the Ca2+ released during a spark is used for muscle relaxation. This is because the Ca2+ that enters the cell via the DHPR in response to the action potential stimulates both muscle contraction and calcium release from the SR. The Ca2+ released during the spark then activates two other ion channels on the membrane. One channel allows potassium ions to exit the cell, whereas the other allows chloride ions to leave the cell. The result of this movement of ions is that the membrane voltage becomes more negative. This deactivates the DHPR (which was activated by the positive membrane potential produced by the action potential), causing it to close and stopping the flow of Ca2+ into the cell, leading to relaxation.
Termination
The mechanism by which SR Ca2+ release terminates is still not fully understood. Current main theories are outlined below:
Local depletion of SR Ca2+
This theory suggests that during a calcium spark, as calcium flows out of the SR, the concentration of Ca2+ within the SR becomes too low to sustain release. This was not initially thought to be the case for spontaneous sparks, as the total release during a Ca2+ spark is small compared with the total SR Ca2+ content, and researchers have produced sparks lasting longer than 200 milliseconds, showing that there is still enough Ca2+ left within the SR after a 'normal' (200 ms) spark. However, local depletion in the junctional SR may be much larger than previously thought. During the activation of a large number of ryanodine receptors, as is the case during electrically evoked Ca2+ release, the entire SR is about 50% depleted of Ca2+, and this mechanism will play an important role in repriming of release.
Stochastic attrition
Despite the complicated name, this idea simply suggests that all ryanodine receptors in a cluster, and the associated dihydropyridine receptors happen to randomly close at the same time. This would not only prevent calcium release from the SR, but it would also stop the stimulus for calcium release (i.e. the flow of calcium through the DHPR). However, due to the large numbers of RyRs and DHPRs in a single cell, this theory seems to be unrealistic, as there is a very small probability that they would all close together at exactly the same time.
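To see why, consider a rough back-of-the-envelope illustration (the probability value below is an assumption chosen purely for the sake of argument, not a measured quantity). If each of the roughly 100 RyRs in a cluster were, at any given instant, closed with independent probability $p = 0.5$, the chance of finding every channel in the cluster closed at the same moment would be

$p^{100} = 0.5^{100} \approx 8 \times 10^{-31}$,

so simultaneous random closure of a whole cluster – let alone of the associated DHPRs as well – is effectively never expected to occur by chance alone.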
Inactivation/adaptation
This theory suggests that after activation of the RyR and the subsequent release of Ca2+, the channel closes briefly to recover. During this time, either the channel cannot be reopened even if calcium is present (i.e. the RyR is inactivated), or the channel can be reopened but more calcium is required to activate it than usual (i.e. the RyR is in an adaptation phase). This would mean that one by one the RyRs would close, thus ending the spark.
Sticky cluster theory
This theory suggests that the above three theories all play a role in preventing calcium release.
Discovery
Spontaneous Ca2+ sparks were discovered in cardiac muscle cells, of rats, in 1992 by Peace Cheng and Mark B. Cannell in Jon Lederer's laboratory at the University of Maryland, Baltimore, U.S.A.
Initially the idea was rejected by the scientific journal Nature, which believed that the sparks were only present under laboratory conditions (i.e. that they were artifacts) and so would not occur naturally within the body. However, they were quickly recognised as being of fundamental importance to muscle physiology, playing a huge role in excitation-contraction coupling.
The discovery was made possible by improvements in confocal microscopes. These allowed the detection of Ca2+ release, which was highlighted using a substance known as fluo-3, which caused the Ca2+ to glow. Ca2+ “sparks” were so called because of the spontaneous, localised nature of the Ca2+ release, as well as the fact that they are the initiation event of excitation-contraction coupling.
Detection and analysis
Because of the importance of Ca2+ sparks in explaining the gating properties of ryanodine receptors in situ (within the body), many studies have focused on improving their detectability in the hope that by accurately and reliably detecting all Ca2+ spark events, their true properties can finally help us to answer the unsolved mystery of spark termination.
See also
Calcium-induced calcium release
Confocal microscopy
Ryanodine receptor
References
External links
Software
SparkMaster - Automated Ca2+ Spark Analysis with ImageJ - Free software for Ca2+ spark analysis in confocal linescan images
Cell biology | Calcium sparks | [
"Biology"
] | 1,998 | [
"Cell biology"
] |
8,855,042 | https://en.wikipedia.org/wiki/Building%20insulation%20material | Building insulation materials are the building materials that form the thermal envelope of a building or otherwise reduce heat transfer.
Insulation may be categorized by its composition (natural or synthetic materials), form (batts, blankets, loose-fill, spray foam, and panels), structural contribution (insulating concrete forms, structured panels, and straw bales), functional mode (conductive, radiative, convective), resistance to heat transfer, environmental impacts, and more. Sometimes a thermally reflective surface called a radiant barrier is added to a material to reduce the transfer of heat through radiation as well as conduction. The choice of which material or combination of materials is used depends on a wide variety of factors. Some insulation materials have health risks, some so significant that the materials are no longer permitted to be used but remain in place in some older buildings, such as asbestos fibers and urea-formaldehyde foam.
Consideration of materials used
Factors affecting the type and amount of insulation to use in a building include:
Thermal conductivity
Moisture sensitivity
Compressive strength
Ease of installation
Durability – resistance to degradation from compression, moisture, decomposition, etc.
Ease of replacement at end of life
Cost effectiveness
Toxicity
Flammability
Environmental impact and sustainability
Considerations regarding building and climate:
The average climate conditions in the geographical area the building is located
The temperature the building is used at
Often a combination of materials is used to achieve an optimum solution and there are products which combine different types of insulation into a single form.
Spray foam
Spray foam is a type of insulation that is sprayed in place through a gun. Polyurethane and isocyanate foams are applied as a two-component mixture that comes together at the tip of a gun, and forms an expanding foam. Cementitious foam is applied in a similar manner but does not expand. Spray foam insulation is sprayed onto concrete slabs, into wall cavities of an unfinished wall, against the interior side of sheathing, or through holes drilled in sheathing or drywall into the wall cavity of a finished wall.
Advantages
Blocks airflow by expanding & sealing off leaks, gaps and penetrations. (This can also keep out bugs or other vermin)
Can serve as a semi-permeable vapor barrier with a better permeability rating than plastic sheeting vapor barriers and consequently reduce the buildup of moisture, which can cause mold growth.
Can fill wall cavities in finished walls without tearing the walls apart (as required with batts).
Works well in tight spaces (like loose-fill, but superior).
Provides acoustical insulation (like loose-fill, but superior).
Expands while curing, filling bypasses, and providing excellent resistance to air infiltration (unlike batts and blankets, which can leave bypasses and air pockets, and superior to some types of loose-fill. Wet-spray cellulose is comparable.).
Increases structural stability (unlike loose-fill, similar to wet-spray cellulose).
Can be used in places where loose-fill cannot, such as between joists and rafters. When used between rafters, the spray foam can cover up the nails protruding from the underside of the sheathing, protecting your head.
Can be applied in small quantities.
Cementitious foam is fireproof.
Disadvantages
The cost can be high compared to traditional insulation.
Most foams, with the exception of cementitious foams, release toxic fumes when they burn.
According to the US Environmental Protection Agency, there is insufficient data to accurately assess the potential for exposures to the toxic and environmentally harmful isocyanates which constitute 50% of the foam material.
Depending on usage and building codes and environment, most foams require protection with a thermal barrier such as drywall on the interior of a house. For example, a 15-minute fire rating may be required.
Can shrink slightly while curing if not applied on a substrate heated to the manufacturer's recommended temperature.
Although CFCs are no longer used, some use HCFCs or HFCs as blowing agents. Both are potent greenhouse gases, and HCFCs have some ozone depletion potential.
Many foam insulations are made from petrochemicals and may be a concern for those seeking to reduce the use of fossil fuels and oil. However, some foams are becoming available that are made from renewable or recycled sources.
R-value will diminish slightly with age, though the degradation of R-value stops once an equilibrium with the environment is reached. Even after this process, the stabilized R-value is very high.
Most foams require protection from sunlight and solvents.
It is difficult to retrofit some foams to an existing building structure because of the chemicals and processes involved.
If a protective mask or goggles are not worn, vision may be temporarily impaired (for 2–5 days).
May require the HVAC system to have a source of fresh outside air, since the structure may not refresh inside air without it.
Advantages of closed-cell over open-cell foams
Open-cell foam is porous, allowing water vapor and liquid water to penetrate the insulation. Closed-cell foam is non-porous, and not moisture-penetrable, thereby effectively forming a semi-permeable vapor barrier. (N.B., vapor barriers are usually required by the Building Codes, regardless of the type of insulation used. Check with the local authorities to find out the requirements for your area.)
Closed-cell foams are superior insulators. While open-cell foams typically have R-values of 3 to 4 per inch (RSI-0.53 to RSI-0.70 per inch), closed-cell foams can attain R-values of 5 to 8 per inch (RSI-0.88 to RSI-1.41 per inch). This is important if space is limited, because it allows a thinner layer of insulation to be used. For example, a 1-inch layer of closed-cell foam provides about the same insulation factor as 2 inches of open-cell foam (see the worked comparison after this list).
Closed-cell foam is very strong, and structurally reinforces the insulated surface. By contrast, open-cell foam is soft when cured, with little structural strength.
Open-cell foam requires trimming after installation, and disposal of the waste material. Unlike open-cell foam, closed-cell foam rarely requires any trimming, with little or no waste.
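As a worked illustration of the closed-cell advantage noted above (the per-inch midpoints used here are rough illustrative values taken from the ranges quoted, not product specifications): the total thermal resistance of a layer is approximately the per-inch value multiplied by its thickness, $R \approx r_{\text{per inch}} \times t$. Taking roughly $r \approx 6.5$ per inch for closed-cell and $r \approx 3.5$ per inch for open-cell foam gives

$R_{\text{closed}} \approx 6.5 \times 1\ \text{in} \approx 6.5, \qquad R_{\text{open}} \approx 3.5 \times 2\ \text{in} \approx 7.0$,

which is why a 1-inch closed-cell layer is quoted as roughly equivalent to 2 inches of open-cell foam. The metric RSI figures follow from the standard conversion $\text{RSI} \approx R / 5.678$.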
Advantages of open-cell over closed-cell foams
Open cell foams will allow timber to breathe.
Open cell foams are incredibly effective as a sound barrier, having about twice the sound resistance in normal frequency ranges as closed-cell foam.
Open cell foams provide a better economical yield.
Open cell foams often have a low exothermic reaction temperature; will not harm coatings on electrical wiring, plumbing or other building components.
Types
Cementitious foam One example is AirKrete, at R-3.9 (RSI-0.69) per inch and no restriction on depth of application. Non-hazardous. Being fireproof, it will not smoke at all upon direct contact with flame, and is a two-hour firewall at a (or normal stud wall) application, per ASTM E-814 testing (UL 1479). Great for sound deadening; does not echo like other foams. Environmentally friendly. Non-expansive (good for existing homes where interior sheathing is in place). Fully sustainable: Consists of magnesium oxide cement and air, which is made from magnesium oxide extracted from seawater. Blown with air (no CFCs, HCFCs or other harmful blowing agents). Nontoxic, even during application. Does not shrink or settle. Zero VOC emission. Chemically inert (no known symptoms of exposure per MSDS). Insect resistant. Mold Proof. Insoluble in water. Disadvantages: Fragile at the low densities needed to achieve the quoted R value and, like all foams, it is more expensive than conventional fiber insulations. In 2010, the Ontario Building Code Commission ruled that AirKrete did not conform to requirements for a specific application in the building code. Their ruling states "As the proposed insulation is not impermeable, it could allow water or moisture to enter the wall assembly, which could then cause damage or deterioration of the building elements." As of 2014-08-21, the domain airkretecanada.com appears to be abandoned.
Polyisocyanurate Typically R-5.6 (RSI-0.99) or slightly better after stabilization – higher values (at least R-7, or RSI-1.23) in stabilized boards. Less flammable than polyurethane.
Phenolic injection foam Such as Tripolymer R-5.1 per inch (ASTM-C-177). Known for its air sealing abilities. Tripolymer can be installed in wall cavities that have fiberglass and cellulose in them. Non-hazardous. Not restricted by depth of application. Fire resistant – flame spread 5, smoke spread 0 (ASTM-E-84) – will not smoke at all upon direct contact with flame and is a two-hour firewall at a , or normal stud wall, application per ASTM E-199. Great for sound deadening, STC 53 (ASTM E413-73); does not echo like other foams. Environmentally friendly. Non-expansive (good for existing homes where interior sheathing is in place). Fully sustainable: Consists of phenolic, a foaming agent, and air. Blown with air (no CFCs, HCFCs or other harmful blowing agents). Nontoxic, even during application. Does not shrink or settle. Zero VOC emission. Chemically inert (no known symptoms of exposure per MSDS). Insect resistant. Mold Proof. Insoluble in water. Disadvantages: Like all foams, it is more expensive than conventional fiber insulations when only comparing sq ft pricing. When you compare price to R value per sq ft the price is about the same.
Polystyrene (expanded polystyrene (EPS) and extruded polystyrene (XPS))
Closed-cell polyurethane White or yellow. May use a variety of blowing agents. Resistant to water wicking and water vapor. An example of a commercial closed-cell polyurethane product:
Ecomate®
R-8 per inch. Ecomate® is a trademarked foam blowing agent technology and family of polyurethanes with a neutral impact on the environment (the worldwide patent was awarded to Foam Supplies Incorporated (FSI) in 2002). This is a new-generation eco-friendly foam blowing agent, based on naturally occurring methyl methanoate, that is free of chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), and hydrofluorocarbons (HFCs).
Open-cell (low density) polyurethane White or yellow. Expands to fill and seal the cavity, but expands slowly, preventing damage to the wall. Resistant to water wicking, but permeable to water vapor. Fire resistant. Some types of polyurethane insulation are pourable.
Here are two commercial open-cell, low-density polyurethane products:
Icynene Icynene is a trademarked brand of isocyanate open-cell spray foam from Huntsman Building Solutions. The classic version has a thermal resistance (R-value) of 3.7 per inch and other versions have even higher values. The formula also includes a flame retardant. Icynene uses water for its spray application, and the chemical expansion is caused by the carbon dioxide generated by the reaction between the water and the isocyanate material. Icynene will expand up to 100 times its original size within the first 6 seconds of being applied. Icynene contains no ozone-depleting substances such as CFCs, HFCs, or HCFCs. Icynene contains volatile organic compounds (VOCs). Icynene will not emit any harmful gases once cured. Icynene has a global warming potential of 1. Flammability is relatively low. Icynene maintains its efficiency with no loss of R-value for the life of the installation. Icynene is more expensive than traditional insulation methods. Any potential for harm is primarily during the installation phase and particularly for installers. The manufacture of Icynene involves many toxic petrochemicals.
Sealection 500 spray foam R-3.8 (RSI-0.67) per inch. A water-blown, low-density spray polyurethane foam that uses water in a chemical reaction to create carbon dioxide and steam, which expand the foam. Flame spread is 21 and smoke developed is 217, which makes it a Class I material (best fire rating). Disadvantages: it is an isocyanate.
Insulating concrete forms
Insulating concrete forms (ICFs) are stay-in-place formwork made from insulating materials to build energy-efficient, cast-in-place, reinforced concrete walls.
Rigid panels
Rigid panel insulation, also known as continuous insulation, can be made from foam plastics such as polyisocyanurate or polystyrene, or from fibrous materials such as fiberglass, rock and slag wool. Rigid panel continuous insulation is often used to provide a thermal break in the building envelope, thus reducing thermal bridging.
Structural insulated panels
Structural insulated panels (SIPs), also called stressed-skin walls, use the same concept as in foam-core external doors, but extend the concept to the entire house. They can be used for ceilings, floors, walls, and roofs. The panels usually consist of plywood, oriented strandboard, or drywall glued and sandwiched around a core consisting of expanded polystyrene, polyurethane, polyisocyanurate, compressed wheat straw, or epoxy. Epoxy is too expensive to use as an insulator on its own, but it has a high R-value (7 to 9), high strength, and good chemical and moisture resistance.
SIPs come in various thicknesses. When building a house, they are glued together and secured with lumber. They provide the structural support, rather than the studs used in traditional framing.
Advantages
Strong. Able to bear loads, including external loads from precipitation and wind.
Faster construction than stick-built house. Less lumber required.
Insulate acoustically.
Impermeable to moisture.
Can truck prefabricated panels to construction site and assemble on site.
Create shell of solid insulation around house, while reducing bypasses common with stick-frame construction. The result is an inherently energy-efficient house.
Do not use formaldehyde, CFCs, or HCFCs in manufacturing.
True R-values and lower energy costs.
Disadvantages
More expensive than other types of insulation.
Thermal bridging at splines and lumber fastening points unless a thermally broken spline is used (insulated lumber).
Fiberglass batts and blankets (glass wool)
Batts are precut, whereas blankets are available in continuous rolls. Compressing the material reduces its effectiveness. Cutting it to accommodate electrical boxes and other obstructions allows air a free path to cross through the wall cavity. One can install batts in two layers across an unfinished attic floor, perpendicular to each other, for increased effectiveness at preventing heat bridging. Blankets can cover joists and studs as well as the space between them. Batts can be challenging and unpleasant to hang under floors between joists; straps, or staple cloth or wire mesh across joists, can hold them up.
Gaps between batts (bypasses) can become sites of air infiltration or condensation (both of which reduce the effectiveness of the insulation) and require strict attention during installation. By the same token, careful weatherization and installation of vapor barriers are required to ensure that the batts perform optimally. Air infiltration can also be reduced by adding a layer of cellulose loose-fill on top of the material.
Types
Rock and slag wool. Usually made from rock (basalt, diabase) or iron ore blast furnace slag. Some rock wool contains recycled glass. Nonflammable.
Fiberglass. Made from molten glass, usually with 20% to 30% recycled industrial waste and post-consumer content. Nonflammable, except for the facing (if present). Sometimes, the manufacturer modifies the facing so that it is fire-resistant. Some fiberglass is unfaced, some is paper-faced with a thin layer of asphalt, and some is foil-faced. Paper-faced batts are vapor retarders, not vapor barriers. Foil-faced batts are vapor barriers. The vapor barrier must be installed toward the warm side.
High-density fiberglass
Plastic fiber, usually made from recycled plastic. Does not cause irritation like fiberglass, but more difficult to cut than fiberglass. Not used in US. Flammable, but treated with fire-retardant.
Natural fiber
Natural fiber insulations, treated as necessary with low-toxicity fire and insect retardants, are available in Europe. Natural fiber insulations can be used loose as granules, or formed into flexible or semi-rigid panels and rigid panels using a binder (mostly synthetic, such as polyester, polyurethane or polyolefin). The binder material can be new or recycled.
Examples include cork, cotton, recycled tissue/clothes, hemp, flax, coco, wool, lightweight wood fiber, cellulose, seaweed, etc. Similarly, many plant-based waste materials can be used as insulation, such as nut shells, corncobs, most straws including lavender straw, recycled wine bottle corks (granulated), etc. They usually have significantly lower thermal performance than industrial products; this can be compensated for by increasing the thickness of the insulation layer. They may or may not require fire retardants or anti-insect/pest treatments. Clay coating is a nontoxic additive which often meets these requirements.
Traditional clay-impregnated light straw insulation has been used for centuries in the northern climates of Europe. The clay coating gives the insulation a half hour fire rating according to DIN (German) standards.
An additional source of insulation derived from hemp is hempcrete, which consists of hemp hurds (shives) mixed with a lime binder. It has little structural strength but can provide racking strength and insulation with comparable or superior R-values depending on the ratio of hemp to binder.
Cork insulation Board
Cork was first used as a material around the 2nd century (c. 100–200 CE), but it was not widely used until the 19th century, which led to major industrial production. Cork is harvested from the bark of cork oak trees, generally found in Portugal, Spain and other Mediterranean countries. Once a tree reaches 20 to 35 years of age, its bark can be harvested at roughly 10-year intervals for more than 200 years. The bark has a lattice-like molecular structure filled with millions of air bubbles, giving it resilience, elasticity, thermal insulating, acoustic dampening, and shock absorbing properties. The material is sustainable, reusable and recyclable.
There are two types of cork: pure cork, which is preferable due to its natural bonding properties, and agglomeration cork. Pure cork is made by processes of heating and steaming, whereby cork granulates are molded into a block; the natural resin of the cork acts as the bonding agent. An artificial bonding agent is required for the production of agglomeration cork.
Cork is typically used for acoustic and thermal insulation within walls, floors, ceilings and facades. A natural fire retardant, thermal insulating cork board is also non-allergenic, simple to install and a considerably safer substitute for fiber- and plastic-based insulation. Notable challenges with cork include difficulty in maintenance and cleaning, especially if the material is exposed to heavy use, such as insulation for flooring. Minor damage to the cork surface can make the material more prone to staining.
Sheep's wool insulation
Sheep's wool insulation is a very efficient thermal insulator, with a performance similar to fiberglass: approximately R13–R16 for a 4-inch-thick layer. Sheep's wool shows no reduction in performance even when condensation is present, but its fire-retardant treatment can deteriorate with repeated exposure to moisture. It is made from the waste wool that the carpet and textile industries reject, and is available in both rolls and batts for both thermal and acoustic insulation of housing and commercial buildings. Wool is capable of absorbing as much as 40% of its own weight in condensation while remaining dry to the touch. As wool absorbs moisture it heats up and therefore reduces the risk of condensation. It has the unique ability to absorb gases such as formaldehyde, nitrogen dioxide and sulphur dioxide and lock them up permanently. Sheep's wool insulation has a long lifetime due to the natural crimp in the fibre; endurance testing has shown it has a life expectancy of over 100 years.
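For comparison with the per-inch figures quoted elsewhere in this article, simple division of these numbers (an arithmetic restatement of the quoted range, not an independent measurement) gives

$r_{\text{per inch}} \approx \frac{13\ \text{to}\ 16}{4\ \text{in}} \approx 3.3\ \text{to}\ 4.0\ \text{per inch}$,

which is in line with the roughly R-3.7 per inch cited below as the median value for fiberglass and cotton batts.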
Wood fiber
Wood fiber insulation is available as loose fill, flexible batts and rigid panels for all thermal and sound insulation uses.
It can be used as internal insulation: between studs, joists or ceiling rafters; under timber floors to reduce sound transmittance; or against masonry walls.
It can also be used externally: behind rain screen cladding or roofing, or directly plastered/rendered, over timber rafters, studs or masonry structures, as external insulation to reduce thermal bridges.
There are two manufacturing processes:
a wet process, similar to that of pulp mills, in which the fibers are softened and, under heat and pressure, the lignin in the fibres is used to bind the boards. The boards are limited to approximately 25 mm thickness; thicker boards are made by gluing (with modified starch or PVA wood glue). Additives such as latex or bitumen are added to increase water resistance.
a dry process, in which a synthetic binder such as PET (a melt-bonding polyester), polyolefin or polyurethane is added and the boards/batts are pressed to different densities to make flexible batts or rigid boards.
Cotton batts
Cotton insulation is increasing in popularity as an environmentally preferable option for insulation. It has an R-value of around 3.7 (RSI-0.65), equivalent to the median value for fiberglass batts. The cotton is primarily recycled industrial scrap, providing a sustainability benefit. The batts do not use the toxic formaldehyde backing found in fiberglass, and the manufacture is nowhere near as energy intensive as the mining and production process required for fiberglass. Boric acid is used as a flame retardant. A small quantity of polyolefin is melted as an adhesive to bind the product together (and is preferable to formaldehyde adhesives). Installation is similar to fiberglass, without the need for a respirator but requiring some additional time to cut the material. Cotton insulation costs about 10-20% more than fiberglass insulation. As with any batt insulation, proper installation is important to ensure high energy efficiency.
Advantages
Equivalent R-Value to typical fiberglass batts
Recycled content, no formaldehyde or other toxic substances, and very low toxicity during manufacture (only from the polyolefin)
May help qualify for LEED or similar environmental building certification programs
Fibers do not cause itchiness, no cancer risk from airborne fibers
Disadvantages
Difficult to cut. Some installers may charge a slightly higher cost for installation as compared to other batts. This does not affect the effectiveness of the insulation, but may require choosing an installer more carefully, as any batt should be cut to fit the cavity well.
Even with proper installation, batts do not completely seal the cavity against air movement (as with cellulose or expanding foam).
Still requires a vapor retarder or barrier (unlike cellulose)
May be hard to dry if a leak allows excessive moisture into the insulated cavity
Loose-fill (including cellulose)
Loose-fill materials can be blown into attics, finished wall cavities, and hard-to-reach areas. They are ideal for these tasks because they conform to spaces and fill in the nooks and crannies. They can also be sprayed in place, usually with water-based adhesives. Many types are made of recycled materials (a type of cellulose) and are relatively inexpensive.
General procedure for retrofits in walls (a rough sizing sketch follows the procedure):
Drill holes in wall with hole saw, taking firestops, plumbing pipes, and other obstructions into account. It may be desirable to drill two holes in each wall cavity/joist section, one at the bottom and a second at the top for both verification and top-off.
Pump loose fill into wall cavity, gradually pulling the hose up as the cavity fills.
Cap the holes in the wall.
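How much material a given cavity takes depends only on the cavity volume and the density to which the installer packs it. The following sketch illustrates that relationship for a single stud bay; the packed density and bag weight used as defaults are illustrative assumptions rather than manufacturer figures, and the function name and parameters are hypothetical – actual quantities should always be taken from the coverage chart printed on the product bag.

```python
# Rough sizing sketch for loose-fill in one closed wall cavity.
# All numeric defaults are illustrative assumptions, not product data.

def loose_fill_estimate(height_ft, width_in, depth_in,
                        packed_density_lb_per_ft3=3.5,  # assumed dense-pack target
                        bag_weight_lb=25.0):            # assumed net weight per bag
    """Return (cavity volume in cubic feet, estimated bags of material)."""
    volume_ft3 = height_ft * (width_in / 12.0) * (depth_in / 12.0)
    mass_lb = volume_ft3 * packed_density_lb_per_ft3
    return volume_ft3, mass_lb / bag_weight_lb

# Example: an 8 ft tall bay, about 14.5 in clear between studs, in a 3.5 in deep 2x4 wall.
volume, bags = loose_fill_estimate(8, 14.5, 3.5)
print(f"cavity volume = {volume:.1f} cubic ft, material = {bags:.2f} bags")
```

Multiplying the per-bay estimate by the number of bays gives a rough total for a wall; the same volume-times-density arithmetic underlies the coverage charts printed on loose-fill packaging.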
Advantages
Cellulose insulation is environmentally preferable (80% recycled newspaper) and safe. It has a high recycled content and less risk to the installer than fiberglass (loose fill or batts).
R-Value 3.4 – 3.8 (RSI-0.60 – 0.67) per inch (imperial units)
Loose fill insulation fills the wall cavity better than batts. Wet-spray applications typically seal even better than dry-spray.
Class I fire safety rating
No formaldehyde-based binders
Not made from petrochemicals nor chemicals with a high toxicity
Disadvantages
Weight may cause ceilings to sag if the material is very heavy. Professional installers know how to avoid this, and typical sheet rock is fine when dense-packed.
Will settle over time, losing some of its effectiveness. Unscrupulous contractors may "fluff" insulation using fewer bags than optimal for a desired R-value. Dry-spray (but not wet-spray) cellulose can settle 20% of its original volume. However, the expected settling is included in the stated R-Value. The dense-pack dry installation reduces settling and increases R-value.
R-values stated on packaging are based on laboratory conditions; air infiltration can significantly reduce effectiveness, particularly for fiberglass loose fill. Cellulose inhibits convection more effectively. In general, loose fill is seen as being better at reducing the presence of gaps in insulation than batts, as the cavity is sealed more carefully. Air infiltration through the insulating material itself is not studied well, but would be lower for wet-spray insulations such as wet-spray cellulose.
May absorb moisture.
Types
Rock and slag wool, also known as mineral wool or mineral fiber. Made from rock (basalt, diabase), iron ore blast furnace slag, or recycled glass. Nonflammable. More resistant to airflow than fiberglass. Clumps and loses effectiveness when moist or wet, but does not absorb much moisture, and regains effectiveness once dried. Older mineral wool can contain asbestos, but normally this is in trace amounts.
Cellulose insulation. Cellulose is denser and more resistant to air flow than fiberglass. Persistent moisture will weaken aluminium sulphate flame retardants in cellulose (which are sometimes used in the US). However, borate fire retardants (used primarily in Australia and commonly in the US) have been in use for more than 30 years and are not affected by moisture in any way. Dense-pack cellulose is highly resistant to air infiltration and is either installed into an open wall cavity using nets or temporary frames, or is retrofitted into finished walls. However, dense-pack cellulose blocks, but does not permanently seal, bypasses in the way a closed-cell spray foam would. Furthermore, as with batts and blankets, warm, moist air will still pass through, unless there is a continuous near-perfect vapor barrier.
Wet-spray cellulose insulation is similar to loose-fill insulation, but is applied with a small quantity of water to help the cellulose bind to the inside of open wall cavities, and to make the cellulose more resistant to settling. Spray application provides even better protection against air infiltration and improves wall rigidity. It also allows application on sloped walls, attics, and similar spaces. Wet-spray is best for new construction, as the wall must be allowed to dry completely before sealing with drywall (a moisture meter is recommended). Moist-spray (also called stabilized) cellulose uses less water to speed up drying time.
Fiberglass. Usually pink, yellow, or white. Loses effectiveness when moist or wet, but does not absorb much water. Nonflammable. See Health effects of fiberglass.
Natural insulations such as granulated cork, hemp fibres and grains, all of which can be treated with low-toxicity fire and insect retardants
Vermiculite. Generally gray or brown.
Perlite. Generally white or yellow.
Cotton, wool, hemp, corn cobs, strawdust and other harvested natural materials. Not common.
Granulated cork. Cork is as good an insulator as foam. It does not absorb water as it consists of closed cells. Resists fire. Used in Europe.
Most plant-based insulations such as wood chips, wood fiber, sawdust, redwood bark, hemlock fiber, balsa wood, hemp fiber, flax fiber, etc. are hygroscopic. Wood absorbs water, which reduces its effectiveness as a thermal insulator. In the presence of moisture, wood is susceptible to mold, mildew, and rot. Careful design of wall, roof and floor systems, as practised in Europe, avoids these problems, which are due to poor design.
Regulations
US regulatory standards for cellulose insulation
16 CFR Part 1209 (Consumer Products Safety Commission, or CPSC) – covers settled density, corrosiveness, critical radiant flux, and smoldering combustion.
ASTM Standard C-739 – loose-fill cellulose insulation – covers all factors of the CPSC regulation and five additional characteristics: R-value, starch content, moisture absorption, odor, and resistance to fungus growth.
ASTM Standard C-1149 – Industry standard for self-supported spray-applied cellulose insulation for exposed or wall cavity application – covers density, R-value, surface burning, adhesive strength, smoldering combustion, fungi resistance, corrosion, moisture vapor absorption, odor, flame resistance permanency (no test exists for this characteristic), substrate deflection (for exposed application products), and air erosion (for exposed application products).
16 CFR Part 460 – (Federal Trade Commission regulation) commonly known as the "R-Value Rule," intended to eliminate misleading insulation marketing claims and ensure publication of accurate R-Value and coverage data.
Aerogels
Skylights, solariums and other special applications may use aerogels, a high-performance, low-density material. Silica aerogel has the lowest thermal conductivity of any known substance (short of a vacuum), and carbon aerogel absorbs infrared radiation (i.e., heat from sun rays) while still allowing daylight to enter. The combination of silica and carbon aerogel gives the best insulating properties of any known material, approximately twice the insulative protection of the next best insulative material, closed-cell foam.
Straw bales
The use of highly compressed straw bales as insulation, though uncommon, is gaining popularity in experimental building projects for the high R-value and low cost of a thick wall made of straw. "Research by Joe McCabe at the Univ. of Arizona found R-value for both wheat and rice bales was about R-2.4 (RSI-0.42) per inch with the grain, and R-3 (RSI-0.53) per inch across the grain. A 23" wide 3 string bale laid flat = R-54.7 (RSI-9.64), laid on edge (16" wide) = R-42.8 (RSI-7.54). For 2 string bales laid flat (18" wide) = R-42.8 (RSI-7.54), and on edge (14" wide) = R-32.1 (RSI-5.66)" (Steen et al.: The Straw Bale House, 1994). Using a straw bale in-fill sandwich roof greatly increases the R value. This compares very favorably with the R-19 (RSI-3.35) of a conventional 2 x 6 insulated wall. When using straw bales for construction, the bales must be tightly-packed and allowed to dry out sufficiently. Any air gaps or moisture can drastically reduce the insulating effectiveness.
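Working back from the quoted whole-bale figures is simple arithmetic on the numbers above (a restatement of the quoted data, not an additional measurement): the wall R-value is approximately the per-inch value multiplied by the bale width, $R \approx r_{\text{per inch}} \times t$, so $54.7 / 23\ \text{in} \approx 2.4$ per inch for a three-string bale laid flat and $42.8 / 16\ \text{in} \approx 2.7$ per inch for one laid on edge, consistent with McCabe's per-inch figures.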
Reflective insulation and radiant barriers
Reflective insulation and radiant barriers reduce the radiation of heat to or from the surface of a material. Radiant barriers will reflect radiant energy. A radiant barrier by itself will not affect heat conducted through the material by direct contact or heat transferred by moist air rising or convection. For this reason, trying to associate R-values with radiant barriers is difficult and inappropriate. The R-value test measures heat transfer through the material, not to or from its surface. There is no standard test designed to measure the reflection of radiated heat energy alone. Radiated heat is a significant means of heat transfer; the sun's heat arrives by radiating through space and not by conduction or convection. At night the same phenomenon operates in reverse: the building radiates its heat outward to the colder surroundings. Radiant barriers prevent radiant heat transfer equally in both directions. However, heat flow to and from surfaces also occurs via convection, which in some geometries is different in different directions.
Reflective aluminum foil is the most common material used as a radiant barrier. It has no significant mass to absorb and retain heat. It also has very low emittance values "E-values" (typically 0.03 compared to 0.90 for most bulk insulation) which significantly reduces heat transfer by radiation.
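The size of this effect can be illustrated with the standard grey-body relation for radiant exchange between two large parallel surfaces (an idealised textbook case used here only for illustration, not a rating method for real assemblies):

$q = \dfrac{\sigma \left(T_1^4 - T_2^4\right)}{1/\varepsilon_1 + 1/\varepsilon_2 - 1}$

With two ordinary building surfaces ($\varepsilon_1 = \varepsilon_2 \approx 0.90$) the denominator is about 1.2, whereas facing one surface with foil ($\varepsilon \approx 0.03$) raises it to about 33, cutting the radiant component of the heat transfer by roughly a factor of 25–30 at the same temperatures.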
Types of radiant barriers
Foil or "reflective foil laminate"s (RFL).
Foil-faced polyurethane or foil-faced polyisocyanurate panels.
Foil-faced polystyrene. This laminated, high density EPS is more flexible than rigid panels, works as a vapor barrier, and works as a thermal break. Uses include the underside of roof sheathing, ceilings, and on walls. For best results, this should not be used as a cavity fill type insulation.
Foil-backed bubble pack. This is thin, more flexible than rigid panels, works as a vapor barrier, and resembles plastic bubble wrap with aluminum foil on both sides. Often used on cold pipes, cold ducts, and the underside of roof sheathing.
Light-colored roof shingles and reflective paint. Often called cool roofs, these help to keep attics cooler in the summer and in hot climates. To maximize radiative cooling at night, they are often chosen to have high thermal emissivity, whereas their low emissivity for the solar spectrum reflects heat during the day.
Metal roofs; e.g., aluminum or copper.
Radiant barriers can function as vapor barriers, serving both purposes with one product.
Materials with one shiny side (such as foil-faced polystyrene) must be positioned with the shiny side facing an air space to be effective. An aluminum foil radiant barrier can be placed either way – the shiny side is created by the rolling mill during the manufacturing process and does not affect the reflectivity of the foil material. As radiant barriers work by reflecting infra-red energy, the aluminum foil would work just the same if both sides were dull.
Reflective Insulation
Insulation is a barrier material that resists or reduces the transfer of substances (water, vapor, etc.) or energy (sound, heat, electricity, etc.) from one side to the other.
Heat (thermal) insulation is a barrier material that resists, blocks, or reflects heat energy – one or more of conduction, convection and radiation – transferring from one side to the other.
Reflective insulation is thermal insulation that reflects radiant heat transfer from one side to the other, owing to its reflective, low-emittance surface.
Thermal insulation is often equated with bulk (mass or batt) insulation, which resists conductive heat transfer and is rated by an R-value, but this is only one kind of thermal insulation.
Materials that reflect radiant heat, even with negligible R-value, should therefore also be classified as thermal insulation.
Thus reflective insulation and radiant barrier describe the same class of product.
Advantages
Very effective in warmer climates
No change in thermal performance over time due to compaction, disintegration or moisture absorption
Thin sheets take up less room than bulk insulation
Can act as a vapor barrier
Non-toxic/non-carcinogenic
Will not mold or mildew
Radon retarder, will limit radon penetration through the floor
Disadvantages
Must be combined with other types of insulation in very cold climates
May result in an electrical safety hazard where the foil comes into contact with faulty electrical wiring
Hazardous and discontinued insulation
Certain forms of insulation used in the past are now no longer used because of recognized health risks.
Urea-formaldehyde foam (UFFI) and panels
Urea-formaldehyde insulation releases poisonous formaldehyde gas, causing indoor air quality problems. The chemical bond between the urea and formaldehyde is weak, resulting in degradation of the foam cells and emission of toxic formaldehyde gas into the home over time. Furthermore, some manufacturers used excess formaldehyde to ensure chemical bonding of all of the urea. Any leftover formaldehyde would escape after the mixing. Most states outlawed it in the early 1980s after dangers to building occupants were discovered. However emissions are highest when the urea-formaldehyde is new and decrease over time, so houses that have had urea-formaldehyde within their walls for years or decades do not require remediation.
UFFI provides little mechanical strength, as the material is weak and brittle. Before its risks were recognized, it was used because it was a cheap, effective insulator with a high R-value and its open-cell structure was a good acoustic insulator. Though it absorbed moisture easily, it regained effectiveness as an insulator when dried.
Asbestos
Asbestos is a mineral fiber that occurs in rock and soil that has traditionally been used as an insulation material in many homes and buildings. It is fireproof, a good thermal and electrical insulator, and resistant to chemical attack and wear. It has also been found that asbestos can cause cancer when in friable form (that is, when likely to release fibers into the air – when broken, jagged, shredded, or scuffed).
When found in the home, asbestos often resembles grayish-white corrugated cardboard coated with cloth or canvas, usually held in place around pipes and ducts with metal straps. Things that typically might contain asbestos:
Boiler and furnace insulation.
Heating duct wrapping.
Pipe insulation ("lagging").
Ducting and transite pipes within slabs.
Acoustic ceilings.
Textured materials.
Resilient flooring.
Blown-in insulation.
Roofing materials and felts.
Health and safety issues
Spray polyurethane foam (SPF)
All polyurethane foams are composed of petrochemicals. Foam insulation often uses hazardous chemicals with high human toxicity, such as isocyanates, benzene and toluene. The foaming agents no longer use ozone-depleting substances. Personal Protective Equipment is required for all people in the area being sprayed to eliminate exposure to isocyanates which constitute about 50% of the foam raw material.
Fiberglass
Fiberglass is the most common residential insulating material, and is usually applied as batts of insulation, pressed between studs. Health and safety issues include potential cancer risk from exposure to glass fibers, formaldehyde off-gassing from the backing/resin, use of petrochemicals in the resin, and the environmental health aspects of the production process. Green building practices shun Fiberglass insulation.
The World Health Organization has declared fiber glass insulation as potentially carcinogenic (WHO, 1998). In October 2001, an international expert review by the International Agency for Research on Cancer (IARC) re-evaluated the 1988 IARC assessment of glass fibers and removed glass wools from its list of possible carcinogens by downgrading the classification of these fibers from Group 2B (possible carcinogen) to Group 3 (not classifiable as to carcinogenicity in humans). All fiber glass wools that are commonly used for thermal and acoustical insulation are included in this classification. IARC noted specifically: "Epidemiologic studies published during the 15 years since the previous IARC Monographs review of these fibers in 1988 provide no evidence of increased risks of lung cancer or mesothelioma (cancer of the lining of the body cavities) from occupational exposures during manufacture of these materials, and inadequate evidence overall of any cancer risk."
The IARC downgrade is consistent with the conclusion reached by the US National Academy of Sciences, which in 2000 found "no significant association between fiber exposure and lung cancer or nonmalignant respiratory disease in the MVF [man-made vitreous fiber] manufacturing environment." However, manufacturers continue to provide cancer risk warning labels on their products, apparently as indemnification against claims.
However, the literature should be considered carefully before determining that the risks should be disregarded. The OSHA chemical sampling page provides a summary of the risks, as does the NIOSH Pocket Guide.
Miraflex is a newer type of fiberglass batt that has curly fibers that are less itchy and create less dust. Fiberglass products factory-wrapped in plastic or fabric are also available.
Fiberglass is energy intensive to manufacture. Fiberglass fibers are bound into batts using binders that can slowly release formaldehyde over many years. The industry is mitigating this issue by switching to binder materials that do not contain formaldehyde; some manufacturers offer agriculturally based binder resins made from soybean oil. Formaldehyde-free batts and batts made with varying amounts of recycled glass (some approaching 50% post-consumer recycled content) are available.
Loose-fill cellulose
Cellulose is 100% natural and 75–85% of it is made from recycled newsprint. Health issues (if any) appear to be minor, and most concerns around the flame retardants and mold potential seem to be misrepresentations.
Cellulose is classified by OSHA as a dust nuisance during installation, and the use of a dust mask is recommended.
Cellulose is treated with a flame retardant and insect repellent, usually boric acid and sometimes borax to resist insects and rodents. To humans, boric acid has a toxicity comparable to table salt.
Mold has been seen as a potential concern. However, according to the Cellulose Manufacturer's Association, "One thing that has not contributed to mold problems is the growing popularity of cellulose insulation among knowledgeable home owners who are interested in sustainable building practices and energy conservation. Mycology experts (mycology is the study of mold) are often quoted as saying: “Mold grows on cellulose.” They are referring to cellulose the generic material that forms the cell walls of all plants, not to cellulose insulation. Unfortunately, all too often this statement is taken to mean that cellulose insulation is exceptionally susceptible to mold contamination. In fact, due to its favorable moisture control characteristics and other factors associated with the manufacturing process relatively few cases of significant mold growth on cellulose insulation have been reported. All the widely publicized incidents of serious mold contamination of insulation have involved fiber insulation materials other than cellulose.".
Moisture is always a concern for homes, and the wet-spray application of cellulose may not be a good choice in particularly wet climates unless the insulation can be verified to be dry before drywall is added. In very wet climates, the use of a moisture meter helps ensure proper installation and avoid installation-related mold issues (almost any insulation that becomes and remains wet can later cause mold problems). Dry-spray application is another option for very wet climates and allows faster installation, though wet-spray cellulose achieves a somewhat higher R-value and can increase wall rigidity.
US Health and Safety Partnership Program
In May 1999, the North American Insulation Manufacturers Association began implementing a comprehensive voluntary work practice partnership with the US Occupational Safety and Health Administration (OSHA). The program, known as the Health and Safety Partnership Program (HSPP), promotes the safe handling and use of insulation materials and incorporates education and training for the manufacture, fabrication, installation and removal of fiber glass, rock wool and slag wool insulation products. (See health effects of fiberglass.) Further information on fiber glass, rock and slag wool insulation, and the HSPP, is available from the North American Insulation Manufacturers Association (NAIMA).
See also
Condensation
Enovate
Low-energy building
Superinsulation
Thermal mass
Quadruple glazing
Weatherization
Notes
References
U.S. Environmental Protection Agency and the US Department of Energy's Office of Building Technologies.
Loose-Fill Insulations, DOE/GO-10095-060, FS 140, Energy Efficiency and Renewable Energy Clearinghouse (EREC), May 1995.
Insulation Fact Sheet, US Department of Energy, update to be published 1996. Also available from EREC.
Lowe, Allen. "Insulation Update," The Southface Journal, 1995, No. 3. Southface Energy Institute, Atlanta, Georgia, US
ICAA Directory of Professional Insulation Contractors, 1996, and A Plan to Stop Fluffing and Cheating of Loose-Fill Insulation in Attics, Insulation Contractors Association of America, 1321 Duke St., #303, Alexandria, VA 22314, (703)739-0356.
US DOE Consumer Energy Information.
Insulation Information for Nebraska Homeowners, NF 91–40.
Article in Daily Freeman, Thursday, 8 September 2005, Kingston, New York, US
TM 5-852-6 AFR 88–19, Volume 6 (Army Corps of Engineers publication).
CenterPoint Energy Customer Relations.
US DOE publication, Residential Insulation
US DOE publication, Energy Efficient Windows
US EPA publication on home sealing
DOE/CE 2002
University of North Carolina at Chapel Hill
Alaska Science Forum, May 7, 1981, Rigid Insulation, Article #484, by T. Neil Davis, provided as a public service by the Geophysical Institute, University of Alaska Fairbanks, in cooperation with the UAF research community.
Guide raisonné de la construction écologique (Guide to products /fabricants of green building materials mainly in France but also surrounding countries), Batir-Sain 2004
Insulators
Building materials
| Building insulation material | [
"Physics",
"Engineering"
] | 9,812 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
8,855,086 | https://en.wikipedia.org/wiki/HELP%20assay | The HpaII tiny fragment Enrichment by Ligation-mediated PCR assay (HELP assay) is one of several techniques used for determining whether DNA has been methylated. The technique can be adapted to examine DNA methylation within and around individual genes, or it can be expanded to examine methylation across an entire genome.
The technique relies upon the properties of two restriction enzymes: HpaII and MspI. The HELP assay compares representations generated by HpaII and by MspI digestion of the genome, followed by ligation-mediated PCR. Because HpaII only digests 5'-CCGG-3' sites when the cytosine in the central CG dinucleotide is unmethylated, the HpaII representation is enriched for the hypomethylated fraction of the genome. The MspI representation serves as a control for copy number changes and PCR amplification difficulties.
It was recently shown that cytosine methylation patterns tend to be concordant over short (~1 kb) regions. The patterns represented by the HpaII sites therefore tend to be representative of other CG dinucleotides locally.
The analysis of HELP data involves quality analysis and normalization. An analytical pipeline written in the R programming language was recently published to allow HELP data processing.
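The published analytical pipeline is written in R; purely as an illustration of the underlying comparison (not the published pipeline), the sketch below computes per-locus log-ratios of hypothetical HpaII and MspI signal, the quantity on which HELP methylation calls are ultimately based. All names, intensities, and the classification cutoff are illustrative assumptions.

```python
import math

def help_log_ratios(hpaii, mspi, pseudocount=1.0):
    """Per-locus log2(HpaII/MspI) signal ratios.

    High ratios suggest hypomethylated loci (HpaII cuts freely);
    low ratios suggest methylated loci (HpaII is blocked, MspI is not).
    The MspI channel acts as the copy-number/PCR control.
    """
    ratios = {}
    for locus, h in hpaii.items():
        m = mspi.get(locus)
        if m is None:
            continue  # locus not covered in the control channel
        ratios[locus] = math.log2((h + pseudocount) / (m + pseudocount))
    return ratios

# Hypothetical toy intensities for three loci
hpaii_signal = {"locus_A": 950.0, "locus_B": 40.0, "locus_C": 510.0}
mspi_signal  = {"locus_A": 900.0, "locus_B": 880.0, "locus_C": 500.0}

for locus, r in help_log_ratios(hpaii_signal, mspi_signal).items():
    # The -1.0 cutoff is purely illustrative; real analyses use normalization
    # and data-driven thresholds or mixture models.
    status = "hypomethylated" if r > -1.0 else "methylated"
    print(f"{locus}: log2 ratio = {r:+.2f} -> likely {status}")
```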
References
External links
The protocol for the HELP assay
Microbiology | HELP assay | [
"Chemistry",
"Biology"
] | 287 | [
"Microbiology",
"Microscopy"
] |
8,855,209 | https://en.wikipedia.org/wiki/Pillars%20of%20Creation | Pillars of Creation is a photograph taken by the Hubble Space Telescope of elephant trunks of interstellar gas and dust in the Eagle Nebula, in the Serpens constellation, roughly 7,000 light-years from Earth. These elephant trunks had been discovered by John Charles Duncan in 1920 on a plate made with the Mount Wilson Observatory 60-inch telescope.
They are so named because the gas and dust are in the process of creating new stars, while also being eroded by the light from nearby stars that have recently formed.
Taken on April 1, 1995, it was named one of the top ten photographs from Hubble by Space.com. The astronomers responsible for the photo were Jeff Hester and Paul Scowen from Arizona State University. The region was rephotographed by ESA's Herschel Space Observatory in 2011, again by Hubble in 2014 with a newer camera, and the James Webb Space Telescope in 2022.
Observations by the Chandra X-ray Observatory (AXAF) in 2001, released in 2007, did not find many X-ray sources in the towers themselves, but did detect sources at various X-ray energy levels from young stars in the surrounding area.
The image is noted for its global cultural impact: it is widely considered the most iconic picture taken by the Hubble Space Telescope, and National Geographic noted on its 20th anniversary that the image had been featured on everything from "t-shirts to coffee-mugs".
Name
The name is based on a phrase used by Charles Spurgeon in his 1857 sermon "The Condescension of Christ":
In calling the Hubble's spectacular new image of the Eagle Nebula the Pillars of Creation, NASA scientists were tapping a rich symbolic tradition with centuries of meaning, bringing it into the modern age. As much as we associate pillars with the classical temples of Greece and Rome, the concept of the pillars of creation, the very foundations that hold up the world and all that is in it, reverberates significantly in the Christian tradition. When William Jennings Bryan published The World's Famous Orations in 1906, he included an 1857 sermon by London pastor Charles Haddon Spurgeon titled "The Condescension of Christ". In it, Spurgeon uses the phrase to convey not only the physical world but also the force that keeps it all together, emanating from the divine: "And now wonder, ye angels," Spurgeon says of the birth of Christ, "the Infinite has become an infant; he, upon whose shoulders the universe doth hang, hangs at his mother's breast; He who created all things, and bears up the pillars of creation, hath now become so weak, that He must be carried by a woman!"
Composition
The pillars are composed of cool molecular hydrogen and dust that are being eroded by photoevaporation from the ultraviolet light of relatively close and hot stars. The leftmost pillar is about four light-years in length. The finger-like protrusions at the top of the clouds are larger than the Solar System, and are made visible by the shadows of evaporating gaseous globules (EGGs), which shield the gas behind them from intense UV flux. EGGs are themselves incubators of new stars. The stars then emerge from the EGGs, which then are evaporated.
Theorized destruction
Images taken with the Spitzer Space Telescope uncovered a cloud of dust in the vicinity of the Pillars of Creation that hypothetically could be a shock wave produced by a supernova. The appearance of the cloud suggests the supernova shockwave would have destroyed the Pillars of Creation 6,000 years ago. Given the distance of roughly 7,000 light-years between Earth and the Pillars of Creation, this would mean that they have actually already been destroyed, but because light travels at a finite speed, this destruction should be visible from Earth in about 1,000 years.
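The timing argument in the preceding paragraph can be written out explicitly, using the rounded figures given in the text (a back-of-the-envelope restatement, not an additional source):

```latex
\[
t_{\text{visible}} \;\approx\; \frac{d}{c} - t_{\text{destruction}}
\;\approx\; 7000\,\mathrm{yr} - 6000\,\mathrm{yr}
\;\approx\; 1000\,\mathrm{yr}
\]
```

That is, light from the hypothesized destruction event, having left the nebula roughly 6,000 years ago, would need about 1,000 more years to cover the remaining distance to Earth.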
This interpretation of the hot dust has been disputed by an astronomer uninvolved in the Spitzer observations, who argues that a supernova should have resulted in stronger radio and x-ray radiation than has been observed, and that winds from massive stars could instead have heated the dust. If this is the case, the Pillars of Creation will undergo a more gradual erosion.
Photographs
Original Hubble Space Telescope photo
Hubble's photo of the pillars is composed of 32 different images from four CCD sensors in the Wide Field and Planetary Camera 2 on board Hubble. The photograph was made with light emitted by different elements in the cloud, each assigned a different color in the composite image: green for hydrogen, red for singly ionized sulfur, and blue for doubly ionized oxygen atoms.
The "stair-shaped" missing part of the picture at the top right corner originates from the fact that the camera for the top-right quadrant has a magnified view; when its images are scaled down to match the other three cameras, there is necessarily a gap in the rest of that quadrant. This effect is also present on other WFPC2 images, and can be displayed at any corner depending on how the image has been re-oriented for publication.
The Wide Field and Planetary Camera 2 was replaced by the Wide Field Camera 3 in 2009 during the Space Shuttle servicing mission STS-125; the older camera was returned to Earth and is displayed in a museum.
Herschel's photo
In 2010 Herschel Space Observatory captured a new image of the Pillars of Creation in far-infrared wavelengths, which allows astronomers to look inside the pillars and structures in the region, and come to a much fuller understanding of the creative and destructive forces inside the Eagle Nebula.
Revisits
In celebration of the 25th anniversary since the launch of the Hubble Space Telescope, astronomers assembled a larger and higher-resolution photograph of the Pillars of Creation which was unveiled in January 2015 at the American Astronomical Society meeting in Seattle. The image was photographed by the Hubble Telescope's Wide Field Camera 3, installed in 2009, in visible light. An infrared image was also taken. The re-imaging has a wider view that shows more of the base of the nebulous columns.
In October 2022, it was unveiled that the James Webb Space Telescope captured a new image of the Pillars of Creation utilizing the NIRCam aboard the spacecraft. The image was able to capture ejections from the formation of young stars still in development in great detail, as seen by the red spots near the edges of the pillars.
The most recent visualization of the Pillars of Creation was released by NASA in June 2024. It is a 3D rendering created by images from both the James Webb Space Telescope and the Hubble Space Telescope. NASA described it as "the most comprehensive and detailed multiwavelength movie yet of this star-birthing region."
See also
List of photographs considered the most important
References
External links
NASA on making the infrared imaging
1995 in science
Astronomy image articles
Carina–Sagittarius Arm
Hubble Space Telescope images
Serpens
Sky regions
Articles containing video clips
1995 works
1995 in art
1990s photographs
Color photographs | Pillars of Creation | [
"Astronomy"
] | 1,432 | [
"Serpens",
"Astronomy image articles",
"Works about astronomy",
"Constellations",
"Sky regions"
] |
8,855,574 | https://en.wikipedia.org/wiki/Thermal%20bridge | A thermal bridge, also called a cold bridge, heat bridge, or thermal bypass, is an area or component of an object which has higher thermal conductivity than the surrounding materials, creating a path of least resistance for heat transfer. Thermal bridges result in an overall reduction in thermal resistance of the object. The term is frequently discussed in the context of a building's thermal envelope where thermal bridges result in heat transfer into or out of conditioned space.
Thermal bridges in buildings may impact the amount of energy required to heat and cool a space, cause condensation (moisture) within the building envelope, and result in thermal discomfort. In colder climates (such as the United Kingdom), thermal bridges can result in additional heat losses and require additional energy to mitigate.
There are strategies to reduce or prevent thermal bridging, such as limiting the number of building members that span from unconditioned to conditioned space and applying continuous insulation materials to create thermal breaks.
Concept
Heat transfer occurs through three mechanisms: convection, radiation, and conduction. A thermal bridge is an example of heat transfer through conduction. The rate of heat transfer depends on the thermal conductivity of the material and the temperature difference experienced on either side of the thermal bridge. When a temperature difference is present, heat flow will follow the path of least resistance through the material with the highest thermal conductivity and lowest thermal resistance; this path is a thermal bridge. Thermal bridging describes a situation in a building where there is a direct connection between the outside and inside through one or more elements that possess a higher thermal conductivity than the rest of the envelope of the building.
Identifying Thermal Bridges
Surveying buildings for thermal bridges is performed using passive infrared thermography (IRT) according to the International Organization for Standardization (ISO). Infrared thermography of buildings can reveal thermal signatures that indicate heat leaks. IRT detects thermal abnormalities that are linked to the movement of fluids through building elements, highlighting variations in the thermal properties of the materials that produce correspondingly large changes in surface temperature. The drop shadow effect, a situation in which the surrounding environment casts a shadow on the facade of the building, can reduce measurement accuracy through inconsistent facade sun exposure. An alternative analysis method, Iterative Filtering (IF), can be used to address this problem.
In all thermographic building inspections, interpretation of the thermal image is performed by a human operator, which involves a high level of subjectivity and depends on the expertise of the operator. Automated analysis approaches, such as laser scanning technologies, can provide thermal imaging on three-dimensional CAD model surfaces and add metric information to thermographic analyses. Surface temperature data in 3D models can identify and measure thermal irregularities such as thermal bridges and insulation leaks. Thermal imaging can also be acquired through the use of unmanned aerial vehicles (UAVs), fusing thermal data from multiple cameras and platforms. The UAV uses an infrared camera to generate a thermal field image of recorded temperature values, where every pixel represents radiative energy emitted by the surface of the building.
Thermal Bridging in Construction
Frequently, thermal bridging is used in reference to a building’s thermal envelope, which is a layer of the building enclosure system that resists heat flow between the interior conditioned environment and the exterior unconditioned environment. Heat will transfer through a building’s thermal envelope at different rates depending on the materials present throughout the envelope. Heat transfer will be greater at thermal bridge locations than where insulation exists because there is less thermal resistance. In the winter, when exterior temperature is typically lower than interior temperature, heat flows outward and will flow at greater rates through thermal bridges. At a thermal bridge location, the surface temperature on the inside of the building envelope will be lower than the surrounding area. In the summer, when the exterior temperature is typically higher than the interior temperature, heat flows inward, and at greater rates through thermal bridges. This causes winter heat losses and summer heat gains for conditioned spaces in buildings.
Despite insulation requirements specified by various national regulations, thermal bridging in a building's envelope remains a weak spot in the construction industry. Moreover, in many countries building design practices implement only part of the insulation measures foreseen by regulations. As a result, thermal losses are greater in practice than anticipated during the design stage.
An assembly such as an exterior wall or insulated ceiling is generally classified by a U-factor, in W/m2·K, that reflects the overall rate of heat transfer per unit area for all the materials within an assembly, not just the insulation layer. Heat transfer via thermal bridges reduces the overall thermal resistance of an assembly, resulting in an increased U-factor.
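As a rough illustration of how a conductive bridge raises an assembly's U-factor, the sketch below applies a simple area-weighted (parallel-path) approximation. The layer thickness, conductivities, and framing fraction are illustrative assumptions rather than values from any standard, and surface films and multi-dimensional edge effects are ignored.

```python
def layer_u(thickness_m, conductivity_w_mk):
    """U-value of a single homogeneous layer (W/m^2.K), ignoring surface films."""
    return conductivity_w_mk / thickness_m

def parallel_path_u(paths):
    """Area-weighted U-factor for parallel heat-flow paths.

    `paths` is a list of (area_fraction, u_value) tuples; fractions sum to 1.
    """
    return sum(frac * u for frac, u in paths)

# Illustrative 150 mm wall: insulation (k ~ 0.04 W/m.K) bridged over 15% of its
# area by timber framing (assumed k ~ 0.12 W/m.K) spanning the full depth.
u_insulated = layer_u(0.150, 0.04)   # ~0.27 W/m^2.K
u_framing   = layer_u(0.150, 0.12)   # ~0.80 W/m^2.K

u_clear   = parallel_path_u([(1.00, u_insulated)])
u_bridged = parallel_path_u([(0.85, u_insulated), (0.15, u_framing)])

print(f"Clear wall U ~ {u_clear:.2f} W/m2K; bridged wall U ~ {u_bridged:.2f} W/m2K")
```

With these assumed numbers the framing raises the overall U-factor by roughly 30 percent, even though it occupies only 15 percent of the wall area.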
Thermal bridges can occur at several locations within a building envelope; most commonly, they occur at junctions between two or more building elements. Common locations include:
Floor-to-wall or balcony-to-wall junctions, including slab-on-grade and concrete balconies or outdoor patios that extend the floor slab through the building envelope
Roof/Ceiling-to-wall junctions, especially where full ceiling insulation depths may not be achieved
Window-to-wall junctions
Door-to-wall junctions
Wall-to-wall junctions
Wood, steel or concrete members, such as studs and joists, incorporated in exterior wall, ceiling, or roof construction
Recessed luminaries that penetrate insulated ceilings
Windows and doors, especially frame components
Areas with gaps in or poorly installed insulation
Metal ties in masonry cavity walls
Structural elements remain a weak point in construction, commonly leading to thermal bridges that result in high heat loss and low surface temperatures in a room.
Masonry Buildings
While thermal bridges exist in various types of building enclosures, masonry walls experience significantly increased U-factors caused by thermal bridges. Comparing thermal conductivities between different building materials allows for assessment of performance relative to other design options. Brick materials, which are usually used for facade enclosures, typically have higher thermal conductivities than timber, depending on the brick density and wood type. Concrete floors and edge beams, which are common in masonry buildings, frequently act as thermal bridges, especially at the corners. Depending on the physical makeup of the concrete, its thermal conductivity can be greater than that of brick materials. In addition to heat transfer, if the indoor environment is not adequately vented, thermal bridging may cause the brick material to absorb rainwater and humidity into the wall, which can result in mold growth and deterioration of building envelope material.
Curtain Wall
Similar to masonry walls, curtain walls can experience significantly increased U-factors due to thermal bridging. Curtain wall frames are often constructed with highly conductive aluminum, which has a typical thermal conductivity above 200 W/m·K. In comparison, wood framing members are far less conductive, with typical conductivities on the order of 0.1 to 0.2 W/m·K. The aluminum frame for most curtain wall constructions extends from the exterior of the building through to the interior, creating thermal bridges.
Impacts of Thermal Bridging
Thermal bridging can result in increased energy required to heat or cool a conditioned space due to winter heat loss and summer heat gain. At interior locations near thermal bridges, occupants may experience thermal discomfort due to the difference in temperature. Additionally, when the temperature difference between indoor and outdoor space is large and there is warm and humid air indoors, such as the conditions experienced in the winter, there is a risk of condensation in the building envelope due to the cooler temperature on the interior surface at thermal bridge locations. Condensation can ultimately result in mold growth with consequent poor indoor air quality and insulation degradation, reducing the insulation performance and causing insulation to perform inconsistently throughout the thermal envelope.
Design Methods to Reduce Thermal Bridges
There are several methods that have been proven to reduce or eliminate thermal bridging depending on the cause, location, and the construction type. The objective of these methods is to either create a thermal break where a building component would span from exterior to interior otherwise, or to reduce the number of building components spanning from exterior to interior. These strategies include:
A continuous thermal insulation layer in the thermal envelope, such as with rigid foam board insulation
Lapping of insulation where direct continuity is not possible
Double and staggered wall assemblies
Structural Insulated Panels (SIPs) and Insulating Concrete Forms (ICFs)
Reducing framing factor by eliminating unnecessary framing members, such as implemented with advanced framing
Raised heel trusses at wall-to-roof junctions to increase insulation depth
Quality insulation installation without voids or compressed insulation
Installing double or triple pane windows with gas filler and low-emissivity coating
Installing windows with thermally broken frames made of low conductivity material
Analysis Methods and Challenges
Due to their significant impacts on heat transfer, correctly modeling the impacts of thermal bridges is important to estimate overall energy use. Thermal bridges are characterized by multi-dimensional heat transfer, and therefore they cannot be adequately approximated by steady-state one-dimensional (1D) models of calculation typically used to estimate the thermal performance of buildings in most building energy simulation tools. Steady state heat transfer models are based on simple heat flow where heat is driven by a temperature difference that does not fluctuate over time so that heat flow is always in one direction. This type of 1D model can substantially underestimate heat transfer through the envelope when thermal bridges are present, resulting in lower predicted building energy use.
The currently available solutions are to enable two-dimensional (2D) and three-dimensional (3D) heat transfer capabilities in modeling software or, more commonly, to use a method that translates multi-dimensional heat transfer into an equivalent 1D component to use in building simulation software. This latter method can be accomplished through the equivalent wall method in which a complex dynamic assembly, such as a wall with a thermal bridge, is represented by a 1D multi-layered assembly that has equivalent thermal characteristics.
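A minimal sketch of the general idea of folding a detailed result back into a 1D model is shown below: given a U-factor obtained from a multi-dimensional calculation, back-calculate the conductivity of a notional layer so that a layered 1D model reproduces the same steady-state U-factor. This captures only steady-state behaviour, whereas the published equivalent wall method also matches the assembly's dynamic thermal response; all numbers here are hypothetical.

```python
def effective_conductivity(u_detailed, thickness_m, r_other_layers=0.0):
    """Back-calculate the conductivity of a notional 1D layer so that a
    layered 1D model reproduces a U-factor from a detailed 2D/3D calculation.

    u_detailed      -- U-factor from the multi-dimensional analysis (W/m^2.K)
    thickness_m     -- thickness assigned to the notional layer (m)
    r_other_layers  -- combined resistance of the remaining 1D layers (m^2.K/W)
    """
    r_total = 1.0 / u_detailed          # total resistance the 1D model must show
    r_layer = r_total - r_other_layers  # resistance left for the notional layer
    if r_layer <= 0:
        raise ValueError("Detailed U-factor incompatible with the other layers")
    return thickness_m / r_layer

# Example: a detailed model of a bridged wall gives U = 0.35 W/m2K; cladding and
# linings contribute R = 0.30 m2K/W; the notional core layer is 150 mm thick.
k_eff = effective_conductivity(0.35, 0.150, r_other_layers=0.30)
print(f"Effective core conductivity ~ {k_eff:.3f} W/m.K")
```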
See also
Damp proofing
List of thermal conductivities
Thermal conduction
Building Science
Thermography
Heat Transfer
References
External links
Design Guide: Solutions to Prevent Thermal Bridging.
Manufactured Structural Thermal Breaks.
EU IEE SAVE Project ASIEPI: topic 'Thermal bridges' - An effective handling of thermal bridges in the EPBD context
Passivhaus Institute: Thermal Bridges in construction - how to avoid them
A bridge too far - ASHRAE Journal article on thermal bridging
International Building Code, 2009: Interior Environment
Online Energy2D simulation of thermal bridge (Java required)
What Defines Thermal Bridge Free Design
Building Envelope Thermal Bridging Guide
Insulators
Thermal protection
Building engineering
Building defects
Low-energy building | Thermal bridge | [
"Materials_science",
"Engineering"
] | 2,117 | [
"Building engineering",
"Civil engineering",
"Building defects",
"Mechanical failure",
"Architecture"
] |
8,855,773 | https://en.wikipedia.org/wiki/Allied%20technological%20cooperation%20during%20World%20War%20II | The Allies of World War II cooperated extensively in the development and manufacture of new and existing technologies to support military operations and intelligence gathering during the Second World War. The Allies cooperated in various ways, including through the American Lend-Lease scheme, hybrid weapons such as the Sherman Firefly, and the British Tube Alloys nuclear weapons research project, which was absorbed into the American-led Manhattan Project. Several technologies invented in Britain proved critical to the military and were widely manufactured by the Allies during the Second World War.
Tizard Mission
The origin of the cooperation stemmed from a 1940 visit by the Aeronautical Research Committee chairman Henry Tizard, during which Tizard arranged to transfer UK military technology to the US in the event that Hitler's planned invasion of the UK should succeed. Tizard led a British technical mission, known as the Tizard Mission, containing details and examples of British technological developments in fields such as radar, jet propulsion and also the early British research into the atomic bomb. One of the devices brought to the US by the mission, the cavity magnetron, was later described as "the most valuable cargo ever brought to our shores".
Small arms
Small arms began to be shared after the fall of France, most of the 'sharing' being one sided as America was not yet directly involved in the conflict and thus all the movement was from the United States to the United Kingdom. In the months following Operation Dynamo, as British manufacturers progressed in building replacements for the materiel lost by the British Army in France, the British government looked overseas for additional sources of equipment to assist in overcoming shortages and prepare for future offensives. The most extreme example of the shortages were found in the quickly improvised Local Defence Volunteers, later renamed the Home Guard, who were forced to train with broom handles and makeshift pikes using lengths of piping and old bayonets until weapons could be supplied.
In addition to those produced in Britain, small arms and ammunition were obtained from Commonwealth countries and also purchased from U.S. manufacturers until they were supplied under Lend-Lease beginning in 1941. The weapons obtained from the United States included the Tommy gun, M1911A1 pistol and the M1917 revolver produced by Colt and Smith & Wesson, all primarily produced in .45 ACP. The Home Guard received the Browning .30 machine gun, the M1918 .30 BAR and the P17 .30 Enfield rifle. M1917 Enfield rifles chambered for .303 British were also provided by the U.S., while all .30-caliber U.S. rifles, BARs and machine guns were chambered for .30-06 Springfield.
Later, the M1919 .30 machine gun and the M2HB .50 machine gun chambered in .50 BMG were provided by the U.S. for infantry and anti-aircraft use. Browning AN2 light machine guns in .303 British caliber were already in standard use on British aircraft beginning in the late 1930s.
Britain supplied small arms to the USSR, and the 9mm Sten submachine gun was supplied to Soviet partisan troops.
Artillery
The British made use of many American towed artillery pieces during the war, such as the M2 105 mm howitzers, M1A1 75 mm pack howitzers, 155 mm guns (Long Toms). These weapons were supplied under lend-lease or bought outright. Tank/tank destroyer guns used by the British included the 37 mm M5/M6 gun (General Stuart and General Grant/Lee tanks), 75 mm M2 gun (General Grant/Lee), 75 mm M3 gun (General Grant/Lee and General Sherman), 76 mm gun M1 (General Sherman) and 3-inch gun M7 (3-inch GMC M10).
The Americans in turn used a British artillery piece, the Ordnance QF 6-pounder 7 cwt anti-tank gun. The US realized at the start of the war that their own 37 mm gun M3 would soon be obsolete and thus they produced a license built version of the QF 6-pounder under the designation 57 mm gun M1.
Both 76 mm and 75 mm guns were mounted on tanks sent to the Soviets by the US, while the British tanks sent were armed with both the Ordnance QF 2-pounder and the Ordnance QF 6-pounder.
Another technology taken to the US, by Henry Tizard, for further development and mass production, was the (radio-frequency) proximity fuse. It was five times as effective as contact or timed fuzes and was devastating in naval use against Japanese aircraft and so effective against German ground troops that General George S. Patton said it "won the Battle of the Bulge for us."
Tanks and other vehicles
The medium tank M4 was used in all theatres of the Second World War. It had a versatile reliable design and was easy to produce, thus huge numbers were made and provided to both Britain and the USSR by the United States under Lend-Lease. Despite official opinions, the medium tank M4 was well liked by some Soviet tankers, while others called it the best tank for peacetime service. When Britain received the tank, it was given the designation Sherman, as part of the UK practice of naming its US-built tanks after American Civil War generals. Both the British and the Soviets re-armed their M4s with their own tank guns. The Soviets re-armed a small number with the standard 76 mm F-34 tank gun but so much 75 mm ammunition was supplied by the US that the conversions were not widespread. Unfortunately, the fairly short-barreled 75mm gun most Shermans came equipped with did not offer very good armor penetration even with specialty ammunition, especially against the then-new Panther and Tiger. However, the British 76.2mm (3-inch) Ordnance QF 17-pounder, one of the best anti-tank guns of the period could be fitted in the Sherman's turret with modifications to the gun, a new gun mantlet and welding a bustle to the turret rear; this modification was known as the Firefly. The combination of British and American weaponry proved desirable, although despite the United States building a few 17-pounder Fireflies from new, it never went into mass production and did not see action. The US had its own 76 mm calibre long-barrel gun for the Sherman. While it wasn't as good as the 17-pounder, it still had a much better chance of successfully engaging German heavy tanks especially at close range, offered consistent kill-power against more equally-matched opponents at all ranges, and didn't require major modification to fit like the 17-pounder did. The Firefly thus remained a British variant of the Sherman. The M10 tank destroyer was also up-gunned with the 17-pounder, creating the M10C tank destroyer, sometimes known as "Achilles". This was used in accordance with British tactical doctrine for tank destroyers, in that they were considered self-propelled anti-tank guns rather than aggressive 'tank hunters'. Used in this fashion, it proved an effective weapon.
The British also used the Sherman hull for two other variants: the Crab, a mine-flailing tank, and the DD (Duplex Drive) Sherman, an amphibious tank. A flotation screen gave buoyancy and two propellers powered by the tank's engine gave propulsion in the water. On reaching land the screens could be dropped and the tank could fight in the normal manner. The DD, another key example of combining technologies, was used by both British and American forces during Operation Overlord. The DD had impressed US General Dwight D. Eisenhower during demonstrations and was readily accepted by the Americans. The Americans did not accept the Sherman Crab, which could have assisted combat engineers with clearing mines under fire, protected by armour. Armoured recovery vehicles (ARVs) were also converted from Shermans by the British, as was the specialist BARV (Beach Armoured Recovery Vehicle), designed to push off landing craft and salvage vehicles which would otherwise have been lost.
The British supplied tanks to the USSR in the form of the Matilda, Valentine and Churchill infantry tanks. Soviet tank soldiers liked the Valentine for its reliability, cross country performance and low silhouette. The Soviet's opinion of the Matilda and Churchill was less favourable as a result of their weak 40-mm guns (without HE shells) and inability to operate in harsh rasputitsa, winter and offroad conditions.
Deliveries of M3 Half-tracks from the US to the Soviet Union were a significant benefit to mechanized Red Army units. Soviet industry produced few armoured personnel carriers, so Lend-Lease American vehicles were in great demand for fast movement of troops in front-line conditions. While M3s had only limited protection, common trucks had no protection at all. In addition, a large part of the Red Army truck fleet was American Studebakers, which were highly regarded by Soviet drivers. After the war, Soviet designers paid a lot of attention to create their own 6x6 army truck and the Studebaker was the template for this development.
In 1942, a T-34 and a KV-1 tank were sent by the Soviet Union to the US where they were evaluated at the Aberdeen Proving Ground. Another T-34 was sent to the British.
Aircraft
Britain supplied Hawker Hurricanes to the Soviet Union early in the Great Patriotic War to help equip the Soviet Air Force against the then technologically superior Luftwaffe. British RAF engineer Frank Whittle travelled to the US in 1942 to help General Electric start jet engine production.
The American P-51 Mustang was originally designed to a British specification for use by the Royal Air Force and entered service with them in 1942, and later versions were built with a Rolls-Royce Merlin aero-engine. This engine was being produced in the United States by Packard as the Packard Merlin. In addition to the British making use of American planes the US also made use of some Supermarine Spitfires both in escorting USAAF 8th Air Force bombers in Europe as well as being the primary fighter of the 12th Air Force in North Africa. In addition Bristol Beaufighter served as night fighters in the Mediterranean, and two squadrons of de Havilland Mosquito equipped the 8th Air Force as its primary photo reconnaissance and chaff deployment aircraft.
The United States supplied several aircraft types to both the Royal Navy and RAF - all three of the U.S. Navy's primary fighters during the war years, the Wildcat, Corsair (with the RN assisting the Americans with preparing the Corsair for U.S. naval carrier service by 1944), and Hellcat also served with the RN's Fleet Air Arm, with the Royal Air Force using a wide range of USAAF types. A wide range of American aircraft designs also went to the Soviet Union's VVS air arm through Lend-Lease, primarily fighters like the P-39 and P-63 used for aerial combat, along with attack and medium bombers like the A-20 and the B-25 being among the more prominent types, both bombers being well suited to the type of lower-altitude strike missions the Soviets had as a top priority.
Radar
The British demonstrated the cavity magnetron to the Americans at RCA and Bell Labs. It was 100 times as powerful as anything they had seen and enabled the development of airborne radar.
Nuclear weapons
By 1942, British nuclear weapons research had fallen behind that of the US and, unable to match US resources, the United Kingdom agreed to merge its work with the American effort. Around 20 British scientists and technical staff moved to America, along with their work, which had been carried out under the codename 'Tube Alloys'. The scientists joined the Manhattan Project at Los Alamos, New Mexico, where their work on uranium enrichment was instrumental in jump-starting the project. In addition, Britain was vital in sourcing raw materials for the project, both as the only source in the world of the nickel powder required to build gaseous diffusers and in providing uranium, both from a mine in the Belgian Congo and by contracting a secondary supply from Sweden.
Code-breaking technology
Considerable information was transmitted from the UK to the US during and after WWII relating to code-breaking methods, the codes themselves, cryptanalyst visits, and mechanical and digital devices for speeding code-breaking. When the Atlantic convoys of war material from the US to the UK came under serious threat from U-boats, considerable encouragement and practical help was given by the US to accelerate the development of code-breaking machines. Subsequent co-operation led to significant success in Australia and the Far East in breaking encrypted Japanese messages.
Other technologies
Other technologies developed by the British and shared with the Americans and other Allies include ASDIC (sonar), the Bailey bridge, gyro gunsight, jet engine, Liberty ship, RDX, Rhino tank, Torpex, traveling-wave tube, proximity fuze.
Technologies developed by the Americans and shared with the British and Allies include the bazooka, LVT, DUKW, Fido (acoustic torpedo). Canada and the U.S. independently developed and shared the walkie-talkie.
Legacy
The Tizard Mission was the foundation for cooperation in scientific research at institutions within and across the United States, United Kingdom and Canada.
Many Norwegian scientists and technologists took part in British scientific research during the period when Germany occupied Norway between 1940 and 1945. This resulted in the Norwegian Defence Research Establishment, formed in 1946.
After the war ended, the US ended all nuclear co-operation with Britain. However, the demonstration of the British hydrogen bomb and the launch of Sputnik 1 by the Soviet Union, both in 1957, resulted in the US resuming the wartime co-operation and led to a Mutual Defence Agreement between the two nations in 1958. Under this agreement, American technology was adapted for British nuclear weapons and various fissile materials were exchanged to resolve each other's specific shortages.
Cooperation between British intelligence agencies and the United States Intelligence Community in the post-war period became the cornerstone of Western intelligence gathering and the "Special Relationship" between the United Kingdom and the United States.
Many military inventions during the war found civilian uses.
See also
British Purchasing Commission
List of World War II electronic warfare equipment
Operations research
Radiation Laboratory
Telecommunications Research Establishment
References
Military equipment of World War II
United Kingdom–United States military relations
Soviet Union–United States military relations
Soviet Union–United Kingdom military relations
Technological races
Science and technology during World War II
Allies of World War II | Allied technological cooperation during World War II | [
"Technology"
] | 2,931 | [
"Science and technology during World War II",
"Science and technology by war"
] |
8,855,979 | https://en.wikipedia.org/wiki/Nmrpipe | NMRPipe is a Nuclear Magnetic Resonance data processing program.
The project was preceded by other functionally similar programs but is, by and large, one of the most popular software packages for NMR data processing, owing in part to its efficiency (it exploits Unix pipes) and its ease of use (a large amount of processing logic is embedded in its individual functions).
NMRPipe consists of a series of "functions" which can be applied to a FID data file in any sequence, by using UNIX pipes.
Each individual function in NMRPipe has a specific task and a set of arguments which can be sent to configure its behavior.
See also
Comparison of NMR software
External links
NmrPipe website
nmrPipe on NMR wiki
Nuclear magnetic resonance software
Medical software | Nmrpipe | [
"Chemistry",
"Biology"
] | 162 | [
"Nuclear magnetic resonance",
"Nuclear magnetic resonance software",
"Medical software",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Medical technology"
] |
8,856,166 | https://en.wikipedia.org/wiki/Fine%20bubble%20diffusers | Fine bubble diffusers are a pollution control technology used to aerate wastewater for sewage treatment.
Description
Fine bubble diffusers produce a plethora of very small air bubbles which rise slowly from the floor of a wastewater treatment plant or sewage treatment plant aeration tank and provide substantial and efficient mass transfer of oxygen to the water. The oxygen, combined with the food source, sewage, allows the bacteria to produce enzymes which help break down the waste so that it can settle in the secondary clarifiers or be filtered by membranes. A fine bubble diffuser is commonly manufactured in various forms: tube, disc, plate, and dome.
Bubble size
The subject of bubble size is important because the aeration system in a wastewater or sewage treatment plant consumes an average of 50 to 70 percent of the energy of the entire plant. Increasing the oxygen transfer efficiency decreases the power the plant requires to provide the same quality of effluent water. Furthermore, fine bubble diffusers evenly spread out (often referred to as a 'grid arrangement') on the floor of a tank, provide the operator of the plant a great deal of operational flexibility. This can be used to create zones with high oxygen concentrations (oxic or aerobic), zones with minimal oxygen concentration (anoxic) and zones with no oxygen (anaerobic). This allows for more precise targeting and removal of specific contaminants.
The importance of achieving ever smaller bubble sizes has been a hotly debated subject in the industry, as ultra fine bubbles (micrometre size) are generally perceived to rise too slowly and provide too little "pumpage" to provide adequate mixing of sewage in an aeration tank. On the other hand, the industry standard "fine bubble" with a typical discharge diameter of 2 mm is probably larger than it needs to be for many plants. Average bubble diameters of 0.9 mm are now possible, using special polyurethane (PUR) membranes or recently developed EPDM membranes.
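One reason smaller bubbles transfer oxygen more efficiently is purely geometric: for spherical bubbles, the gas-liquid interfacial area per unit gas volume is 6/d, so halving the bubble diameter roughly doubles the contact area for the same volume of air. The sketch below compares the 2 mm and 0.9 mm diameters mentioned above for an assumed gas hold-up; the hold-up volume is an illustrative assumption, and the slower rise (longer contact time) of smaller bubbles adds a further benefit not modelled here.

```python
def specific_interfacial_area(diameter_m):
    """Surface area per unit gas volume for spherical bubbles: a = 6/d (m^2/m^3)."""
    return 6.0 / diameter_m

def total_interface_area(gas_volume_m3, diameter_m):
    """Total bubble surface area for a given held-up gas volume."""
    return gas_volume_m3 * specific_interfacial_area(diameter_m)

# Compare a 2.0 mm "fine" bubble with a 0.9 mm bubble for the same 0.1 m^3 of air.
for d_mm in (2.0, 0.9):
    area = total_interface_area(0.1, d_mm / 1000.0)
    print(f"{d_mm:.1f} mm bubbles: ~{area:,.0f} m^2 of gas-liquid interface")
```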
Fine bubble diffusers have largely replaced coarse bubble diffusers and mechanical aerators in most of the developed world and in much of the developing world. The exception would be in secondary treatment phases, such as activated sludge processing tanks, where 85 to 90 percent of any remaining solid materials (floating on the surface) are removed through settling or biological processes. The biological process uses air to encourage bacterial growth that consumes many of these waste materials, such as phosphorus and nitrogen dissolved in the wastewater. The larger air release openings of a coarse bubble diffuser help to facilitate a higher oxygen transfer rate and bacterial growth. One disadvantage of using fine bubble diffusers in activated sludge tanks is the tendency of floc (particles) to clog the small air release holes.
See also
List of waste-water treatment technologies
References
Sewerage
Water treatment
Environmental engineering
Water technology | Fine bubble diffusers | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 575 | [
"Water treatment",
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering",
"Water technology"
] |
8,856,484 | https://en.wikipedia.org/wiki/Broad-leaved%20tree | A broad-leaved, broad-leaf, or broadleaf tree is any tree within the diverse botanical group of angiosperms that has flat leaves and produces seeds inside of fruits. It is one of two general types of trees, the other being a conifer, a tree with needle-like or scale-like leaves and seeds borne in woody cones. Broad-leaved trees are sometimes known as hardwoods.
Most deciduous trees are broad-leaved but some are coniferous, like larches.
Tree types
Gallery
See also
Leaf
Temperate broadleaf and mixed forests
Mixed coniferous forest
Tropical and subtropical dry broadleaf forests
References
External links
Identifying Broadleaf Trees and Shrubs. CMG Garden Notes. Colorado State University Extension.
Trees
Plant morphology | Broad-leaved tree | [
"Biology"
] | 152 | [
"Plant morphology",
"Plants"
] |
8,856,894 | https://en.wikipedia.org/wiki/Energy%20poverty | In developing countries and some areas of more developed countries, energy poverty is lack of access to modern energy services in the home. In 2022, 759 million people lacked access to consistent electricity and 2.6 billion people used dangerous and inefficient cooking systems. Their well-being is negatively affected by very low consumption of energy, use of dirty or polluting fuels, and excessive time spent collecting fuel to meet basic needs.
Predominant indices for measuring the complex nature of energy poverty include the Energy Development Index (EDI), the Multidimensional Energy Poverty Index (MEPI), and Energy Poverty Index (EPI). Both binary and multidimensional measures of energy poverty are required to establish indicators that simplify the process of measuring and tracking energy poverty globally. Energy poverty often exacerbates existing vulnerabilities amongst underprivileged communities and negatively impacts public and household health, education, and women's opportunities.
According to the Energy Poverty Action initiative of the World Economic Forum, "Access to energy is fundamental to improving quality of life and is a key imperative for economic development. In the developing world, energy poverty is still rife." As a result of this situation, the United Nations (UN) launched the Sustainable Energy for All Initiative and designated 2012 as the International Year for Sustainable Energy for All, which had a major focus on reducing energy poverty.
The term energy poverty is also sometimes used in the context of developed countries to mean an inability to afford energy in the home. This concept is also known as fuel poverty or household energy insecurity.
Description
Many people in developing countries do not have modern energy infrastructure. They have relied heavily on traditional biomass such as wood fuel, charcoal, crop residues, and wood pellets. Although some developing countries like the BRICS have neared the energy-related technological level of developed countries and have financial power, most developing countries are still dominated by traditional biomass. According to the International Energy Agency, "use of traditional biomass will decrease in many countries, but is likely to increase in South Asia and sub-Saharan Africa alongside population growth."
An energy ladder shows the improvement of energy use corresponding to an increase in household income. As income increases, the energy types used by households become cleaner and more efficient, but also more expensive, moving from traditional biomass up to electricity. "Households at lower levels of income and development tend to be at the bottom of the energy ladder, using fuel that is cheap and locally available but not very clean nor efficient. According to the World Health Organization, over three billion people worldwide are at these lower rungs, depending on biomass fuels—crop waste, dung, wood, leaves, etc.—and coal to meet their energy needs. A disproportionate number of these individuals reside in Asia and Africa: 95% of the population in Afghanistan uses these fuels, 95% in Chad, 87% in Ghana, 82% in India, 80% in China, and so forth. As incomes rise, we would expect that households would substitute to higher-quality fuel choices. However, this process has been quite slow. In fact, the World Bank reports that the use of biomass for all energy sources has remained constant at about 25% since 1975."
Causes
One cause of energy poverty is lack of modern energy infrastructure like power plants, transmission lines, and underground pipelines to deliver energy resources such as natural gas. When infrastructure does make modern energy available, its cost may be out of reach for poorer households, so they avoid using it.
Units of analysis
Domestic energy poverty
Domestic energy poverty refers to a situation where a household does not have access or cannot afford to have the basic energy or energy services to achieve day to day living requirements. These requirements can change from country to country and region to region. The most common needs are lighting, cooking energy, domestic heating or cooling.
Other authors distinguish several categories of energy needs: "fundamental energy needs" associated with human survival in extremely poor situations; "basic energy needs" required for attaining basic living standards, which include all the functions in the previous category (cooking, heating and lighting) plus energy for basic services linked to health, education and communications; "energy needs for productive uses", when, beyond basic needs, the user requires energy to make a living; and finally "energy for recreation", when the user has fulfilled the previous categories and needs energy for enjoyment. Until recently, energy poverty definitions took only the minimum energy quantity required into consideration, but a different school of thought holds that not only the quantity but also the quality and cleanliness of the energy used should be taken into account when defining energy poverty.
One such definition reads as:
"A person is in 'energy poverty' if they do not have access to at least:
(a) the equivalent of 35 kg LPG for cooking per capita per year from liquid and/or gas fuels or from improved supply of solid fuel sources and improved (efficient and clean) cook stoves
and
(b) 120kWh electricity per capita per year for lighting, access to most basic services (drinking water, communication, improved health services, education improved services and others) plus some added value to local production
An 'improved energy source' for cooking is one which requires less than 4 hours person per week per household to collect fuel, meets the recommendations WHO for air quality (maximum concentration of CO of 30 mg/M3 for 24 hours periods and less than 10 mg/ M3 for periods 8 hours of exposure), and the overall conversion efficiency is higher than 25%. "
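Read literally, the quantitative part of this definition reduces to a simple binary test. The sketch below encodes only the two numeric thresholds quoted above; the fuel-quality, air-quality, and collection-time conditions are deliberately left out, and the function and field names are hypothetical.

```python
def meets_minimum_energy_access(lpg_kg_per_capita_year, electricity_kwh_per_capita_year):
    """Binary check against the quantitative thresholds quoted above:
    at least 35 kg of LPG-equivalent cooking fuel and 120 kWh of electricity
    per person per year. (The definition's fuel-quality and collection-time
    conditions are not modelled here.)"""
    return lpg_kg_per_capita_year >= 35 and electricity_kwh_per_capita_year >= 120

# Hypothetical households
print(meets_minimum_energy_access(40, 150))  # True  -> not energy poor by this measure
print(meets_minimum_energy_access(20, 300))  # False -> energy poor (cooking fuel shortfall)
```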
Composite indices
Energy Development Index (EDI)
First introduced in 2004 by the International Energy Agency (IEA), the Energy Development Index (EDI) aims to measure a country's transition to modern fuels. It is calculated as the weighted average of four indicators: "1) Per capita commercial energy consumption as an indicator of the overall economic development of a country; 2) Per capita consumption of electricity in the residential sector as a metric of electricity reliability and customers׳ ability to financially access it; 3) Share of modern fuels in total residential energy sector consumption to indicate access to modern cooking fuels; 4) Share of population with access to electricity." (The EDI was modeled after the Human Development Index (HDI).) Because the EDI is calculated as the average of indicators which measure the quality and quantity of energy services at a national level, the EDI provides a metric that provides an understanding of the national level of energy development. At the same time, this means that the EDI is not well-equipped to describe energy poverty at a household level.
Multidimensional Energy Poverty Index (MEPI)
The MEPI measures whether an individual is energy poor or rich based on how intensely they experience energy deprivation. Energy deprivation is categorized by seven indicators: "access to light, modern cooking fuel, fresh air, refrigeration, recreation, communication, and space cooling." An individual is considered energy poor if they experience a predetermined number of energy deprivations. The MEPI is calculated by multiplying the ratio of people identified as energy poor to the total sample size and the average intensity of energy deprivation of the energy poor. A strength of the MEPI is that it takes into account the number of energy poor along with the intensity of their energy poverty. On the other hand, because it collects data at a household or individual level, it is harder to understand the broader national context.
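The multiplication described above (headcount ratio times the average intensity of deprivation among the energy poor) can be sketched as follows. This is a simplified, equal-weight illustration with hypothetical data; the published MEPI weights its indicators and applies the poverty cutoff to the weighted deprivation score.

```python
def mepi(deprivation_matrix, cutoff):
    """Multidimensional Energy Poverty Index = H * A.

    deprivation_matrix -- list of per-household 0/1 deprivation vectors
                          (1 = deprived in that indicator)
    cutoff             -- minimum number of deprivations to count as energy poor
    """
    if not deprivation_matrix:
        return 0.0
    poor = [sum(row) for row in deprivation_matrix if sum(row) >= cutoff]
    if not poor:
        return 0.0
    headcount_ratio = len(poor) / len(deprivation_matrix)      # H
    n_indicators = len(deprivation_matrix[0])
    avg_intensity = sum(poor) / (len(poor) * n_indicators)     # A
    return headcount_ratio * avg_intensity

# Hypothetical sample: 4 households scored on 7 indicators (lighting, cooking
# fuel, indoor air, refrigeration, recreation, communication, space cooling).
households = [
    [1, 1, 1, 1, 0, 1, 1],   # deprived in 6 of 7 indicators
    [0, 1, 0, 1, 0, 0, 0],   # deprived in 2 of 7
    [1, 1, 1, 0, 1, 1, 0],   # deprived in 5 of 7
    [0, 0, 0, 0, 0, 0, 0],   # not deprived
]
print(f"MEPI = {mepi(households, cutoff=3):.3f}")  # H = 0.5, A = 11/14, MEPI ~ 0.393
```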
Energy Poverty Index (EPI)
Developed by Mirza and Szirmai in their 2010 study to measure energy poverty in Pakistan, the Energy Poverty Index (EPI) is calculated by averaging the energy shortfall and energy inconvenience of a household. Energy inconvenience is measured through indicators such as: "Frequency of buying or collecting a source of energy; Distance from household traveled; Means of transport used; Household member's involvement in energy acquisition; Time spent on energy collection per week; Household health; Children's involvement in energy collection." Energy shortfall is measured as the lack of sufficient energy to meet basic household needs. This index weighs more heavily the impact of the usability of energy services rather than its access. Similar to the MEPI, the EPI collects data at a micro-level which lends to greater understanding of energy poverty at the household level.
Critiques of measuring energy poverty
Energy poverty is challenging to define and measure because energy services cannot be measured concretely and there are no universal standards for what counts as basic energy services. Energy poverty is too complex to capture with a single indicator and framework that is accepted internationally. Therefore, binary measures and multidimensional measures of energy poverty are required to consolidate and establish indicators that simplify the process of measuring and tracking energy poverty globally. There is no homogeneous definition or international measure to use as a standard globally; even the definition of energy poverty is not the same among countries in the European Union.
Intersectional issues
Energy poverty often exacerbates existing vulnerabilities amongst already disadvantaged communities. For instance, energy poverty negatively impacts women's health, threatens the quality and quantity of children's education, and damages household and public health.
Gender
In developing countries, women and girls' health, educational, and career opportunities are significantly affected by energy because they are usually responsible for providing the primary energy for households. Women and girls spend significant amount of time looking for fuel sources like wood, paraffin, dung, etc. leaving them less time to pursue education, leisure, and their careers. Additionally, using biomass as fuel for heating and cooking disproportionately affects women and children as they are the primary family members responsible for cooking and other domestic activities within the home. Being more vulnerable to household air pollution from burning biomass, 85% of the 2 million deaths from indoor air pollution are attributed to women and children. In developed countries, women are more vulnerable to experiencing energy poverty because of their relatively low income compared to the high cost of energy services. For example, women-headed households made up 38% of the 5.6 million French households who were unable to adequately heat their homes. Older women are particularly more vulnerable to experiencing energy poverty because of structural gender inequalities in financial resources and the ability to invest in energy-saving strategies.
Education
With many dimensions of poverty, education is a very powerful agent for mitigating the effects of energy poverty. Limited electricity access affects students' quality of education because it can limit the amount of time students can study by not having reliable energy access to study after sunset. Additionally, having consistent access to energy means that girl children, who are usually responsible for collecting fuel for their household, have more time to focus on their studies and attend school.
Ninety percent of children in Sub-Saharan Africa go to primary schools that lack electricity. In Burundi and Guinea, only 2% of schools are electrified, while in DR Congo there is only 8% school electrification for a population of 75.5 million. In the DRC alone, by these statistics, there are almost 30 million children attending school without power.
Education is a key component in growing human capital which in turn facilitates economic growth by enabling people to be more productive workers in the economy. As developing nations accumulate more capital, they can invest in building modern energy services while households gain more options to pursue modern energy sources and alleviate energy poverty.
Health
Due to traditional gender roles, women are generally responsible for gathering traditional biomass for energy. Women also spend much time cooking in the kitchen. Spending significant time harvesting energy resources means women have less time to devote to other activities, and the physically straining labor causes chronic fatigue. Moreover, women, and the children who stay near their mothers to help with domestic chores, are in danger of long-term exposure to indoor air pollution caused by burning traditional biomass fuels. During combustion, carbon monoxide, particulates, benzene, and the like threaten their health. As a result, many women and children suffer from acute respiratory infections, lung cancer, asthma, and other diseases. "According to the World Health Organization, exposure to indoor air pollution is responsible for the nearly two million excess deaths, primarily women and children, from cancer, respiratory infections and lung diseases and for four percent of the global burden of disease. In relative terms, deaths related to biomass pollution kill more people than malaria (1.2 million) and tuberculosis (1.6 million) each year around the world." Lack of access to energy services has also been shown to increase feelings of isolation and despair among those affected by these disadvantages.
Another connection between energy poverty and health is that energy-poor households are more likely to use traditional biomass such as wood and cow dung to fulfill their energy needs. Burning wood and cow dung leads to incomplete combustion and releases black carbon into the atmosphere, which can be a health hazard. Research has found that people who live in energy poverty have an increased risk of respiratory diseases such as influenza and asthma, as well as higher mortality rates during winter. Moreover, research analyzing inadequate heating systems in houses in the United Kingdom has found a correlation between this lack of access to proper heating and an increased risk of mortality from cardiovascular disease.
One specific recommendation for reducing the negative effects of energy poverty on public health is the distribution of clean, efficient cookstoves among disadvantaged communities that suffer from a lack of access to energy services. Proposed as a means of improving public health and welfare, the distribution of cleaner cookstoves could be a relatively inexpensive and immediate way of decreasing mortality associated with energy poverty. Distributing cleaner liquefied petroleum gas (LPG) or electric stoves in developing countries would reduce inadequate cooking conditions and dangerous exposure to traditional biomass fuel. Although this change to cleaner, more convenient appliances is practical, there is still great emphasis within the movement on eliminating energy poverty through substantial policy change.
Development
"Energy provides services to meet many basic human needs, particularly heat, motive power (e.g. water pumps and transport) and light. Business, industry, commerce and public services such as modern healthcare, education and communication are highly dependent on access to energy services. Indeed, there is a direct relationship between the absence of adequate energy services and many poverty indicators such as infant mortality, illiteracy, life expectancy and total fertility rate. Inadequate access to energy also exacerbates rapid urbanization in developing countries, by driving people to seek better living conditions. Increasing energy consumption has long been tied directly to economic growth and improvement in human welfare. However it is unclear whether increasing energy consumption is a necessary precondition for economic growth, or vice versa. Although developed countries are now beginning to decouple their energy consumption from economic growth (through structural changes and increases in energy efficiency), there remains a strong direct relationship between energy consumption and economic development in developing countries."
Climate change
In 2018, 70% of greenhouse gas emissions were a result of energy production and use. Historically, the top-emitting 5% of countries account for 67.74% of total emissions, while the lowest-emitting 50% of countries produce only 0.74% of total historic greenhouse gas emissions. Thus, the distribution, production, and consumption of energy services are highly unequal and reflect the greater systemic barriers that prevent people from accessing and using energy services. Additionally, there is growing pressure on developing countries to invest in renewable sources of energy rather than following the energy development patterns of developed nations.
The effects of global warming on energy poverty vary by region. In countries with cold climates, where energy poverty is primarily due to the lack of access to proper heating, average temperature increases result in warmer winters and lower energy poverty rates. By contrast, in countries with warm climates, where energy poverty is primarily a result of inadequate access to cooling, warmer temperatures exacerbate energy poverty.
Regional analysis
Energy poverty is a complex issue that is sensitive to the nuances of the culture, time, and space of a region. Thus, the terms "Global North" and "Global South" are generalizations and not always sufficient to describe the nuances of energy poverty, although there are broad trends in how energy poverty is experienced and mitigated between the Global North and South.
Global North
Energy poverty is most commonly discussed as fuel poverty in the Global North, where discourse is focused on households' access to energy sources to heat, cool, and power their homes. Fuel poverty is driven by high energy costs, low household incomes, and inefficient appliances. Additionally, older people are more vulnerable to experiencing fuel poverty because of their income status and lack of access to energy-saving technologies. According to the European Fuel Poverty and Energy Efficiency (EPEE) project, approximately 50-125 million people in Europe live in fuel poverty. Like energy poverty, fuel poverty is hard to define and measure because of its many nuances. The United Kingdom and Ireland are among the few countries that have defined fuel poverty: a household is considered fuel poor if more than 10% of its income is spent on heating and cooling. The British New Economics Foundation has proposed a National Energy Guarantee (NEG) to lower and fix prices on essential energy. Another EPEE project found that 1 in 7 households in Europe were on the margins of fuel poverty, using indicators such as leaky roofs, arrears on utility bills, inability to pay for adequate heating, and mold around windows. High energy prices, insufficient insulation in dwellings, and low incomes contribute to increased vulnerability to fuel poverty. Climate change adds more pressure as weather events become both colder and hotter, increasing demand for fuel to cool and heat the home. The ability to provide adequate heating during cold weather has implications for people's health, as cold weather can aggravate cardiovascular and respiratory illness.
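The 10% definition quoted above lends itself to a simple threshold indicator. The sketch below is a minimal, hypothetical illustration: the function name and household figures are invented, and real fuel poverty statistics typically use modelled required energy spending rather than actual bills.

```python
def is_fuel_poor(annual_income: float, annual_energy_cost: float,
                 threshold: float = 0.10) -> bool:
    """Classify a household as fuel poor if required energy spending exceeds
    the threshold share of income (10% in the UK/Ireland-style definition)."""
    if annual_income <= 0:
        return True  # no income available to meet any energy need
    return (annual_energy_cost / annual_income) > threshold

# Hypothetical household: 18,000 income, 2,100 needed for adequate heating/cooling
print(is_fuel_poor(18_000, 2_100))  # True -> about 11.7% of income
```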
Brenda Boardman's book, Fuel Poverty: From Cold Homes to Affordable Warmth (1991), motivated the development of public policy to address energy poverty and the study of its causes, symptoms, and effects in society. When the concept was first introduced in Boardman's book, energy poverty was described as not having enough power to heat and cool homes. Today, energy poverty is understood to be the result of complex systemic inequalities which create barriers to accessing modern energy at an affordable price. Energy poverty is challenging to measure, and thus to analyze, because it is privately experienced within households, specific to cultural contexts, and changes dynamically depending on time and place.
Global South
Energy poverty in the Global South is largely driven by a lack of access to modern energy sources because of poor energy infrastructure, weak energy service markets, and insufficient household incomes to afford energy services. However, recent research suggests that alleviating energy poverty requires more than building better power grids, because a complex web of political, economic, and cultural factors influences a region's ability to transition to modern energy sources. Energy poverty is strongly linked to many sustainable development goals because greater energy access enables people to exercise more of their capabilities. For example, greater access to clean energy for cooking improves the health of women by reducing the indoor air pollution associated with burning traditional biomass; farmers can find better prices for their crops using telecommunication networks; and people have more time to pursue leisure and income-generating activities with the time saved from looking for firewood and other traditional fuels. Because the impacts of energy poverty on sustainable development are so complex, energy poverty is largely addressed through other avenues that promote sustainable development in regions within the Global South.
Africa
Sub-Saharan Africa, Latin America, and South Asia are the three regions in the world most affected by energy poverty. Africa's unique challenge with energy poverty is its rapid urbanization and booming urban centers. On average, only 25% of people who reside in urban areas in Africa have electricity access. Study findings have informed policy makers in African countries on state intervention methods to increase household energy access and reduce the gap in educational opportunities between rural and urban areas. Historical trends show that Africa's rapid population growth has not been proportionally matched by increased access to electricity. The rise of poverty in urban centers in addition to the growing population and energy demand is driving up the cost of electricity, making energy even more inaccessible for Africa's least advantaged individuals.
A study involving data from 33 African countries from 2010-2017 demonstrates a strong correlation between energy poverty, infant mortality, and inequality in education. Mortality among children under 5 in Africa is a prevalent consequence of energy poverty. The spread of waterborne diseases, smoke emissions, and low fuel quality continues to affect child mortality and negatively impact educational performance among children in the region. Although electricity access in Africa's urban areas is not keeping pace with rapid urbanization, most of the 2.8 billion people who still use unclean and unsafe cooking facilities reside in the rural regions of Sub-Saharan Africa. On average, girls receive less education than boys in the rural areas of the region affected by the lack of clean energy sources. There is an increased need for decentralized sources of energy to mitigate the consequences of energy poverty in rural areas of Africa and its disproportionate effect on women's health and education.
Of the African population suffering from energy poverty, 57% are women and 43% are men. The "energy-gender-poverty" nexus describes this relationship between energy poverty and gender inequality. Because women are typically assigned "energy responsibilities" in African cultures, such as fetching daily loads of coal and firewood to meet their households' energy needs, they are typically at the forefront of the consequences of energy poverty. Mitigating policies include a shift towards gender-friendly allocation of energy responsibilities and increased access to, and affordability of, modern and clean energy.
South Asia
Energy poverty in South Asia encompasses more than unreliable, unaffordable access to energy; it also includes the broader dimensions of growing demand for electricity, access to energy, energy dependence, environmental threats to the energy system, and global pressure to decarbonize. Energy demand in South Asia has grown at an average annual rate of five percent over the past two decades, and this demand is projected to double by 2050. The demand for electricity in particular has been driven by the increasing population and the development of industry throughout the region. Although a push for energy efficiency has substantially moderated the electricity demand associated with economic growth, the electricity system in the region still struggles to meet the needs of the growing population and economy.
In 2020, 95.8 percent of the total population in South Asia, and 99.7 percent of the urban population, had access to electricity, making it the second-largest region in the world with an electricity access deficit. However, in India only ten percent of homes in a village must be connected to the electricity grid in order for that entire village to be considered electrified. Other complications that lead to energy poverty include: flaws in the energy system that result in power losses, load shedding practices that shut down the grid during peak periods, and power that is stolen through informal electricity lines.
The reliability of the electricity system can also be hindered by the source of the electricity generated. In 2014, South Asia imported one-third of the total energy consumed in the region. Due to this energy dependence on imported fuel, energy resource scarcity and fluctuations in global price can result in higher costs for electricity in South Asia and can therefore make electricity services less accessible for the least advantaged people. The issue of energy poverty is compounded when climate change is factored into the equation. South Asian cities like Delhi in India are bearing the social and fiscal costs of this demand-supply gap, resulting in a power crisis.
Latin America
The United Nations Development Programme (UNDP) and the Inter-American Development Bank have provided reports and reviews of programs and policies designed to address energy poverty within Latin America and the Caribbean (LAC). Although studies show that 96 percent of inhabitants of the LAC region have access to electricity, gaps in energy poverty are still prevalent. Often linked to socioeconomic cleavages, energy poverty within LAC still exposes more than 80 million people to respiratory illnesses and diseases because they rely on fuels like charcoal for cooking.
According to the United Nations, urban energy poverty in Latin America has nearly doubled in the last two decades. Growing rates of urbanization and industrialization in Buenos Aires, Argentina; Rio de Janeiro, Brazil; and Caracas, Venezuela have exacerbated the regions' high energy losses, inefficient energy use, and political opportunism targeting marginalized groups affected by urban poverty. Analyzing energy poverty in Argentina, Brazil, and Venezuela has been critical in understanding energy access within urban areas and its challenges in the context of global development. The widespread problem of energy poverty across Latin America does not have a uniform solution; different efforts and legislation to increase energy accessibility have had opposing effects in different Latin American countries. In Venezuela, for instance, public attitudes support the free supply of energy across the nation, while in Brazil the public is willing to pay as long as the government passes reforms to keep energy services affordable. Although there has been a recent increase in studies of energy poverty in Latin America, there have historically been few studies and little data on its prevalence in many Latin American countries with differing climates. For instance, studies in Mexico in 2022 determined that 66 percent of households suffered from energy poverty, with 38 percent of cases due to accessibility and 34 percent due to affordability.
International efforts
International development agencies' intervention methods have not been entirely successful. The COVID-19 pandemic, which pushed more than 150 million people into poverty, has demonstrated an increased need for international energy resilience through housing, economic, social, and environmental policies. "International cooperation needs to be shaped around a small number of key elements that are all familiar to energy policy, such as institutional support, capacity development, support for national and local energy plans, and strong links to utility/public sector leadership. This includes national and international institutions as well as the ability to deploy technologies, absorb and disseminate financing, provide transparent regulation, introduce systems of peer review, and share and monitor relevant information and data."
European Union
There is an increasing focus on energy poverty in the European Union. In 2013, the European Economic and Social Committee issued an official opinion on the matter, recommending that Europe focus on energy poverty indicators and analysis, consider an energy solidarity fund, analyze member states' energy policy in economic terms, and run a consumer energy information campaign. In 2016, it was reported that several million people in Spain live in conditions of energy poverty. These conditions have led to a few deaths and to public anger at the electricity suppliers' artificial and "absurd pricing structure" designed to increase their profits. In 2017, poor households in Cyprus were found to live with low indoor thermal quality: their average indoor air temperatures were outside the accepted comfort zone for the island, and their heating energy consumption was lower than the country's average for the clusters characterized by high and partial deprivation. This is because low-income households cannot afford to use the energy required to achieve and maintain adequate indoor temperatures.
Global Environmental Facility
"In 1991, the World Bank Group, an international financial institution that provides loans to developing countries for capital programs, established the Global Environmental Facility (GEF) to address global environmental issues in partnership with international institutions, private sector, etc., especially by providing funds to developing countries' all kinds of projects. The GEF provides grants to developing countries and countries with economies in transition for projects related to biodiversity, climate change, international waters, land degradation, the ozone layer, and persistent organic pollutants. These projects benefit the global environment, linking local, national, and global environmental challenges and promoting sustainable livelihoods. GEF has allocated $10 billion, supplemented by more than $47 billion in cofinancing, for more than 2,800 projects in more than 168 developing countries and countries with economies in transition. Through its Small Grants Programme (SGP), the GEF has also made more than 13,000 small grants directly to civil society and community-based organizations, totalling $634 million.
The GEF partnership includes 10 agencies: the UN Development Programme; the UN Environment Programme; the World Bank; the UN Food and Agriculture Organization; the UN Industrial Development Organization; the African Development Bank; the Asian Development Bank; the European Bank for Reconstruction and Development; the Inter-American Development Bank; and the International Fund for Agricultural Development. The Scientific and Technical Advisory Panel provides technical and scientific advice on the GEF's policies and projects."
Climate Investment Funds
"The Climate Investment Funds (CIF) comprises two Trust Funds, each with a specific scope and objective and its own governance structure: the Clean Technology Fund (CTF) and the Strategic Climate Fund (SCF). The CTF promotes investments to initiate a shift towards clean technologies. The CTF seeks to fill a gap in the international architecture for development finance available at more concessional rates than standard terms used by the Multilateral Development Banks (MDBs) and at a scale necessary to help provide incentives to developing countries to integrate nationally appropriate mitigation actions into sustainable development plans and investment decisions. The SCF serves as an overarching fund to support targeted programs with dedicated funding to pilot new approaches with potential for scaled-up, transformational action aimed at a specific climate change challenge or sectoral response. One of SCF target programs is the Program for Scaling-Up Renewable Energy in Low Income Countries (SREP), approved in May 2009, and is aimed at demonstrating the economic, social and environmental viability of low carbon development pathways in the energy sector by creating new economic opportunities and increasing energy access through the use of renewable energy."
See also
Agency for Non-conventional Energy and Rural Technology
Ashden Awards for Sustainable Energy
Energy for All
International Renewable Energy Agency
Nusantara Development Initiatives
Renewable energy in Africa
Renewable energy in China
Renewable energy in developing countries
Solar power in South Asia
Solar powered refrigerator
SolarAid
Sustainable Energy for All
UN-Energy
Wind power in Asia
Fuel poverty
Energy conservation
Climate change mitigation
Energy poverty and gender
References
External links
Alliance for Rural Electrification - a not-for-profit business association that promotes access to energy in developing countries
Household Energy Network (HEDON) - NGO promoting household energy solutions in developing countries
Lifeline Energy - a not-for-profit organization that provides renewable energy alternatives to those most in need in sub-Saharan Africa
Energy Poverty Advisory HUB (EPAH) - https://energypoverty.eu/
- Paper on energy poverty of the poor in India
GatesNotes 2016 Annual Letter
Energizing Finance reports - Supply and demand for finance for electricity and clean cooking
Tracking SDG7: The Energy Progress Report by the International Energy Agency (IEA), the International Renewable Energy Agency (IRENA), United Nations Statistics Division (UNSD), the World Bank, and the World Health Organization (WHO)
Energy policy
Renewable energy commercialization
International development
Poverty
Aid
Development economics
Humanitarian aid | Energy poverty | [
"Environmental_science"
] | 6,348 | [
"Environmental social science",
"Energy policy"
] |
8,857,956 | https://en.wikipedia.org/wiki/Progymnosperm | The progymnosperms are an extinct group of woody, spore-bearing plants that is presumed to have evolved from the trimerophytes, and eventually gave rise to the spermatophytes, ancestral to both gymnosperms and angiosperms (flowering plants). They have been treated formally at the rank of division, as Progymnospermophyta, or of class, as Progymnospermopsida. The stratigraphically oldest known examples belong to the Middle Devonian order Aneurophytales, with forms such as Protopteridium, in which the vegetative organs consisted of relatively loose clusters of axes. Tetraxylopteris is another example of a genus lacking leaves. In more advanced aneurophytaleans such as Aneurophyton these vegetative organs started to look rather more like fronds, and eventually during Late Devonian times the aneurophytaleans are presumed to have given rise to the pteridosperm order Lyginopteridales. In Late Devonian times, another group of progymnosperms gave rise to the first really large trees, known as Archaeopteris. The latest surviving group of progymnosperms is the Noeggerathiales, which persisted until the end of the Permian.
Other characteristics:
Vascular cambium with unlimited growth potential is present as well as xylem and phloem.
Ancestors of the earliest seed plants as well as the first true trees.
Strong monopodial growth is exhibited.
Some were heterosporous but others were homosporous.
Phylogeny
Progymnosperms are a paraphyletic grade of plants.
References
External links
Progymnospermophyta
Botany: an introduction to plant biology
Middle Devonian first appearances
Middle Devonian plants
Mississippian plants
Mississippian extinctions
Late Devonian plants
Paraphyletic groups
Prehistoric plant taxa | Progymnosperm | [
"Biology"
] | 413 | [
"Phylogenetics",
"Paraphyletic groups"
] |
8,858,088 | https://en.wikipedia.org/wiki/Emodepside | Emodepside is an anthelmintic drug that is effective against a number of gastrointestinal nematodes and is licensed for use in cats. It belongs to the class of drugs known as the octadepsipeptides, a relatively new class of anthelmintic (research into these compounds began in the early 1990s) suspected to achieve its anti-parasitic effect through a novel mechanism of action, because these compounds can kill nematodes resistant to other anthelmintics.
Synthesis
Emodepside is synthesised by attaching a morpholine ring “at the paraposition of each of the two D-phenyllactic acids” of PF1022A, a metabolite of the fungus Mycelia sterilia, which inhabits the leaves of Camellia japonica, a flowering shrub.
Anthelmintic effects
When applied to nematodes, emodepside has been shown to have a range of effects: it inhibits muscle contraction in the parasitic nematode Ascaris suum, inhibits locomotion and pharyngeal movement in Caenorhabditis elegans, and has effects in other tissues, such as the inhibition of egg laying.
Mechanism of action
One of the ways in which this drug achieves its effects has been shown to be through binding to a group of G-protein coupled receptors called latrophilins, first identified as being target proteins for α-latrotoxin (the other target protein of α-LTX being neurexin, a membrane receptor with laminin-like extracellular domains), a component of black widow spider venom that can cause paralysis and subsequent death in nematodes and humans alike. LAT-1 (1014 amino acids, 113 KDa coded by the B0457.1 gene) and LAT-2 (1338 amino acids, 147 KDa coded by the B0286.2 gene) are located presynaptically at the neuromuscular junction in Caenorhabditis elegans and share 21% amino acid identity with each other (the amino acid sequence homology LAT-1 shares with rat, bovine and human latrophilins has been shown to be 22, 23 and 21% respectively).
Following receptor-ligand binding, a conformational change induced in the receptor activates the Gq protein, freeing the Gqα subunit from the βγ complex. The Gqα protein then goes on to couple-to and activate the signaling molecule phospholipase-C-β, a protein that has been identified as being key to the modulation of regulatory pathways of vesicle release in C.elegans.
In its signaling cascade, PLC-β (like other phospholipases) hydrolyses phosphatidylinositol bisphosphate to yield inositol trisphosphate (IP3) and diacylglycerol (DAG). Because IP3 receptors are sparsely distributed in the pharyngeal nervous system of C. elegans (one of the tissues where LAT-1 agonists such as α-LTX and emodepside have their most pronounced effects), and because phorbol esters (which mimic the effects of DAG) have a stimulatory action on synaptic transmission, it has been concluded that the DAG arm of the cascade regulates neurotransmitter release.
Indeed, in C.elegans DAG regulates UNC-13, a plasma-membrane associated protein critical for vesicle-mediated neurotransmitter release and mutational studies have shown that two UNC-13 reduction of function mutants show resistance to emodepside, observations supporting this hypothesized mechanism of action.
The mechanism by which activation of UNC-13 results in neurotransmitter release (the ultimate result of latrophilin activation) is through interaction with the synaptosomal membrane protein syntaxin, with UNC-13 binding to the N-terminus of syntaxin and promoting the switch from the closed form of syntaxin (which is incompatible with SNARE complex synaptobrevin, SNAP-25 and syntaxin formation) to its open formation so that SNARE complex formation can be achieved, thereby allowing vesicle fusion and release to take place.
At a molecular level, the net result of the activation of this pathway is the spontaneous stimulation of the release of inhibitory PF1-like neuropeptides. (This is suspected because emodepside's inhibition of acetylcholine-elicited muscle contraction requires both calcium ions and extracellular potassium ions, similar to the action of PF1/PF2. Although in experiments on synaptosomes α-LTX triggered non-calcium-dependent exocytosis of vesicles containing acetylcholine, glutamate and GABA, both glutamate and GABA have been ruled out as the sole neurotransmitters responsible for emodepside's action.) These neuropeptides then act on the post-synaptic membrane (i.e. the pharyngeal or muscle membrane) of the nematode, having an inhibitory effect that either induces paralysis or inhibits pharyngeal pumping, both of which ultimately result in the death of the organism.
Mutational studies involving LAT-1 knockout and LAT-2 gene deletion mutants have revealed that the role of latrophilin receptors in the different tissues that they are expressed differs between subtypes, with LAT-1 being expressed in the pharynx of C.elegans (thereby modulating pharyngeal pumping) and LAT-2 having a role in locomotion.
In addition to exerting an effect on the nematode via binding to latrophilin receptors, there is also recent evidence that emodepside interacts with the BK potassium channel encoded by the gene slo-1. This protein is a member of the six-transmembrane-helix structural class of potassium ion channels, with each subunit consisting of six transmembrane helices and one P domain (this P domain is conserved in all potassium ion channels and forms the selectivity filter that enables the channel to transport potassium ions across the membrane in great preference to other ions). These subunits group together to form high-conductance BK-type channels that are gated by both membrane potential and intracellular calcium levels (this calcium-sensing ability is provided by an intracellular tail region on Slo-like subunits that forms a calcium ion binding motif consisting of a run of conserved aspartate residues, termed a "calcium bowl"). Their physiological role is to regulate the excitability of neurons and muscle fibres through their participation in action potential repolarization, with potassium ion efflux being used to repolarize the cell following depolarization.
The presumed effect of emodepside's interaction with these channels is to activate the channel, causing potassium ion efflux, hyperpolarization and subsequent inhibition of the excitatory neurotransmitter's effect (acetylcholine, if acting at the neuromuscular junction). This inhibits synaptic transmission, the production of postsynaptic action potentials and ultimately muscle contraction, manifesting itself as paralysis or reduced pharyngeal pumping.
Whether latrophilin receptors or BK potassium channels constitute emodepside's primary site of action remains to be fully determined. Both LAT-1/LAT-2 and slo-1 mutants (reduction or loss of function) show significant resistance to emodepside, and it is conceivable that both are required for emodepside to exert its full effect.
Therapeutic use
The patent for emodepside is owned by the Bayer Health Care group and is sold in combination with another anthelmintic (praziquantel) for topical application under the tradename Profender.
References
Depsipeptides
Macrocycles
4-Morpholinyl compounds
Peptides
Anthelmintics
Antiparasitic agents
Veterinary drugs | Emodepside | [
"Chemistry",
"Biology"
] | 1,749 | [
"Biomolecules by chemical classification",
"Antiparasitic agents",
"Organic compounds",
"Macrocycles",
"Molecular biology",
"Biocides",
"Peptides"
] |
8,858,360 | https://en.wikipedia.org/wiki/Alvar%20Aalto%20Medal | The Alvar Aalto Medal was established in 1967 by the Museum of Finnish Architecture, the Finnish Association of Architects (SAFA), and the Finnish Architectural Society. The Medal has been awarded intermittently since 1967, when the medal was created in honour of Alvar Aalto. The award is given in recognition of a significant contribution to creative architecture. The award was given earlier at the Alvar Aalto Symposium, held every three years in Jyväskylä, Aalto's hometown. Recently the ceremony has been organized on Aalto's birthday, February 3rd, today the Finnish national Day of Architecture.
The Alvar Aalto Medal is typically awarded every three years in association with five organisations: the Alvar Aalto Foundation, the Finnish Association of Architects (SAFA), the City of Helsinki, the Foundation for the Museum of Finnish Architecture and Architecture Information Finland, and the Finnish Society of Architecture. The medal is said to be awarded to future star architects, avoiding both currently fashionable work and the most radical avant-garde. The medal was last awarded in 2024 to Marie-José Van Hee.
The physical medal awarded to recipients was designed by Finnish architect Heikki Hyytiäinen in collaboration with Alvar Aalto. Its shape resembles a classical amphitheatre, a motif frequently used by Aalto in many of his designs. The medal is cast in bronze.
Recipients of the Alvar Aalto Medal
See also
List of architecture prizes
References
Architecture awards
Awards established in 1967
Alvar Aalto | Alvar Aalto Medal | [
"Engineering"
] | 307 | [
"Architecture stubs",
"Architecture"
] |
8,858,560 | https://en.wikipedia.org/wiki/Phenylene%20group | In organic chemistry, the phenylene group (−C6H4−) is based on a di-substituted benzene ring (arylene). For example, poly(p-phenylene) is a polymer built up from para-phenylene repeating units. The phenylene group has three structural isomers, depending on which hydrogens are substituted: para-phenylene, meta-phenylene, and ortho-phenylene.
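A minimal illustration of the three isomers, written as SMILES strings with '*' marking the two attachment points of the divalent group, is sketched below; the dictionary is purely illustrative and not drawn from the article.

```python
# Each entry pairs a substitution pattern with a SMILES string; '*' denotes
# an open attachment point on the benzene ring.
phenylene_isomers = {
    "ortho (1,2)": "*c1ccccc1*",
    "meta (1,3)": "*c1cccc(*)c1",
    "para (1,4)": "*c1ccc(*)cc1",
}
for name, smiles in phenylene_isomers.items():
    print(f"{name}-phenylene: {smiles}")
```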
References
Arenediyl groups | Phenylene group | [
"Chemistry"
] | 107 | [
"Substituents",
"Arenediyl groups"
] |
8,858,789 | https://en.wikipedia.org/wiki/Evidence-based%20pharmacy%20in%20developing%20countries | Many developing nations have developed national drug policies, a concept that has been actively promoted by the WHO. For example, the national drug policy for Indonesia drawn up in 1983 had the following objectives:
To ensure the availability of drugs according to the needs of the population.
To improve the distribution of drugs in order to make them accessible to the whole population.
To ensure the efficacy, safety, quality and validity of marketed drugs and to promote proper, rational and efficient use.
To protect the public from misuse and abuse.
To develop the national pharmaceutical potential towards the achievements of self-reliance in drugs and in support of national economic growth.
To achieve these objectives in Indonesia, the following changes were implemented:
A national list of essential drugs was established and implemented in all public sector institutions. The list is revised periodically.
A ministerial decree in 1989 required that drugs in public sector institutions be prescribed generically and that Pharmacy and Therapeutics committees be established in all hospitals.
District hospitals and health centers have to procure their drugs based on the essential drugs list.
Most drugs are supplied by three government-owned companies.
Training modules have been developed for drug management and rational drug use and these have been rolled out to relevant personnel.
The central drug laboratory and provincial quality control laboratories have been strengthened.
A major teaching hospital has developed a program on rational drug use, developing a hospital formulary, guidelines for rational diagnosis and treatment guidelines for the rational use of antibiotics.
Generic drugs have been available at affordable costs to low-income groups.
Encouraging rational prescribing
One of the first challenges is to promote and develop rational prescribing, and a number of international initiatives exist in this area. WHO has actively promoted rational drug use as one of the major elements in its Drug Action Programme. In its publication A Guide to Good Prescribing the process is outlined as:
define the patient's problem
specify the therapeutic objectives
verify whether your personal treatment choice is suitable for this patient
start the treatment
give information, instructions and warnings
monitor (stop) the treatment.
The emphasis is on developing a logical approach, and it allows for clinicians to develop personal choices in medicines (a personal formulary) which they may use regularly. The program seeks to promote appraisal of evidence in terms of proven efficacy and safety from controlled clinical trial data, and adequate consideration of quality, cost and choice of competitor drugs by choosing the item that has been most thoroughly investigated, has favorable pharmacokinetic properties and is reliably produced locally. The avoidance of combination drugs is also encouraged.
The routine and irrational use of injections should also be challenged. One study undertaken in Indonesia found that nearly 50% of infants and children and 75% of the patients aged five years or over visiting government health centers received one or more injections. The highest use of injections was for skin disorders, musculoskeletal problems and nutritional deficiencies. Injections, as well as being used inappropriately, are often administered by untrained personnel; these include drug sellers who have no understanding of clean or aseptic techniques.
Another group active in this area is the International Network for the Rational Use of Drugs (INRUD). This organization, established in 1989, exists to promote rational drug use in developing countries. As well as producing training programs and publications, the group is undertaking research in a number of member countries, focused primarily on changing behavior to improve drug use. One of the most useful publications from this group is entitled Managing Drug Supply. It covers most of the drug supply processes and is built up from research and experience in many developing countries. There are a number of case studies described, many of which have general application for pharmacists working in developing countries.
In all the talk of rational drug use, the impact of the pharmaceutical industry cannot be ignored, with its many incentive schemes for doctors and pharmacy staff who dispense, advise or encourage use of particular products. These issues have been highlighted in a study of pharmaceutical sales representative (medreps) in Mumbai. This was an observational study of medreps' interactions with pharmacies, covering a range of neighborhoods containing a wide mix of social classes. It is estimated that there are approximately 5000 medreps in Mumbai, roughly one for every four doctors in the city. Their salaries vary according to the employing organization, with the multinationals paying the highest salaries. The majority work to performance-related incentives. One medrep stated "There are a lot of companies, a lot of competition, a lot of pressure to sell, sell! Medicine in India is all about incentives to doctors to buy your medicines, incentives for us to sell more medicines. Even the patient wants an incentive to buy from this shop or that shop. Everywhere there is a scheme, that's business, that's medicine in India.'
The whole system is geared to winning over confidence and getting results in terms of sales; this is often achieved by means of gifts or invitations to symposia to persuade doctors to prescribe. With the launch of new and expensive antibiotics worldwide, the pressure to sell grows, with little regard for national essential drug lists or rational prescribing. One medrep noted that this was not a business for those overly concerned with morality. Such a statement is a sad reflection on parts of the pharmaceutical industry, which has an important role to play in the development of the health of a nation. It seems likely that short-term gains are made at the expense of increasing problems such as antibiotic resistance. The only alternatives are to ensure practitioners have the skills to appraise medicine promotion activities or to control pharmaceutical promotional activities more stringently.
Rational dispensing
In situations where medicines are dispensed in small, twisted-up pieces of brown paper, the need for patient instruction takes on a whole new dimension. Medicines should be issued in appropriate containers and labelled. While the patient may be unable to read, the healthcare worker is probably literate. There are many tried-and-tested methods in the literature for using pictures and diagrams to aid patient compliance. Symbols such as a rising or setting sun to depict time of day have been used, particularly for treatments where regular medication is important, such as cases of tuberculosis or leprosy.
Poverty may force patients to purchase one day's supply of medicines at a time, so it is important to ensure that antibiotics are used rationally and not just for one or two days' treatment. Often, poor patients need help from pharmacists to understand which are the most important medicines and to identify the items, typically vitamins, that can be missed to reduce the cost of the prescription to a more manageable level.
The essential drugs concept
The essential drugs list concept was developed from a report to the 28th World Health Assembly in 1975 as a scheme to extend the range of necessary drugs to populations who had poor access because of the existing supply structure. The plan was to develop essential drugs lists based on the local health needs of each country and to periodically update these with the advice of experts in public health, medicine, pharmacology, pharmacy and drug management. Resolution number 28.66 at the Assembly requested the WHO Director-General to implement the proposal, which led subsequently to an initial model list of essential drugs (WHO Technical Series no 615, 1977). This model list has undergone regular review at approximately two-yearly intervals and the current 14th list was published in March 2005. The model list is perceived by the WHO to be an indication of a common core of medicines to cover most common needs. There is a strong emphasis on the need for national policy decisions and local ownership and implementation. In addition, a number of guiding principles for essential drug programs have emerged.
The initial essential drugs list should be seen as a starting point.
Generic names should be used where possible, with a cross-index to proprietary names.
Concise and accurate drug information should accompany the list.
Quality, including drug content stability and bioavailability, should be regularly assessed for essential drug supplies.
Decisions should be made about the level of expertise required for drugs. Some countries make all the drugs on the list available to teaching hospitals and have smaller lists for district hospitals and a very short list for health centers.
Success depends on the efficient supply, storage and distribution at every point.
Research is sometimes required to settle the choice of a particular product in the local situation.
The model list of essential drugs
The model list of essential drugs is divided into 27 main sections, which are listed in English in alphabetical order. Recommendations are for drugs and presentations. For example, paracetamol appears as tablets in strengths of 100 mg to 500 mg, suppositories 100 mg and syrup 125 mg/5ml. Certain drugs are marked with an asterisk (previously a square box symbol), which denotes an example of a therapeutic group, and other drugs in the same group could serve as alternatives.
The lists are drawn up by consensus and generally are sensible choices. There are ongoing initiatives to define the evidence that supports the list. This demonstrates the areas where RCTs (randomized controlled trials) or systematic reviews exist and serves to highlight areas either where further research is needed or where similar drugs may exist which have better supporting evidence.
In addition to work to strengthen the evidence base, there is a proposal to encourage the development of Cochrane reviews for drugs that do not have systematic review evidence.
Application of NNTs (numbers needed to treat) to the underpinning evidence should further strengthen the lists. At present, there is an assumption among doctors in some parts of the world that the essential drugs list is really for the poor of society and is somehow inferior. The use of NNTs around analgesics in the list goes some way to disprove this and these developments may increase the importance of essential drugs lists.
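As a rough illustration of how an NNT is derived from trial evidence, the sketch below applies the standard formula NNT = 1 / absolute risk reduction; the event rates are hypothetical and are not taken from the WHO list.

```python
def number_needed_to_treat(control_event_rate: float, treated_event_rate: float) -> float:
    """NNT = 1 / ARR, where ARR = control event rate - treated event rate."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction; NNT is undefined")
    return 1.0 / arr

# Hypothetical analgesic trial: 60% of controls vs 35% of treated patients
# still in pain -> ARR = 0.25, so 4 patients must be treated for one to benefit.
print(number_needed_to_treat(0.60, 0.35))  # 4.0
```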
Communicating clear messages
The impact of pharmaceutical representatives and the power of this approach have led to the concept of academic detailing to provide clear messages. A study by Thaver and Harpham described the work of 25 private practitioners in an area around Karachi. The work was based on an assessment of prescribing practices and, for each practitioner, included 30 prescriptions for acute respiratory infections (ARIs) or diarrhea in children under 12 years of age. A total of 736 prescriptions were analysed and it was found that an average of four drugs were either prescribed or dispensed for each consultation. An antibiotic was prescribed in 66% of prescriptions, and 14% of prescriptions were for an injection. Antibiotics were requested for 81% of diarrhea cases and 62% of ARI cases. Of the 177 prescriptions for diarrhea, only 29% were for oral rehydration solution. The researchers converted this information into clear messages for academic detailing back to the doctors, then implemented the program and assessed the benefits. This was a good piece of work based on developing messages that are supported by evidence.
Drug donations
It is a natural human reaction to want to help in whatever way possible when faced with human disaster, either as a result of some catastrophe or because of extreme poverty. Sympathetic individuals want to take action to help in a situation in which they would otherwise be helpless, and workers in difficult circumstances, only too aware of waste and excess at home, want to make use of otherwise worthless materials. The problem is that these situations do not lend themselves to objectivity. There are numerous accounts of tons of useless drugs being air-freighted into disaster areas. It then requires huge resources to sort out these charitable acts, and often the drugs cannot be identified because the labels are not in a familiar language. In many cases, huge quantities have to be destroyed simply because the drugs are out of date, spoiled, unidentifiable, or totally irrelevant to local needs. Generally, had the cost of shipping been donated instead, many more people would have benefited.
In response to this, the WHO has generated guidelines for drug donations from a consensus of major international agencies involved in emergency relief. If these are followed, a significant improvement in terms of patient benefit and use of human resources will result.
WHO guidelines for drug donations 2005
Selection of drugs
Drugs should be based on expressed need, be relevant to disease pattern and be agreed with the recipient.
Medicines should be listed on the country's essential drugs list or WHO model list.
Formulations and presentations should be similar to those used in the recipient country.
Quality assurance (QA) and shelf life
Drugs should be from a reliable source and WHO certification for quality of pharmaceuticals should be used.
No returned drugs from patients should be used.
All drugs should have a shelf life of at least 12 months after arrival in the recipient country.
Presentation, packing and labelling
All drugs must be labelled in a language that is easily understood in the recipient country and contain details of generic name, batch number, dosage form, strength, quantity, name of manufacturer, storage conditions and expiry date.
Drugs should be presented in reasonable pack sizes (e.g. no sample or patient starter packs).
Material should be sent according to international shipping regulations with detailed packing lists. Any storage conditions must be clearly stated on the containers, which should not weigh more than 50 kg. Drugs should not be mixed with other supplies.
Information and management
Recipients should be informed of all drug donations that are being considered or under way.
Declared value should be based on the wholesale price in the recipient country or on the wholesale world market price.
Cost of international and local transport, warehousing, etc., should be paid by the donor agency unless otherwise agreed with the recipient in advance.
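A rough screening sketch against a few of the criteria above is given below. It is a hypothetical illustration, not an official WHO tool, and it covers only the easily machine-checkable items (shelf life after arrival, labelling, essential-list status, pack weight, and patient-returned stock); all names and figures in the example call are invented.

```python
from datetime import date

def acceptable_donation(expiry: date, arrival: date, labelled_locally: bool,
                        on_essential_list: bool, pack_weight_kg: float,
                        returned_from_patients: bool) -> bool:
    """True only if the consignment passes the quoted checks: at least 12
    months of shelf life after arrival, labelling in a locally understood
    language, an essential-list item, packages under 50 kg, and no
    patient-returned stock."""
    months_left = (expiry.year - arrival.year) * 12 + (expiry.month - arrival.month)
    return (months_left >= 12 and labelled_locally and on_essential_list
            and pack_weight_kg <= 50 and not returned_from_patients)

# Hypothetical consignment: arrives June 2025, expires March 2026 -> only 9 months left
print(acceptable_donation(date(2026, 3, 1), date(2025, 6, 1), True, True, 40, False))  # False
```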
Evidence-based pharmacy practice
While modern practices, including the development of clinical pharmacy, are important, many basic issues await significant change in developing countries.
Medicines can often be found stored together in pharmacological groups rather than in alphabetical order by type.
Refrigerator space is often inadequate and refrigerators unreliable.
There are different challenges, such as ensuring that termites do not consume the outer packages and labels or that storage is free of other vermin such as rats.
Dispensary packaging and labelling can be woefully inadequate and patients leave with little or no understanding of how to take medicines which may have cost them at least one week's earnings.
Medicines are often out of stock, not just for a few hours but for days or even weeks, particularly at the end of the financial year.
Protocols and standard operating procedures are rarely found.
Even when graduate pharmacists are employed, they often have little opportunity to perform above the level of salesperson, simply issuing medicines and collecting payment. For example, several hospital pharmacies in Mumbai, India, are open 24 hours per day for 365 days per year but only to function as retail outlets selling medicines to outpatients or to relatives of inpatients who then hand over the medicines to the nursing staff for administration.
Conclusions
Evidence is as important in the developing world as it is in the developed world. Poverty comes in many forms. While the most noticed are famine and poor housing, both potent killers, medical and knowledge poverty are also significant. Evidence-based practice is one of the ways in which these problems can be minimized. Potentially, one of the greatest benefits of the internet is the possibility of ending knowledge poverty and in turn influencing the factors that undermine wellbeing. Essential drugs programs have been a major step in ensuring that the maximum number benefit from effective drug therapy for disease.
See also
Essential medicines
WHO Model List of Essential Medicines
Department of Essential Drugs and Medicines
Campaign for Access to Essential Medicines
Evidence-based practice
Universities Allied for Essential Medicines
References
Useful sources of information
The following is a list of useful publications from the WHO Department of Essential Drugs and Medicines Policy about essential drugs programs.
General publications
Essential Drugs Monitor - periodical issued twice a year, covering drug policy, research, rational drug use and recent publications.
WHO Action Programme on Essential Drugs in the South-East Asia Region - report on an Intercountry Consultative Meeting, New Delhi, 4–8 March 1991. 49 pages, ref no SEA/Drugs/83 Rev.1.
National drug policy
Report of the WHO Expert Committee on National Drug Policies - contribution to updating the WHO Guidelines for Developing Drug Policies. Geneva. 19–23 June 1995. 78 pages, ref no WHO/DAP/95.9.
Guidelines for Developing National Drug Policies - 1988, 52 pages, .
Indicators for Monitoring National Drug Policies - P Brudon-Jakobowicz, JD Rainhorn, MR Reich, 1994, 205 pages, order no 1930066.
Selection and use
Rational Drug Use: consumer education and information - DA Fresle, 1996, 50 pages, ref no DAP/MAC/(8)96.6.
Estimating Drug Requirements: a practical manual - 1988, 136 pages, ref no WHO/DAP/88.2.
The Use of Essential Drugs. Model List of essential drugs - updated every two years. Currently 14th edition, 2005. The list is available at: www.who.int/medicines
Drugs Used in Sexually Transmitted Diseases and HIV Infection - 1995, 97 pages, .
Drugs Used in Parasitic Diseases (2e) - 1995, 146 pages, .
Drugs Used in Mycobacterial Diseases - 1991, 40 pages, .
Who Model Prescribing Information: Drugs Used in Anaesthesia - 1989, 53 pages, .
Guidelines for Safe Disposal of Unwanted Pharmaceuticals In and After Emergencies - ref no WHO/EDM/PAR/99.4.
Supply and marketing
Guidelines for Drug Donations - interagency guidelines, revised 1999. Ref no WHO/EDM/PAR/99.4.
Operational Principles for Good Pharmaceutical Procurement - Essential Drugs and Medicines Policy / Interagency Pharmaceutical Coordination Group, Geneva, 1999.
Managing Drug Supply - Management Sciences for Health in collaboration with WHO, 1997, 832 pages, .
Ethical Criteria for Medicinal Drug Promotion - 1988, 16 pages, .
Quality assurance
WHO/UNICEF Study on the Stability of Drugs During International Transport - 1991, 68 pages, ref no WHO/DAP/91.1.
Human resources and training
The Role of the Pharmacist in the Health Care System - 1994, 48 pages, ref no WHO/PHARM 94.569.
Guide to Good Prescribing - TPGM de Vries, RH Henning, HV Hogerzeil, DA Fresle, 1994, 108 pages, order no. 1930074. Free to developing countries.
Developing Pharmacy Practice: a Focus on Patient Care - 2006, 97 pages, World Health Organization (WHO) and International Pharmaceutical Federation (FIP).
Research
No 1 Injection Practices Research - 1992, 61 pages, ref no WHO/DAP92.9.
No 3 Operational Research on the Rational Use of Drugs - PKM Lunde, G Tognoni, G Tomson, 1992, 38 pages, ref no WHO/DAP/92.4.
No 24 Public Education in Rational Drug Use: a global survey - 1997, 75 pages, ref no WHO/DAP/97.5.
No 25 Comparative Analysis of National Drug Policies - Second Workshop, Geneva, 10–13 June 1996. 1997, 114 pages, ref no WHO/DAP/97.6.
No 7 How to Investigate Drug Use in Health Facilities: selected drug use indicators - 1993, 87 pages, order no 1930049.
External links
WHO Medicines Policy and Standards, Technical Cooperation for Essential Drugs and Traditional Medicine
WHO Global Medicines Strategy: Countries at the Core 2004-2007
Pharmacy
Evidence-based practices | Evidence-based pharmacy in developing countries | [
"Chemistry"
] | 3,966 | [
"Pharmacology",
"Pharmacy"
] |
8,860,434 | https://en.wikipedia.org/wiki/Coarse%20bubble%20diffusers | Coarse bubble diffusers are a pollution control technology used to aerate and or mix wastewater for sewage treatment.
Description
Coarse bubble diffusers produce 1/4 to 1/2 inch (6.4 to 13 mm) bubbles which rise rapidly from the floor of a wastewater treatment plant or sewage treatment plant tank. They are typically used in grit chambers, equalization basins, chlorine contact tanks, and aerobic digesters, and sometimes also in aeration tanks. Generally they are better at vertically "pumping" water than at mass transfer of oxygen. Coarse bubble diffusers typically provide half the mass transfer of oxygen as compared to fine bubble diffusers, given the same air volume.
Application
Often in non-Newtonian or pseudoplastic fluids, such as a digester with high solids concentration, it does make sense to use coarse bubble diffusers rather than fine bubble diffusers, due to the larger bubbles' ability to shear through more viscous wastewater.
However, over the past two decades, coarse bubble diffusers have been used less frequently, primarily due to the ever-increasing cost of energy and the availability of more reliable, highly efficient fine bubble diffusers. Manufacturers of diffused aeration systems claim that converting from a coarse bubble to a fine bubble system should yield roughly 50 percent energy cost savings. Specifically, in aeration tanks, a system that uses coarse bubble diffusers requires 30 to 40 percent more process air than a fine bubble diffused air system to provide the same level of treatment. The exception is in secondary treatment (or side-processing) phases: in these processing tanks, floc particles, sediment and carbonate buildup tend to plug or clog the small air-release openings of fine bubble diffusers, so fine bubble diffusers lose their advantage. For such duties, coarse bubble diffusers remain the mainstay solution.
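As a back-of-the-envelope illustration of the air-volume figures quoted above, the sketch below compares the process air a coarse bubble system would need against a fine bubble system. The airflow value is hypothetical, and blower energy is assumed, very roughly, to scale with airflow; the manufacturers' 50 percent savings claim reflects factors beyond airflow alone.

```python
def coarse_bubble_airflow(fine_bubble_airflow: float, extra_air_fraction: float = 0.35) -> float:
    """Coarse bubble aeration is quoted as needing 30-40% more process air
    than fine bubble for the same treatment; 35% is used here as a midpoint."""
    return fine_bubble_airflow * (1 + extra_air_fraction)

fine = 1_000.0                         # hypothetical fine-bubble air demand, m3/h
coarse = coarse_bubble_airflow(fine)
print(coarse)                          # 1350.0 m3/h for the coarse bubble system
print(f"air saved by fine bubble: {1 - fine / coarse:.0%}")  # ~26%
```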
These diffusers are typically made in the shape of a perforated rectangular pipe called a wide band, or as a cap covered with an elastomeric membrane. Other varieties of coarse bubble diffusers exist, though it is generally accepted that all of them perform similarly with respect to mass oxygen transfer. Among disc-shaped diffusers, the majority fail within one or two years because of challenges that include clogging, blowing off and cracking. Any coarse bubble diffuser that eliminates these problems would deliver large cost savings, not only in product replacement but also in the system downtime needed to exchange them. This is a motivating factor for budget-sensitive operators at municipal wastewater treatment plants.
See also
List of waste-water treatment technologies
References
Sewerage
Water treatment
Environmental engineering
Water technology | Coarse bubble diffusers | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 544 | [
"Water treatment",
"Chemical engineering",
"Water pollution",
"Sewerage",
"Civil engineering",
"Environmental engineering",
"Water technology"
] |
8,860,450 | https://en.wikipedia.org/wiki/Laurel%20water | Laurel water is distilled from the fresh leaves of the cherry laurel, and contains the poison prussic acid (hydrocyanic acid), along with other products carried over in the process.
Pharmacological usage
Historically, the water (Latin aqua laurocerasi) was used for asthma, coughs, indigestion and dyspepsia, and as a sedative narcotic; however, since it is effectively a solution of hydrogen cyanide, of uncertain strength, it would be extremely dangerous to attempt medication with laurel water. The Roman emperor Nero used cherry laurel water to poison the wells of his enemies.
References
Poisons
Prunus | Laurel water | [
"Environmental_science"
] | 141 | [
"Poisons",
"Toxicology"
] |
8,860,861 | https://en.wikipedia.org/wiki/PGP%20word%20list | The PGP Word List ("Pretty Good Privacy word list", also called a biometric word list for reasons explained below) is a list of words for conveying data bytes in a clear unambiguous way via a voice channel. They are analogous in purpose to the NATO phonetic alphabet, except that a longer list of words is used, each word corresponding to one of the 256 distinct numeric byte values.
History and structure
The PGP Word List was designed in 1995 by Patrick Juola, a computational linguist, and Philip Zimmermann, creator of PGP. The words were carefully chosen for their phonetic distinctiveness, using genetic algorithms to select lists of words that had optimum separations in phoneme space. The candidate word lists were randomly drawn from Grady Ward's Moby Pronunciator list as raw material for the search, successively refined by the genetic algorithms. The automated search converged to an optimized solution in about 40 hours on a DEC Alpha, a particularly fast machine in that era.
The Zimmermann–Juola list was originally designed to be used in PGPfone, a secure VoIP application, to allow the two parties to verbally compare a short authentication string to detect a man-in-the-middle attack (MiTM). It was called a biometric word list because the authentication depended on the two human users recognizing each other's distinct voices as they read and compared the words over the voice channel, binding the identity of the speaker with the words, which helped protect against the MiTM attack. The list can be used in many other situations where a biometric binding of identity is not needed, so calling it a biometric word list may be imprecise. Later, it was used in PGP to compare and verify PGP public key fingerprints over a voice channel. This is known in PGP applications as the "biometric" representation. When it was applied to PGP, the list of words was further refined, with contributions by Jon Callas. More recently, it has been used in Zfone and the ZRTP protocol, the successor to PGPfone.
The list is actually composed of two lists, each containing 256 phonetically distinct words, in which each word represents a different byte value between 0 and 255. Two lists are used because reading aloud long random sequences of human words usually risks three kinds of errors: 1) transposition of two consecutive words, 2) duplicate words, or 3) omitted words. To detect all three kinds of errors, the two lists are used alternately for the even-offset bytes and the odd-offset bytes in the byte sequence. Each byte value is actually represented by two different words, depending on whether that byte appears at an even or an odd offset from the beginning of the byte sequence. The two lists are readily distinguished by the number of syllables; the even list has words of two syllables, the odd list has three. The two lists have a maximum word length of 9 and 11 letters, respectively. Using a two-list scheme was suggested by Zhahai Stewart.
Word lists
Here are the two lists of words as presented in the PGPfone Owner's Manual.
Examples
Each byte in a bytestring is encoded as a single word. A sequence of bytes is rendered in network byte order, from left to right. For example, the leftmost (i.e. byte 0) is considered "even" and is encoded using the PGP Even Word table. The next byte to the right (i.e. byte 1) is considered "odd" and is encoded using the PGP Odd Word table. This process repeats until all bytes are encoded. Thus, "E582" produces "topmost Istanbul", whereas "82E5" produces "miser travesty".
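A minimal sketch of this encoding rule, in Python, is shown below; the two lookup tables are deliberately partial and hypothetical, containing only the byte values used in the example above (the real PGP lists define all 256 entries in each).

```python
# Hypothetical, partial tables: only the byte values from the example above
# are filled in; the real PGP even and odd lists each cover all 256 values.
EVEN_WORDS = {0xE5: "topmost", 0x82: "miser"}      # two-syllable list
ODD_WORDS = {0xE5: "travesty", 0x82: "Istanbul"}   # three-syllable list

def to_pgp_words(data: bytes) -> str:
    """Encode a byte string: even offsets use the even (two-syllable) list,
    odd offsets use the odd (three-syllable) list."""
    words = []
    for offset, value in enumerate(data):
        table = EVEN_WORDS if offset % 2 == 0 else ODD_WORDS
        words.append(table[value])
    return " ".join(words)

print(to_pgp_words(bytes.fromhex("E582")))  # topmost Istanbul
print(to_pgp_words(bytes.fromhex("82E5")))  # miser travesty
```

Because the two tables alternate by offset, a dropped, duplicated, or swapped word breaks the expected syllable pattern and is easy to spot when reading the words back.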
A PGP public key fingerprint that displayed in hexadecimal as
E582 94F2 E9A2 2748 6E8B
061B 31CC 528F D7FA 3F19
would display in PGP Words (the "biometric" fingerprint) as
topmost Istanbul Pluto vagabond treadmill Pacific brackish dictator goldfish Medusa
afflict bravado chatter revolver Dupont midsummer stopwatch whimsical cowbell bottomless
The order of bytes in a bytestring depends on endianness.
Other word lists for data
There are several other word lists for conveying data in a clear unambiguous way via a voice channel:
the NATO phonetic alphabet maps individual letters and digits to individual words
the S/KEY system maps 64 bit numbers to 6 short words of 1 to 4 characters each from a publicly accessible 2048-word dictionary. The same dictionary is used in RFC 1760 and RFC 2289.
the Diceware system maps five base-6 random digits (almost 13 bits of entropy) to a word from a dictionary of 7,776 distinct words.
the Electronic Frontier Foundation has published a set of improved word lists based on the same concept
FIPS 181: Automated Password Generator converts random numbers into somewhat pronounceable "words".
mnemonic encoding converts 32 bits of data into 3 words from a vocabulary of 1626 words.
what3words encodes geographic coordinates in 3 dictionary words.
the BIP39 standard permits encoding a cryptographic key of fixed size (128 or 256 bits, usually the unencrypted master key of a Cryptocurrency wallet) into a short sequence of readable words known as the seed phrase, for the purpose of storing the key offline. This is used in cryptocurrencies such as Bitcoin or Monero.
Like the PGP word list, the Bytewords standard maps each possible byte to a word. There is only one list, rather than two. The words are uniformly four letters long and can be uniquely identified by their first and last letters.
References
This article incorporates material that is copyrighted by PGP Corporation and has been licensed under the GNU Free Documentation License. (per Jon Callas, CTO, CSO PGP Corporation, 4-Jan-2007)
Spelling alphabets
Binary-to-text encoding formats
Military communications
Cryptography
OpenPGP | PGP word list | [
"Mathematics",
"Engineering"
] | 1,262 | [
"Cybersecurity engineering",
"Telecommunications engineering",
"Cryptography",
"Applied mathematics",
"Military communications"
] |
8,861,079 | https://en.wikipedia.org/wiki/Euler%27s%20continued%20fraction%20formula | In the analytic theory of continued fractions, Euler's continued fraction formula is an identity connecting a certain very general infinite series with an infinite continued fraction. First published in 1748, it was at first regarded as a simple identity connecting a finite sum with a finite continued fraction in such a way that the extension to the infinite case was immediately apparent. Today it is more fully appreciated as a useful tool in analytic attacks on the general convergence problem for infinite continued fractions with complex elements.
The original formula
Euler derived the formula as
a_0 + a_0 a_1 + a_0 a_1 a_2 + \cdots + a_0 a_1 a_2 \cdots a_n = \cfrac{a_0}{1 - \cfrac{a_1}{1 + a_1 - \cfrac{a_2}{1 + a_2 - \cfrac{\ddots}{\ddots \cfrac{a_{n-1}}{1 + a_{n-1} - \cfrac{a_n}{1 + a_n}}}}}},
connecting a finite sum of products with a finite continued fraction.
The identity is easily established by induction on n, and is therefore applicable in the limit: if the expression on the left is extended to represent a convergent infinite series, the expression on the right can also be extended to represent a convergent infinite continued fraction.
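As a quick numerical illustration of the finite identity (added here, not part of the original text), the following Python sketch evaluates both sides for arbitrary complex values a0, ..., an; the variable names and the choice of eight random terms are illustrative assumptions.

```python
import random

def sum_of_products(a):
    """a0 + a0*a1 + a0*a1*a2 + ... + a0*a1*...*an."""
    total, prod = 0, 1
    for ai in a:
        prod *= ai
        total += prod
    return total

def euler_continued_fraction(a):
    """a0 / (1 - a1/(1 + a1 - a2/(1 + a2 - ... - an/(1 + an)))),
    evaluated from the innermost denominator outward."""
    d = 1 + a[-1]
    for k in range(len(a) - 2, 0, -1):
        d = 1 + a[k] - a[k + 1] / d
    return a[0] / (1 - a[1] / d)

a = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(8)]
print(abs(sum_of_products(a) - euler_continued_fraction(a)))  # ~1e-15
```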
This is written more compactly using generalized continued fraction notation:
Euler's formula
If the ri are complex numbers and x is defined by
x = 1 + \sum_{i=1}^{\infty} r_1 r_2 \cdots r_i = 1 + \sum_{i=1}^{\infty} \left( \prod_{j=1}^{i} r_j \right),
then the following equality can be proved by induction:
x = \cfrac{1}{1 - \cfrac{r_1}{1 + r_1 - \cfrac{r_2}{1 + r_2 - \cfrac{r_3}{1 + r_3 - \ddots}}}}.
Here equality is to be understood as equivalence, in the sense that the n'th convergent of each continued fraction is equal to the n'th partial sum of the series shown above. So if the series shown is convergent – or uniformly convergent, when the ri's are functions of some complex variable z – then the continued fractions also converge, or converge uniformly.
Proof by induction
Theorem: Let be a natural number. For complex values ,
and for complex values ,
Proof: We perform a double induction. For , we have
and
Now suppose both statements are true for some .
We have
where
by applying the induction hypothesis to .
But if implies implies , contradiction. Hence
completing that induction.
Note that for ,
if , then both sides are zero.
Using
and ,
and applying the induction hypothesis to the values ,
completing the other induction.
As an example, the expression can be rearranged into a continued fraction.
This can be applied to a sequence of any length, and will therefore also apply in the infinite case.
Examples
The exponential function
The exponential function ex is an entire function with a power series expansion that converges uniformly on every bounded domain in the complex plane.
The application of Euler's continued fraction formula is straightforward:
Applying an equivalence transformation that consists of clearing the fractions this example is simplified to
and we can be certain that this continued fraction converges uniformly on every bounded domain in the complex plane because it is equivalent to the power series for ex.
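A short numerical check (added here as an illustration) makes the convergence concrete. The sketch below evaluates a truncation of the cleared-fraction continued fraction for ex in the form usually quoted for this example, which is assumed here since the displayed equations are not reproduced above, and compares it with the library exponential; the truncation depth of 40 is an arbitrary choice.

```python
import math

def exp_cf(x: float, depth: int = 40) -> float:
    """Truncation of 1/(1 - x/(x+1 - x/(x+2 - 2x/(x+3 - 3x/(x+4 - ...))))),
    evaluated from the innermost partial denominator outward."""
    d = x + depth
    for k in range(depth - 1, 0, -1):
        d = x + k - k * x / d   # d_k = (x + k) - k*x / d_{k+1}
    return 1 / (1 - x / d)

print(exp_cf(1.0), math.exp(1.0))    # both close to 2.718281828...
print(exp_cf(-1.0), math.exp(-1.0))  # both close to 0.367879441...
```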
The natural logarithm
The Taylor series for the principal branch of the natural logarithm in the neighborhood of 1 is well known:
This series converges when |x| < 1 and can also be expressed as a sum of products:
Applying Euler's continued fraction formula to this expression shows that
and using an equivalence transformation to clear all the fractions results in
This continued fraction converges when |x| < 1 because it is equivalent to the series from which it was derived.
The trigonometric functions
The Taylor series of the sine function converges over the entire complex plane and can be expressed as the sum of products.
Euler's continued fraction formula can then be applied
An equivalence transformation is used to clear the denominators:
The same argument can be applied to the cosine function:
The inverse trigonometric functions
The inverse trigonometric functions can be represented as continued fractions.
An equivalence transformation yields
The continued fraction for the inverse tangent is straightforward:
A continued fraction for
We can use the previous example involving the inverse tangent to construct a continued fraction representation of π. We note that
And setting x = 1 in the previous result, we obtain immediately
The hyperbolic functions
Recalling the relationship between the hyperbolic functions and the trigonometric functions,
And that the following continued fractions are easily derived from the ones above:
The inverse hyperbolic functions
The inverse hyperbolic functions are related to the inverse trigonometric functions similar to how the hyperbolic functions are related to the trigonometric functions,
And these continued fractions are easily derived:
See also
Gauss's continued fraction
Engel expansion
List of topics named after Leonhard Euler
References
Continued fractions
Leonhard Euler | Euler's continued fraction formula | [
"Mathematics"
] | 849 | [
"Continued fractions",
"Number theory"
] |
8,861,250 | https://en.wikipedia.org/wiki/Annual%20Review%20of%20Ecology%2C%20Evolution%2C%20and%20Systematics | The Annual Review of Ecology, Evolution, and Systematics is an annual scientific journal published by Annual Reviews. The journal was established in 1970 as the Annual Review of Ecology and Systematics and changed its name beginning in 2003. It publishes invited review articles on topics considered to be timely and important in the fields of ecology, evolutionary biology, and systematics. Journal Citation Reports gave the journal a 2023 impact factor of 11.2, ranking it third of 195 journals in the "Ecology" category and third of 54 journals in "Evolutionary Biology". It is currently published as open access under the Subscribe to Open model.
History
The Annual Review of Ecology and Systematics was first published in 1970, with Richard F. Johnston as its first editor. In 1975 it began publishing biographies of notable ecologists in the prefatory chapter. In 2003, its name was changed to its current form, the Annual Review of Ecology, Evolution, and Systematics.
It defines its scope as covering significant developments in the field of ecology, evolution, and systematics of all life on earth. This includes reviews about molecular evolution, phylogeny, speciation, population dynamics, conservation biology, environmental resource management, and the study of invasive species.
As of 2024, Journal Citation Reports gave the journal a 2023 impact factor of 11.2, ranking it third of 195 journals in the "Ecology" category and third of 54 journals in "Evolutionary Biology".
It is abstracted and indexed in Scopus, Science Citation Index Expanded, Aquatic Sciences and Fisheries Abstracts, BIOSIS, and Academic Search, among others.
Editorial processes
The Annual Review of Ecology, Evolution, and Systematics is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.
Editors of volumes
Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death.
Richard F. Johnston (1970–1991)
Daphne Gail Fautin (1992–2001)
Douglas J. Futuyma (2002–present)
Current editorial committee
As of 2022, the editorial committee consists of the editor and the following members:
H. Bradley Shaffer
Daniel Simberloff
Judith Bronstein
Aimée Classen
Kathleen Donohue
James A. Estes
Anjali Goswami
Michael Turelli
Stuart West
See also
List of biology journals
List of environmental journals
References
Ecology journals
Academic journals established in 1970
Ecology, Evolution, and Systematics
Annual journals
English-language journals
Evolutionary biology journals
Systematics journals | Annual Review of Ecology, Evolution, and Systematics | [
"Biology",
"Environmental_science"
] | 667 | [
"Environmental science journals",
"Systematics journals",
"Taxonomy (biology)",
"Ecology journals"
] |
8,861,328 | https://en.wikipedia.org/wiki/Geek | The word geek is a slang term originally used to describe eccentric or non-mainstream people; in current use, the word typically connotes an expert or enthusiast obsessed with a hobby or intellectual pursuit. In the past, it had a generally pejorative meaning of a "peculiar person, especially one who is perceived to be overly intellectual, unfashionable, boring, or socially awkward". In the 21st century, it was reclaimed and used by many people, especially members of some fandoms, as a positive term.
Some use the term self-referentially without malice or as a source of pride, often referring simply to "someone who is interested in a subject (usually intellectual or complex) for its own sake".
Etymology
The word comes from English dialect geek or geck (meaning a "fool" or "freak"; from Middle Low German Geck). Geck is a standard term in modern German and means "fool" or "fop". The root also survives in the Dutch and Afrikaans adjective gek ("crazy"), as well as some German dialects, like the Alsatian word Gickeleshut ("jester's hat"; used during carnival). In 18th century Austria, Gecken were freaks on display in some circuses. In 19th century North America, the term geek referred to a performer in a geek show in a circus, traveling carnival or travelling funfair sideshows (see also freak show). The 1976 edition of the American Heritage Dictionary included only the definition regarding geek shows. This is the sense of "geek" in William Lindsay Gresham's 1946 novel Nightmare Alley, twice adapted for the screen in 1947 and 2021.
Definitions
The 1975 edition of the American Heritage Dictionary, published a decade before the Digital Revolution, gave only one definition: "Geek [noun, slang]. A carnival performer whose act usually consists of biting the head off a live chicken or snake." The tech revolution found new uses for this word, but it still often conveys a derogatory sting. In 2017, Dictionary.com gave five definitions, the fourth of which is "a carnival performer who performs sensationally morbid or disgusting acts, as biting off the head of a live chicken."
The term nerd has a similar, practically synonymous meaning as geek, but many choose to identify different connotations among these two terms, although the differences are disputed. In a 2007 interview on The Colbert Report, Richard Clarke said the difference between nerds and geeks is "geeks get it done" or "ggid". Julie Smith defined a geek as "a bright young man turned inward, poorly socialized, who felt so little kinship with his own planet that he routinely traveled to the ones invented by his favorite authors, who thought of that secret, dreamy place his computer took him to as cyberspace—somewhere exciting, a place more real than his own life, a land he could conquer, not a drab teenager's room in his parents' house."
Impact
Technologically oriented geeks, in particular, now exert a powerful influence over the global economy and society. Whereas previous generations of geeks tended to operate in research departments, laboratories and support functions, now they increasingly occupy senior corporate positions, and wield considerable commercial and political influence. When U.S. President Barack Obama met with Facebook's Mark Zuckerberg and the CEOs of the world's largest technology firms at a private dinner in Woodside, California on February 17, 2011, New York magazine ran a story titled "The world's most powerful man meets President Obama". At the time, Zuckerberg's company had grown to over one billion users.
According to Mark Roeder the rise of the geek represents a new phase of human evolution. In his book, Unnatural Selection: Why The Geeks Will Inherit The Earth he suggests that "the high-tech environment of the Anthropocene favours people with geek-like traits, many of whom are on the autism spectrum, ADHD, or dyslexia. Previously, such people may have been at a disadvantage, but now their unique cognitive traits enable some of them to resonate with the new technological zeitgeist and become very successful."
The Economist magazine observed, on June 2, 2012, "Those square pegs (geeks) may not have an easy time in school. They may be mocked by jocks and ignored at parties. But these days no serious organisation can prosper without them."
Fashion
"Geek chic" refers to a minor fashion trend that arose in the mid 2000s (decade), in which young people adopted "geeky" fashions, such as oversized black horn-rimmed glasses or browline glasses, suspenders/braces, and capri pants. The glasses quickly became the defining aspect of the trend, with the media identifying various celebrities as "trying geek" or "going geek" for wearing such glasses, such as David Beckham and Justin Timberlake. Meanwhile, in the sports world, many NBA players wore "geek glasses" during post-game interviews, drawing comparisons to Steve Urkel.
The term "geek chic" was appropriated by some self-identified "geeks" to refer to a new, socially acceptable role in a technologically advanced society.
See also
Akiba-kei and Otaku, Japanese slang
Anorak and boffin, British slang
Battleboarding
Dweeb
Furry
Gamer
Gamer girl
Geek Code
Geek girl
Geek Pride Day
Geek rock
Geekcorps
Girl Geek Dinners
Greaser
Grok
Internet culture
Jock
Neckbeard (slang)
Nerd
Preppy
Reappropriation
Trekkie
Video game culture
References
Further reading
External links
Geek Culture: The Third Counter-Culture, an article discussing geek culture as a new kind of counter-culture.
The Origins of Geek Culture: Perspectives on a Parallel Intellectual Milieu, an article about geek culture seen in a cultural historical perspective.
Hoevel, Ann. "Are you a nerd or a geek?" CNN. December 2, 2010.
"Geek Chic", USA Today, October 22, 2003
"How Geek Chic Works"
2000s fashion
2010s fashion
2020s fashion
2000s slang
2010s slang
Computing culture
English-language slang
Epithets related to nerd culture
Fashion aesthetics
History of subcultures
Internet culture
Nerd culture
Stereotypes | Geek | [
"Technology"
] | 1,300 | [
"Computing culture",
"Computing and society"
] |
8,861,618 | https://en.wikipedia.org/wiki/Patterson%20power%20cell | The Patterson power cell is a cold fusion device invented by chemist James A. Patterson, which he claimed created 200 times more energy than it used. Patterson claimed the device neutralized radioactivity without emitting any harmful radiation. Cold fusion was the subject of an intense scientific controversy in 1989, before being discredited in the eyes of mainstream science. Physicist Robert L. Park describes the device as fringe science in his book Voodoo Science.
Company formed
In 1995, Clean Energy Technologies Inc. was formed to produce and promote the power cell.
Claims and observations
Patterson variously said it produced a hundred or two hundred times more power than it used. Representatives promoting the device at the Power-Gen '95 Conference said that an input of 1 watt would generate more than 1,000 watts of excess heat (waste heat). This supposedly happened as hydrogen or deuterium nuclei fuse together to produce heat through a form of low energy nuclear reaction. The by-products of nuclear fusion, e.g. a tritium nucleus and a proton or an 3He nucleus and a neutron, were not detected in any reliable way, leading experts to think that no such fusion was taking place.
It was further claimed that if radioactive isotopes such as uranium were present, the cell enabled the hydrogen nuclei to fuse with these isotopes, transforming them into stable elements and thus neutralizing the radioactivity. It was claimed that the transformation would be achieved without releasing any radiation to the environment and without expending any energy. A televised demonstration on June 11, 1997, on Good Morning America provided no proof for the claims. As of 2002, the neutralization of radioactive isotopes had only been achieved through intense neutron bombardment in a nuclear reactor or a large-scale high-energy particle accelerator, and at a large expense of energy.
Patterson has carefully distanced himself from the work of Fleischmann and Pons and from the label of "cold fusion", due to the negative connotations associated with them since 1989. Ultimately, this effort was unsuccessful, and not only did it inherit the label of pathological science, but it managed to make cold fusion look a little more pathological in the public eye. Some cold fusion proponents view the cell as a confirmation of their work, while critics see it as "the fringe of the fringe of cold fusion research", since it attempts to commercialize cold fusion on top of making bad science.
In 2002, John R. Huizenga, professor of nuclear chemistry at the University of Rochester, who was head of a government panel convened in 1989 to investigate the cold fusion claims of Fleischmann and Pons, and who wrote a book about the controversy, said "I would be willing to bet there's nothing to it", when asked about the Patterson Power Cell.
Replications
George H. Miley is a professor of nuclear engineering and a cold fusion researcher who claims to have replicated the Patterson power cell. During the 2011 World Green Energy Symposium, Miley stated that his device continuously produces several hundred watts of power. Earlier results by Miley have not convinced researchers.
On Good Morning America, Quintin Bowles, professor of mechanical engineering at the University of Missouri–Kansas City, claimed in 1996 to have successfully replicated the Patterson power cell. In the book Voodoo Science, Bowles is quoted as having stated: "It works, we just don't know how it works."
A replication has been attempted at Earthtech, using a CETI supplied kit. They were not able to replicate the excess heat.
References
Further reading
Bailey, Patrick and Fox, Hal (October 20, 1997). A review of the Patterson Power Cell. Retrieved November 19, 2011. An earlier version of this paper appears in: Energy Conversion Engineering Conference, 1997; Proceedings of the 32nd Intersociety Energy Conversion Engineering Conference. Publication Date: Jul 27 – Aug 1, 1997. Volume 4, pages 2289–2294. Meeting Date: July 27, 1997 – January 8, 1997. Location: Honolulu, HI, USA.
Ask the experts, "What is the current scientific thinking on cold fusion? Is there any possible validity to this phenomenon?", Scientific American, October 21, 1999,(Patterson is mentioned on page 2). Retrieved December 5, 2007
Chemical equipment
Electrochemistry
Electrolysis
Fringe physics
Cold fusion | Patterson power cell | [
"Physics",
"Chemistry",
"Engineering"
] | 877 | [
"Nuclear physics",
"Chemical equipment",
"Electrochemistry",
"Cold fusion",
"nan",
"Electrolysis",
"Nuclear fusion"
] |
8,861,687 | https://en.wikipedia.org/wiki/Year%20Zero%20%28album%29 | Year Zero is the fifth studio album by the American industrial rock band Nine Inch Nails, released by Interscope Records on April 17, 2007. Conceived while touring in support of the band's previous album, With Teeth (2005), the album was recorded in late 2006. It was produced by Trent Reznor and Atticus Ross, and was the band's first studio album since 1994's The Downward Spiral that was not co-produced by long-time collaborator Alan Moulder. It was the band's last album for Interscope, following Reznor's departure the same year due to a dispute regarding overseas pricing.
In contrast to the introspective style of songwriting featured on the band's previous work, the record is a concept album that criticizes contemporary policies of the United States government by presenting a dystopian vision of the year 2022. It was part of a larger Year Zero project, which included a remix album, an alternate reality game of the same name, as well as a conceived television or film adaptation. The game expanded upon the album's storyline, using websites, pre-recorded phone messages, murals, among other media in promotion of the project. The album was promoted by two singles: "Survivalism" and "Capital G".
Year Zero received positive reviews from critics, who complimented its concept and production, as well as the accompanying alternate reality game. The album reached number 2 in the United States, number 6 in the United Kingdom, and the top 10 in some other countries.
Recording
In a 2005 interview with Kerrang!, Trent Reznor raised his intentions to write material for a new release while on tour promoting With Teeth. He reportedly began work on the new album by September 2006. Reznor devised much of the album's musical direction on his laptop. Reznor told Kerrang! in a later interview, "When I was on the Live: With Teeth tour, to keep myself busy I just really hunkered down and was working on music the whole time, so this kept me in a creative mode and when I finished the tour I felt like I wasn't tired and wanted to keep at it."
The limitations of devising the album's musical direction on a tour bus forced Reznor to work differently from usual. Reznor said, "I didn't have guitars around because it was too much hassle... It was another creative limitation... If I were in my studio, I would have done things the way I normally do them. But not having the ability to do that forced me into trying some things that were fun to do."
By the end of the tour, Reznor began work on the album's lyrical concepts, attempting to break away from his typically introspective approach. Reznor drew inspiration from his concern at the state of affairs in the United States and at what he envisioned as the country's political, spiritual, and social direction. Year Zero was mixed in January 2007, and Reznor stated on his blog that the album was finished as of February 5. The album's budget was a reported US$2 million, but since Reznor composed most of the album himself on his laptop and in his home studio, much of the budget instead went toward the extensive accompanying promotional campaign.
A song cut from the album included vocal work by Queens of the Stone Age's Josh Homme. The same year, Reznor contributed vocals to their song "Era Vulgaris", which was also cut from the album of the same name.
Composition
The album's music features the styles of industrial rock, electro-industrial, electronic, and digital hardcore. Reznor called Year Zero a "shift in direction" in that it "doesn't sound like With Teeth". He also said that when he finishes a new album, he has to "go into battle with the people whose job it is to figure out how to sell the record. The only time that didn't happen was [for] With Teeth. This time, however, [he was] expecting an epic struggle. [Year Zero] is not a particularly friendly record and it certainly doesn't sound like anything else out there right now."
Fifteen original tracks were considered for inclusion on the album, which Reznor described as "Highly conceptual. Quite noisy. Fucking cool." Reznor also described the album as a "collage of sound type of thing", citing musical inspiration from early Public Enemy records, specifically the production techniques of The Bomb Squad. Most of Year Zero's musical elements were created by Reznor solely on his laptop, as opposed to the instrument-heavy With Teeth. AllMusic's review described the album's laptop-mixed sound: "guitars squall against glitches, beeps, pops, and blotches of blurry sonic attacks. Percussion looms large, distorted, organic, looped, screwed, spindled and broken." Many reviews of the album compared the album's electronic sound to earlier Nine Inch Nails releases such as The Downward Spiral and The Fragile, while contrasting its heavily modified sounds to the more "organic" approach of With Teeth. Many critics also commented on the album's overall tone, including descriptions such as "lots of silver and grey ambience" and reference to the album's "oblique tone". The New York Times review described the album's sound by saying "Hard beats are softened with distortion, static cushions the tantrums, sneaky bass lines float beneath the surface." The article went on to describe individual tracks: "And as usual the music is packed with details: "Meet Your Master" goes through at least three cycles of decay and rebirth; part of the fun of "The Warning" is tracking the ever-mutating timbres."
Many of the songs on the album feature an extended instrumental ending, which encompasses the entire second half of the three-minute long "The Great Destroyer". The album was co-produced by Reznor and Atticus Ross, mixed by long-time collaborator Alan Moulder, and mastered by Brian Gardner. The album features instrumental contributions by live band member Josh Freese and vocals by Saul Williams.
Themes
Nine Inch Nails' 2006 tour merchandise designs featured overt references to the United States military, which Reznor said "reflect[ed] future directions". Reznor later described Year Zero as "the soundtrack to a movie that doesn't exist". The album criticizes the American government's policies, and "could be about the end of the world". Reznor specifically cited what he labeled as the "erosion of freedoms" and "the way that we treat the rest of the world and our own citizens". Reznor had previously called the results of the 2004 US election "one step closer to the end of the world."
Even though the fictional story begins in January 2007, the timeline of the album and alternate reality game mentions historical events, such as the September 11 attacks and the Iraq War. From there, fictional events lead to worldwide chaos, including bioterrorism attacks, the United States engaging in nuclear war with Iran, and the elimination of American civil liberties at the hands of the fictional government agency the Bureau of Morality. Although the story is fictional, a columnist for the Hartford Courant commented, "What's scary is that this doesn't seem as far-fetched as it should, given recent revelations about the FBI's abuse of the Patriot Act and the dissent-equals-disloyalty double-speak coming out of Washington in recent years."
Artwork
All of the artwork for Year Zero was created by Rob Sheridan, art director for Nine Inch Nails, who is also credited for artwork on With Teeth, among other Nine Inch Nails releases since 2000. The album features a thermo-chrome heat-sensitive CD face which appears black when first opened, but reveals a black binary code on a white background when heat is generated from the album being played. The binary sequence translates to "exterminal.net", the address of a website involved in the alternate reality game. Reznor displayed displeasure at the extra dollars added to the CD's price in Australia for the thermo-coating.
Included with the album is a small insert that is a warning from the fictional United States Bureau of Morality (USBM), with a phone number to report people who have "engaged in subversive acts". When the number is called, a recording from the USBM is played, claiming "By calling this number, you and your family are implicitly pleading guilty to the consumption of anti-American media and have been flagged as potential militants."
It was named one of the best album covers of 2007 by Rolling Stone.
Promotion
While work continued on the album, Reznor hinted in an interview that it was "part of a bigger picture of a number of things I'm working on". In February 2007 fans discovered that a new Nine Inch Nails tour t-shirt contained highlighted letters that spelled out the words "I am trying to believe". This phrase was registered as a website URL, and soon several related websites were also discovered in the IP range, all describing a dystopian vision of the fictional "Year 0". It was later reported that 42 Entertainment had created these websites to promote Year Zero as part of an alternate reality game.
The Year Zero story takes place in the United States in the year 2022; or "Year 0" according to the American government, being the year that America was reborn. The United States has suffered several major terrorist attacks, and in response the government has seized absolute control on the country and reverted to a Christian fundamentalist theocracy. The government maintains control of the populace through institutions such as the Bureau of Morality and the First Evangelical Church of Plano, as well as increased surveillance and the secret drugging of tap water with a mild sedative. In response to the increasing oppression of the government, several corporate, government, and subversive websites were transported back in time to the present by a group of scientists working clandestinely against the authoritarian government. The websites-from-the-future were sent to the year 2007 to warn the American people of the impending dystopian future and to prevent it from ever forming in the first place.
The Year Zero game consisted of an expansive series of websites, phone numbers, e-mails, videos, MP3s, murals, and other media that expanded upon the fictional storyline of the album. Each new piece of media contained various hints and clues to discover the next, relying on fan participation to discover each new facet of the expanding game. Rolling Stone described the fan involvement in this promotion as the "marketing team's dream". Reznor, however, argued that "marketing" was an inaccurate description of the game, and that it was "not some kind of gimmick to get you to buy a record – it is the art form".
Part of this promotional campaign involved USB drives that were left in concert venues for fans to find during Nine Inch Nails' 2007 European tour. During a concert in Lisbon, Portugal, a USB flash drive was found in a bathroom stall containing a high-quality MP3 of the track "My Violent Heart", a song from the then-unreleased album. Another USB drive was found at a concert in Barcelona, Spain, containing the track "Me, I'm Not". Messages found on the drives and tour clothing led to additional websites and images from the game, and the early release of several unheard songs from the album. Following the discovery of the USB drives, the high-quality audio files quickly circulated the internet. Owners of websites hosting the files soon received cease and desist orders from the Recording Industry Association of America, despite Interscope having sanctioned the viral campaign and the early release of the tracks. Reznor told The Guardian:
On February 22, 2007, a teaser trailer was released through the official Year Zero website. It featured a quick glimpse of a blue road sign that said "I AM TRYING TO BELIEVE", as well as a distorted glimpse of "The Presence" from the album cover. One frame in the teaser led fans to a URL containing the complete album cover. In March, the multitrack audio files of Year Zero's first single, "Survivalism", were released in GarageBand format for fan remixing. The multitrack files for "Capital G", "My Violent Heart" and "Me, I'm Not" were released the following month, and files for "The Beginning of the End", "Vessel" and "God Given" were released the month after that. In response to an early leak of the album, the entire album became available for streaming on Nine Inch Nails' MySpace page a week before the album's official release.
Tour
After taking a break from touring to complete work on Year Zero, the Nine Inch Nails live band embarked on a world tour in 2007 dubbed Performance 2007. The tour included the band's first performance in China. Reznor continued to tour with the same band he concluded the previous tour with: Aaron North, Jeordie White, Josh Freese, and Alessandro Cortini. The tour spanned 91 dates across Europe, Asia, Australia, and Hawaii.
Between tour legs Nine Inch Nails gave a performance as part of the Year Zero game. A small group of fans received fictional in-game telephone-calls that invited them to a "resistance meeting" in a Los Angeles parking lot. Those who arrived were given "resistance kits", some of which contained cellphones that would later inform the participants of further details. After receiving instructions from the cellphones, fans who attended a fictional Art is Resistance meeting in Los Angeles were rewarded with an unannounced performance by Nine Inch Nails. The concert was cut short as the meeting was raided by a fictional SWAT team and the audience was rushed out of the building.
Release
Upon its release in April 2007, Year Zero sold over 187,000 copies in its first week. The album reached number two on the Billboard 200 and peaked in the top 10 in six other countries, including Australia, Canada and the United Kingdom. The album's lead single, "Survivalism" peaked at number 68 on the Billboard Hot 100, and topped the Modern Rock and Canadian singles charts. The second single, "Capital G", reached number six on the Modern Rock chart.
In a post on the official Nine Inch Nails website, Reznor condemned Universal Music Group, the parent company of Interscope Records, for its pricing and distribution plans for Year Zero. He wrote that he hated Interscope for setting the price of the album higher than usual, humorously labeling the company's retail pricing of Year Zero in Australia as "ABSURD", and concluding that "as a reward for being a 'true fan' you get ripped off." Reznor went on to say in later years the "climate" of record labels may have an increasingly ambivalent impact on consumers who buy music. Reznor's post, specifically his criticism of the recording industry at large, elicited considerable media attention. Reznor continued his attack on Universal Music Group during a September 2007 concert in Australia, where he urged fans to "steal" his music online instead of purchasing it legally. Reznor went on to encourage the crowd to "steal and steal and steal some more and give it to all your friends and keep on stealin'." Although Universal never replied publicly to the criticism, a spokesperson for the Australian Music Retailers Association said "It is the same price in Australia as it is in the US because of the extra packaging." Due to the pricing dispute, plans to release a "Capital G" maxi-single in Europe were scrapped. The track was instead released as a limited-edition single, without a "Halo number", unlike most official Nine Inch Nails releases.
Year Zero was the last Nine Inch Nails studio album released on Interscope. Reznor announced in October 2007 that Nine Inch Nails had fulfilled its contractual commitments to Interscope and could proceed "free of any recording contract with any label", effectively ending the band's relationship with its record label.
Related projects
A remix album, titled Year Zero Remixed, was released in November 2007. Due to the expiration of his contract with Interscope Records, the album's release, marketing, and promotion were completely in Reznor's control. The album features remixes from artists including The Faint, Ladytron, Bill Laswell, Saul Williams, Olof Dreijer of The Knife, and Sam Fogarino of Interpol. Reznor himself strongly supports fan-made remixes of songs from the album, as evidenced by his decision to upload every song in multi-track form to the then-newly launched Nine Inch Nails remix website. Instrumental versions of the songs on Year Zero are available at the site for download in multiple formats, including MP3, WAV, GarageBand, and Ableton Live formats.
A planned film adaption of Year Zero became a television project in 2007. Reznor met with various writers and pitched the idea to television networks. The 2007–2008 Writers Guild of America strike affected the pre-production stage. Nevertheless, Reznor commented in 2008 that the project is "still churning along", and that he had begun working with American film producer Lawrence Bender. In 2010, Reznor started developing the Year Zero miniseries with HBO and BBC Worldwide Productions. Reznor and Bender collaborated with Carnivàle writer Daniel Knauf to create the science fiction epic. When asked about the miniseries during an "Ask Me Anything" session on Reddit on November 13, 2012, Reznor said it was "currently in a holding state" and explained, "We [Reznor and Sheridan] didn't find the right match with a writer, and really have been avoiding doing what we should have done from the beginning: write it ourselves. [...] This project means a lot to me and will see the light of day in one form or another." In 2017, during an interview promoting new Nine Inch Nails EP Add Violence, Reznor said that "They got so far as hiring a writer for it, but then it fell to shit because we never had the right writer. I should have just done it [myself]."
Critical reception
Year Zero received generally favorable reviews from music critics, with an average rating of 76% based on 28 reviews on review aggregator Metacritic. Robert Christgau described Year Zero as Reznor's "most songful album", while Thomas Inskeep of Stylus magazine praised it as "one of the most forward-thinking 'rock' albums to come down the pike in some time". Rob Sheffield of Rolling Stone called the album Reznor's "strongest, weirdest and most complex record since The Downward Spiral", and concluded that "he's got his bravado back." Rolling Stone ranked it at number 21 on its "Top 50 Albums of 2007" list.
Several reviewers also commented on the accompanying alternate reality game. Ann Powers of the Los Angeles Times, praised the album and game concept as "a total marriage of the pop and gamer aesthetics that unlocks the rusty cages of the music industry and solves some key problems facing rock music as its cultural dominance dissolves into dust." In relation to the declining music industry, Joseph Jaffe of Brandweek commented that such "mysterious marketing measures [...] are what's desperately needed to gain attention in this uncertain era of distribution dilemmas and sagging sales", also commending acts such as Nine Inch Nails and Radiohead for being "more innovative than marketers". Entertainment Weekly gave the album a B+, comparing it to The X-Files and calling it "A sci-fi concept album whose end-of-days, paranoia-drenched storyline has been disseminated via the Internet". It also stated: "Amid its carefully calibrated sonic assaults, Year Zero has a number of tracks that will stop you in yours. Sometimes, it's a matter of dropping the volume [...] Even his use of electronics has shifted to a new level [...] Is the truth in here? Dunno, but Reznor's claim that 'I got my violence in high def ultra-realism' sounds like gospel to us."
On the fictional world depicted in the album and promotional campaign, the Cleveland Free Times commented that the album's fictionalized world and characters "often seemed heavy-handed and forced", but also conceded that "its clotted claustrophobia suited its subject matter". Ann Powers added, "The songs on 'Year Zero,' each from the perspective of a character or characters already existent in the ARG, draw a connection between the music fan's passionate identification with songs and the gamer's experience of becoming someone else online." In 2008, 42 Entertainment won two Webby Awards for its work on the Year Zero game, in the categories of "Integrated Campaigns" and "Other Advertising: Branded Content".
Track listing
Note: "Hyperpower!" is stylized in all caps.
Personnel
Credits adapted from the CD liner notes.
Trent Reznor – vocals, writer, instrumentation, production, engineering and art direction
Atticus Ross – production and engineering
William Artope – trumpet on "Capital G"
Brett Bachemin – engineering
Matt Demeritt – tenor sax on "Capital G"
Josh Freese – drums on "Hyperpower!" and "Capital G"
Jeff/Geoff Gallegos – brass and woodwind musical arrangement, baritone sax on "Capital G"
Brian Gardner – mastering
Steve Genewick – engineering assistant on "Hyperpower!"
Hydraulx – artwork
Elizabeth Lea – trombone on "Capital G"
Alan Mason – engineering
Alan Moulder – engineering and mixing
Rob Sheridan – artwork and art direction
Doug Trantow – engineering
Saul Williams – backing vocals on "Survivalism" and "Me, I'm Not"
Charts
Weekly charts
Year-end charts
Certifications
References
2007 albums
Albums produced by Atticus Ross
Albums produced by Trent Reznor
Dystopian music
Interscope Geffen A&M Records albums
Interscope Records albums
Multimedia works
Nine Inch Nails albums
Science fiction concept albums
Year Zero (game) | Year Zero (album) | [
"Technology"
] | 4,588 | [
"Multimedia",
"Multimedia works"
] |
8,862,061 | https://en.wikipedia.org/wiki/Natural-gas%20processing | Natural-gas processing is a range of industrial processes designed to purify raw natural gas by removing contaminants such as solids, water, carbon dioxide (CO2), hydrogen sulfide (H2S), mercury and higher molecular mass hydrocarbons (condensate) to produce pipeline quality dry natural gas for pipeline distribution and final use. Some of the substances which contaminate natural gas have economic value and are further processed or sold. Hydrocarbons that are liquid at ambient conditions of temperature and pressure (i.e., pentane and heavier) are called natural-gas condensate (sometimes also called natural gasoline or simply condensate).
Raw natural gas comes primarily from three types of wells: crude oil wells, gas wells, and condensate wells. Crude oil and natural gas are often found together in the same reservoir. Natural gas produced in wells with crude oil is generally classified as associated-dissolved gas as the gas had been associated with or dissolved in crude oil. Natural gas production not associated with crude oil is classified as “non-associated.” In 2009, 89 percent of U.S. wellhead production of natural gas was non-associated. Non-associated gas wells producing a dry gas in terms of condensate and water can send the dry gas directly to a pipeline or gas plant without undergoing any separation processing, allowing immediate use.
Natural-gas processing begins underground or at the well-head. In a crude oil well, natural gas processing begins as the fluid loses pressure and flows through the reservoir rocks until it reaches the well tubing. In other wells, processing begins at the wellhead; the composition of the extracted natural gas depends on the type, depth, and location of the underground deposit and the geology of the area.
Natural gas when relatively free of hydrogen sulfide is called sweet gas; natural gas that contains elevated hydrogen sulfide levels is called sour gas; natural gas, or any other gas mixture, containing significant quantities of hydrogen sulfide or carbon dioxide or similar acidic gases, is called acid gas.
Types of raw-natural-gas wells
Crude oil wells: Natural gas that comes from crude oil wells is typically called associated gas. This gas could exist as a separate gas cap above the crude oil in the underground reservoir or could be dissolved in the crude oil, ultimately coming out of solution as the pressure is reduced during production. Condensate produced from oil wells is often referred to as lease condensate.
Dry gas wells: These wells typically produce only raw natural gas that contains no condensate with little to no crude oil and are called non-associated gas. Condensate from dry gas is extracted at gas processing plants and is often called plant condensate.
Condensate wells: These wells typically produce raw natural gas along with natural gas liquid with little to no crude oil and are called non-associated gas. Such raw natural gas is often referred to as wet gas.
Coal seam wells: These wells typically produce raw natural gas from methane deposits in the pores of coal seams, often existing underground in a more concentrated state of adsorption onto the surface of the coal itself. Such gas is referred to as coalbed gas or coalbed methane (coal seam gas in Australia). Coalbed gas has become an important source of energy in recent decades.
Contaminants in raw natural gas
Raw natural gas typically consists primarily of methane (CH4) and ethane (C2H6), the shortest and lightest hydrocarbon molecules. It often also contains varying amounts of:
Heavier gaseous hydrocarbons: propane (C3H8), normal butane (n-C4H10), isobutane (i-C4H10) and pentanes. All of these are collectively referred to as Natural Gas Liquids or NGL and can be processed into finished by-products.
Liquid hydrocarbons (also referred to as casinghead gasoline or natural gasoline) and/or crude oil.
Acid gases: carbon dioxide (CO2), hydrogen sulfide (H2S) and mercaptans such as methanethiol (CH3SH) and ethanethiol (C2H5SH).
Other gases: nitrogen (N2) and helium (He).
Water: water vapor and liquid water. Also dissolved salts and dissolved gases (acids).
Mercury: minute amounts of mercury primarily in elemental form, but chlorides and other species are possibly present.
Naturally occurring radioactive material (NORM): natural gas may contain radon, and the produced water may contain dissolved traces of radium, which can accumulate within piping and processing equipment, rendering them radioactive over time.
Natural gas quality standards
Raw natural gas must be purified to meet the quality standards specified by the major pipeline transmission and distribution companies. Those quality standards vary from pipeline to pipeline and are usually a function of a pipeline system's design and the markets that it serves. In general, the standards specify that the natural gas:
Be within a specific range of heating value (caloric value). For example, in the United States, it should be about 1035 ± 5% BTU per cubic foot of gas at 1 atmosphere and 60 °F (41 MJ ± 5% per cubic metre of gas at 1 atmosphere and 15.6 °C). In the United Kingdom the gross calorific value must be in the range 37.0 – 44.5 MJ/m3 for entry into the National Transmission System (NTS).
Be delivered at or above a specified hydrocarbon dew point temperature (below which some of the hydrocarbons in the gas might condense at pipeline pressure, forming liquid slugs that could damage the pipeline). Hydrocarbon dew-point adjustment reduces the concentration of heavy hydrocarbons so no condensation occurs during the ensuing transport in the pipelines. In the UK the hydrocarbon dew point is defined as <-2 °C for entry into the NTS. The hydrocarbon dew point changes with the prevailing ambient temperature and therefore varies with the seasons.
The natural gas should:
Be free of particulate solids and liquid water to prevent erosion, corrosion or other damage to the pipeline.
Be dehydrated of water vapor sufficiently to prevent the formation of methane hydrates within the gas processing plant or subsequently within the sales gas transmission pipeline. A typical water content specification in the U.S. is that gas must contain no more than seven pounds of water per million standard cubic feet of gas. In the UK this is defined as <-10 °C @ 85barg for entry into the NTS.
Contain no more than trace amounts of components such as hydrogen sulfide, carbon dioxide, mercaptans, and nitrogen. The most common specification for hydrogen sulfide content is 0.25 grain H2S per 100 cubic feet of gas, or approximately 4 ppm. Specifications for CO2 typically limit the content to no more than two or three percent. In the UK hydrogen sulfide is specified ≤5 mg/m3 and total sulfur as ≤50 mg/m3, carbon dioxide as ≤2.0% (molar), and nitrogen as ≤5.0% (molar) for entry into the NTS.
Maintain mercury at less than detectable limits (approximately 0.001 ppb by volume) primarily to avoid damaging equipment in the gas processing plant or the pipeline transmission system from mercury amalgamation and embrittlement of aluminum and other metals.
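Taken together, these limits lend themselves to a simple automated screen. The sketch below (an illustration, not an industry tool) checks a gas analysis against a few of the U.S.-style figures quoted above; the dictionary field names, and the choice of a 2 percent CO2 ceiling from the "two or three percent" range, are assumptions made for the example.

```python
def pipeline_spec_violations(gas: dict) -> list:
    """Return a list of violations of typical U.S. pipeline-quality limits:
    heating value 1035 BTU/scf +/- 5%, H2S <= ~4 ppm, CO2 <= 2 mol%,
    water <= 7 lb per million standard cubic feet."""
    violations = []
    if not 1035 * 0.95 <= gas["heating_value_btu_per_scf"] <= 1035 * 1.05:
        violations.append("heating value outside 1035 BTU/scf +/- 5%")
    if gas["h2s_ppm"] > 4:
        violations.append("H2S above ~4 ppm (0.25 grain per 100 scf)")
    if gas["co2_mol_pct"] > 2.0:
        violations.append("CO2 above 2 mol%")
    if gas["water_lb_per_mmscf"] > 7:
        violations.append("water above 7 lb per MMscf")
    return violations

sample = {"heating_value_btu_per_scf": 1050, "h2s_ppm": 3.1,
          "co2_mol_pct": 1.4, "water_lb_per_mmscf": 6.5}
print(pipeline_spec_violations(sample) or "meets the typical limits checked")
```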
Description of a natural-gas processing plant
There are a variety of ways in which to configure the various unit processes used in the treatment of raw natural gas. The block flow diagram below is a generalized, typical configuration for the processing of raw natural gas from non-associated gas wells, showing how raw natural gas is processed into sales gas piped to the end-user markets and into various byproducts:
Natural-gas condensate
Sulfur
Ethane
Natural gas liquids (NGL): propane, butanes and C5+ (which is the commonly used term for pentanes plus higher molecular weight hydrocarbons)
Raw natural gas is commonly collected from a group of adjacent wells and is first processed in separator vessels at that collection point for removal of free liquid water and natural gas condensate. The condensate is usually then transported to an oil refinery and the water is treated and disposed of as wastewater.
The raw gas is then piped to a gas processing plant where the initial purification is usually the removal of acid gases (hydrogen sulfide and carbon dioxide). There are several processes available for that purpose as shown in the flow diagram, but amine treating is the process that was historically used. However, due to a range of performance and environmental constraints of the amine process, a newer technology based on the use of polymeric membranes to separate the carbon dioxide and hydrogen sulfide from the natural gas stream has gained increasing acceptance. Membranes are attractive since no reagents are consumed.
The acid gases, if present, are removed by membrane or amine treating and can then be routed into a sulfur recovery unit which converts the hydrogen sulfide in the acid gas into either elemental sulfur or sulfuric acid. Of the processes available for these conversions, the Claus process is by far the most well known for recovering elemental sulfur, whereas the conventional Contact process and the WSA (Wet sulfuric acid process) are the most used technologies for recovering sulfuric acid. Smaller quantities of acid gas may be disposed of by flaring.
The residual gas from the Claus process is commonly called tail gas and that gas is then processed in a tail gas treating unit (TGTU) to recover and recycle residual sulfur-containing compounds back into the Claus unit. Again, as shown in the flow diagram, there are a number of processes available for treating the Claus unit tail gas and for that purpose a WSA process is also very suitable since it can work autothermally on tail gases.
The next step in the gas processing plant is to remove water vapor from the gas using either regenerable absorption in liquid triethylene glycol (TEG), commonly referred to as glycol dehydration, deliquescent chloride desiccants, or a Pressure Swing Adsorption (PSA) unit, which is regenerable adsorption using a solid adsorbent. Other newer processes such as membranes may also be considered.
Mercury is then removed by using adsorption processes (as shown in the flow diagram) such as activated carbon or regenerable molecular sieves.
Although not common, nitrogen is sometimes removed and rejected using one of the three processes indicated on the flow diagram:
Cryogenic process (Nitrogen Rejection Unit), using low temperature distillation. This process can be modified to also recover helium, if desired (see also industrial gas).
Absorption process, using lean oil or a special solvent as the absorbent.
Adsorption process, using activated carbon or molecular sieves as the adsorbent. This process may have limited applicability because it is said to incur the loss of butanes and heavier hydrocarbons.
NGL fractionation train
The NGL fractionation process treats offgas from the separators at an oil terminal or the overhead fraction from a crude distillation column in a refinery. Fractionation aims to produce useful products including natural gas suitable for piping to industrial and domestic consumers; liquefied petroleum gases (propane and butane) for sale; and gasoline feedstock for liquid fuel blending. The recovered NGL stream is processed through a fractionation train consisting of up to five distillation towers in series: a demethanizer, a deethanizer, a depropanizer, a debutanizer and a butane splitter. The fractionation train typically uses a cryogenic low temperature distillation process involving expansion of the recovered NGL through a turbo-expander followed by distillation in a demethanizing fractionating column. Some gas processing plants use a lean oil absorption process rather than the cryogenic turbo-expander process.
The gaseous feed to the NGL fractionation plant is typically compressed to about 60 barg and 37 °C. The feed is cooled to -22 °C, by exchange with the demethanizer overhead product and by a refrigeration system and is split into three streams:
Condensed liquid passes through a Joule-Thomson valve reducing the pressure to 20 bar and enters the demethanizer as the lower feed at -44.7 °C.
Some of the vapour is routed through a turbo-expander and enters the demethanizer as the upper feed at -64 °C.
The remaining vapor is chilled by the demethanizer overhead product and Joule-Thomson cooling (through a valve) and enters the column as reflux at -96 °C.
The overhead product is mainly methane at 20 bar and -98 °C. This is heated and compressed to yield a sales gas at 20 bar and 40 °C. The bottom product is NGL at 20 barg which is fed to the deethanizer.
The overhead product from the deethanizer is ethane and the bottoms are fed to the depropanizer. The overhead product from the depropanizer is propane and the bottoms are fed to the debutanizer. The overhead product from the debutanizer is a mixture of normal and iso-butane, and the bottoms product is a C5+ gasoline mixture.
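The tower sequence described above can also be summarized programmatically. The sketch below is a simple illustration of the feed, overhead and bottoms chain; the column and product names follow the text, and the optional butane splitter is omitted.

```python
# Illustrative sketch of the NGL fractionation train: each column takes the
# previous column's bottoms as feed and removes one overhead product.
TRAIN = [
    ("demethanizer", "methane (recompressed as sales gas)"),
    ("deethanizer",  "ethane"),
    ("depropanizer", "propane"),
    ("debutanizer",  "mixed normal and iso-butane"),
]

feed = "recovered NGL stream"
for column, overhead in TRAIN:
    print(f"{column}: feed = {feed}; overhead = {overhead}")
    feed = f"bottoms from the {column}"
print(f"final bottoms ({feed}): C5+ gasoline mixture")
```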
The operating conditions of the vessels in the NGL fractionation train are typically as follows.
A typical composition of the feed and product is as follows.
Sweetening Units
The recovered streams of propane, butanes and C5+ may be "sweetened" in a Merox process unit to convert undesirable mercaptans into disulfides and, along with the recovered ethane, are the final NGL by-products from the gas processing plant. Currently, most cryogenic plants do not include fractionation for economic reasons, and the NGL stream is instead transported as a mixed product to standalone fractionation complexes located near refineries or chemical plants that use the components for feedstock. Where laying a pipeline is not possible for geographical reasons, or the distance between source and consumer exceeds about 3,000 km, natural gas is instead transported by ship as LNG (liquefied natural gas) and regasified in the vicinity of the consumer.
Products
The residue gas from the NGL recovery section is the final, purified sales gas which is pipelined to the end-user markets. Rules and agreements are made between buyer and seller regarding the quality of the gas. These usually specify the maximum allowable concentration of CO2, H2S and H2O, as well as requiring the gas to be commercially free from objectionable odours and materials, and from dust or other solid or liquid matter, waxes, gums and gum-forming constituents, which might damage or adversely affect operation of the buyer's equipment. When an upset occurs at the treatment plant, buyers can usually refuse to accept the gas, lower the flow rate or re-negotiate the price.
Helium recovery
If the gas has significant helium content, the helium may be recovered by fractional distillation. Natural gas may contain as much as 7% helium, and is the commercial source of the noble gas. For instance, the Hugoton Gas Field in Kansas and Oklahoma in the United States contains concentrations of helium from 0.3% to 1.9%, which is separated out as a valuable byproduct.
See also
Liquid carryover
Natural gas prices
Petroleum extraction
Oil refinery
List of natural gas and oil production accidents in the United States
References
External links
Simulate natural gas processing using Aspen HYSYS
Natural Gas Processing Principles and Technology (an extensive and detailed course text by Dr. A.H. Younger, University of Calgary, Alberta, Canada).
Processing Natural Gas, Website of the Natural Gas Supply Association (NGSA).
Natural Gas Processing (part of the US EPA's AP-42 publication)
Natural Gas Processing Plants (a US Department of Transportation website)
Gas Processors Association, Website of the Gas Processors Association (GPA) headquartered in Tulsa, Oklahoma, United States.
Gas Processing Journal (Publisher: College of Engineering, University of Isfahan, Iran.)
Increasing Efficiency of Gas Processing Plants
Further reading
Haring, H.W. (2008). Industrial Gases Processing. Weinheim, Germany: WILEY-VCH Verlag Gmbh & CO. KGaA
Kohl, A., & Nielsen, R. (1997). Gas Purification. 5TH Edition. Houston, Texas: Gulf Publishing Company
Chemical processes
Natural gas technology
Gas technologies | Natural-gas processing | [
"Chemistry"
] | 3,406 | [
"Chemical process engineering",
"Chemical processes",
"Natural gas technology",
"nan"
] |
8,862,357 | https://en.wikipedia.org/wiki/SendStation%20Systems | SendStation Systems is a manufacturer of computer and iPod accessories. The company was founded in 1997 in Frankfurt/Main, Germany by current President André Klein.
The name "SendStation" has its roots in a Macintosh-based turn-key video fax system the company created and sold between 1997 and 2000, and is a word play made up from to send, station and sensation. Back then (several years before public broadband internet access became available and affordable) the SendStation was the first system of its kind which allowed filmmakers, advertising agencies and industry clients to easily transfer full-screen-full-motion review copies of TV commercials within minutes around the globe, rather than using overnight couriers. Customers included companies like Audi, Wrigley and BBDO.
PocketDock
In 2003 SendStation entered the hardware market, and became one of the initial five, officially authorized accessory manufacturers for the Apple iPod. Its first hardware product, the PocketDock FireWire adapter, allowed iPod owners to connect the device via the then freshly introduced 30-pin iPod dock connector to a standard 6-pin FireWire cable for sync & charge. It was one of the best-selling add-ons on the young market for iPod accessories.
As USB 2.0 became standard and iPods with video capabilities emerged, additional models followed. In order of release (most recent on top):
PocketDock Line Out Mini USB
PocketDock AV (USB, Audio Line Out, Composite & S-Video)
PocketDock Line Out USB
PocketDock Combo (USB+FW)
PocketDock Line Out FW
PocketDock FW
Other products include iPod car chargers, iPod dock extenders, earbud cases and multiple types of adapters for Mac & PC.
References
External links
Companies established in 1997
Computer hardware companies
Computer peripheral companies
Computer companies of Germany | SendStation Systems | [
"Technology"
] | 366 | [
"Computer hardware companies",
"Computers"
] |
8,862,712 | https://en.wikipedia.org/wiki/Multimachine | The multimachine is an all-purpose open source machine tool that can be built inexpensively by a semi-skilled mechanic with common hand tools, from discarded car and truck parts, using only commonly available hand tools and no electricity. Its size can range from being small enough to fit in a closet to one hundred times that size. The multimachine can accurately perform all the functions of an entire machine shop by itself.
The multimachine was first developed as a personal project by Pat Delaney, then grew into an open source project organized via a Yahoo! group. The 2,600 member support group that has grown up around its creation is made up of engineers, machinists, and experimenters who have proven that the machine works. As an open-source machine tool that can be built cheaply on-site, the Multimachine could have many uses in developing countries. The multimachine group is currently focused on the humanitarian aspects of the multimachine, and on promulgating the concept of the multimachine as a means to create jobs and economic growth in developing countries.
The multimachine first became known to a wider audience as the result of the 2006 Open Source Gift Guide article on the Make magazine website, in which the multimachine was mentioned under the caption "Multimachine - Open Source machine tool".
Uses
As a general-purpose machine tool that includes the functions of a milling machine, drill press, and lathe, the multimachine can be used for many projects important for humanitarian and economic development in developing countries:
Agriculture: Building and repairing irrigation pumps and farm implements
Water supplies: Making and repairing water pumps and water-well drilling rigs.
Food supplies: Building steel-rolling-and-bending machines for making fuel efficient cook stoves and other cooking equipment
Transportation: Anything from making cart axles to rebuilding vehicle clutch, brake, and other parts.
Education: Building simple pipe-and-bar-bending machines to make school furniture, providing "hands on" training on student-built multimachines that they take with them when they leave school.
Job creation: A group of specialized but easily built multimachines can be combined to form a small, very low cost, metal working factory which could also serve as a trade school. Students could be taught a single skill on a specialized machine and be paid as a worker while learning other skills that they could take elsewhere.
Accuracy
The design goals of the multimachine were to create an easily built machine tool, made from "junk," that is nonetheless all-purpose and accurate enough for production work. It has been reported to be able to make cuts within a tenth (one ten-thousandth of an inch), which means that in at least some setups it can equal commercial machine tool accuracy.
In almost every kind of machining operation, either the work piece or the cutting tool turns. If enough flexibility is built into the parts of a machine tool involved in these functions, the resulting machine can do almost every kind of machining operation that will physically fit on it. The multimachine starts with the concept of 3-in-1 machine tools—basically a combination of metal lathe, mill and drill press—but adds many other functions. It can be a 10-in-1 (or even more) machine tool.
Construction
At a high level, the multimachine is built using vehicle engine blocks combined in a LEGO-like fashion. It utilizes the cylinder bores and engine deck (where the cylinder head mates via the head gasket) to provide accurate surfaces. Since cylinder bores are bored exactly parallel to each other and at exact right angles to the cylinder head surface, multimachine accuracy begins at the factory where the engine block was built. In the most common version of the multimachine, one that has a roller bearing spindle, this precision is maintained during construction with simple cylinder re-boring of the #3 cylinder to the size of the roller bearing outside diameter (OD) and re-boring the #1 cylinder to fit the overarm OD. These cylinder-boring operations can be done in almost any engine shop and at low cost. An engine machine shop provides the most inexpensive and accurate machine work commonly done anywhere and guarantees that the spindle and overarm will be perfectly aligned and at an exact right angle to the face (head surface) of the main engine block that serves as the base of the machine. A piece of pipe made to fit the inner diameter of the bearings serves as the spindle. A three-bearing spindle is used because the "main" spindle bearings just "float" in the cylinder bore so that the third bearing is needed to "locate" the spindle, act as a thrust bearing, and support the heavy pulley. The multimachine uses a unique way of clamping the engine blocks together that is easily built, easily adjusted, and very accurate. The multimachine makes use of a concrete and steel construction technique that was heavily used in industry during the First World War and resurrected for this project.
References
External links
Open source machine website
Machine tools
Open-source hardware | Multimachine | [
"Engineering"
] | 1,050 | [
"Machine tools",
"Industrial machinery"
] |
8,863,274 | https://en.wikipedia.org/wiki/Encampment%20%28Chinese%20constellation%29 | The Encampment mansion () is one of the 28 mansions of the Chinese constellations. It is one of the northern mansions of the Black Tortoise.
Asterisms
References
Chinese constellations | Encampment (Chinese constellation) | [
"Astronomy"
] | 42 | [
"Chinese constellations",
"Constellations"
] |
8,863,917 | https://en.wikipedia.org/wiki/Wall%20%28Chinese%20constellation%29 | The Wall mansion () is one of the Twenty-eight mansions of the Chinese constellations. It is one of the northern mansions of the Black Tortoise.
Asterisms
References
Chinese constellations | Wall (Chinese constellation) | [
"Astronomy"
] | 42 | [
"Chinese constellations",
"Astronomy stubs",
"Constellations"
] |
8,864,153 | https://en.wikipedia.org/wiki/Energy%20policy%20of%20the%20European%20Union | The energy policy of the European Union focuses on energy security, sustainability, and integrating the energy markets of member states. An increasingly important part of it is climate policy. A key energy policy adopted in 2009 is the 20/20/20 objectives, binding for all EU Member States. The target involved increasing the share of renewable energy in its final energy use to 20%, reduce greenhouse gases by 20% and increase energy efficiency by 20%. After this target was met, new targets for 2030 were set at a 55% reduction of greenhouse gas emissions by 2030 as part of the European Green Deal. After the Russian invasion of Ukraine, the EU's energy policy turned more towards energy security in their REPowerEU policy package, which boosts both renewable deployment and fossil fuel infrastructure for alternative suppliers.
The EU Treaty of Lisbon of 2007 legally includes solidarity in matters of energy supply and changes to the energy policy within the EU. Prior to the Treaty of Lisbon, EU energy legislation has been based on the EU authority in the area of the common market and environment. However, in practice many policy competencies in relation to energy remain at national member state level, and progress in policy at European level requires voluntary cooperation by members states.
In 2007, the EU was importing 82% of its oil and 57% of its gas, which then made it the world's leading importer of these fuels. Only 3% of the uranium used in European nuclear reactors was mined in Europe. Russia, Canada, Australia, Niger and Kazakhstan were the five largest suppliers of nuclear materials to the EU, supplying more than 75% of the total needs in 2009. In 2015, the EU imported 53% of the energy it consumed.
The European Investment Bank took part in energy financing in Europe in 2022: a part of their REPowerEU package was to assist up to €115 billion in energy investment through 2027, in addition to regular lending operations in the sector. In 2022, the EIB sponsored €17 billion in energy investments throughout the European Union.
The history of energy markets in Europe started with the European Coal and Steel Community, which was created in 1951 to lessen hostilities between France and Germany by making them economically intertwined. The 1957 Treaty of Rome established the free movement of goods, but three decades later, integration of energy markets had yet to take place. The start of an internal market for gas and electricity took place in the 1990s.
History
Early days
The history of energy markets in Europe started with the European Coal and Steel Community, which was created in 1951 in the aftermath of World War II to lessen hostilities between France and Germany by making them economically intertwined. A second key moment was the formation in 1957 of Euratom, to collaborate on nuclear energy. A year later, the Treaty of Rome established the free movement of goods, which was intended to create a single market also for energy. However, three decades later, integration of energy markets had yet to take place.
In the late 1980s, the European Commission proposed a set of policies (called directives in the EU context) on integrating the European market. One of the key ideas was that consumers would be able to buy electricity from outside of their own country. This plan encountered opposition from the Council of Ministers, as the policy sought to liberalise what was regarded as a natural monopoly. The less controversial parts of the directives—those on price transparency and transit right for grid operators—were adopted in 1990.
Start of an internal market
The 1992 Treaty of Maastricht, which founded the European Union, included no chapter specific to energy. Such a chapter had been rejected by member states who wanted to retain autonomy on energy, specifically those with larger energy reserves. A directive for an internal electricity market was passed in 1996 by the European Parliament, and another on the internal gas market two years later. The directive for the electricity market contained the requirement that network operation and energy generation should not be done by a single (monopolistic) company. Having energy generation separate would allow for competition in that sector, whereas network operation would remain regulated.
Renewable energy and the 20/20/20 target
In 2001, the first Renewable Energy Directive was passed, in the context of the 1997 Kyoto Protocol against climate change. It included a target of doubling the share of renewable energy in the EU's energy mix from 6% to 12% by 2010. The increase for the electricity sector was even higher, with a goal of 22%. Two years later a directive was passed which increased the share of biofuels in transport.
These directives were replaced in 2009 with the 20-20-20 targets, which sought to increase the share of renewables to 20% by 2020. Additionally, greenhouse gas emissions needed to drop by 20% compared to 1990, and energy efficiency needed to improve by 20%. The package included mandatory targets, which differed by member state. While not all national governments reached their targets, overall, the EU surpassed the three targets. Greenhouse gas emissions were 34% lower in 2020 than in 1990, for instance.
Energy Union
The Energy Union Strategy is a project of the European Commission to coordinate the transformation of European energy supply. It was launched in February 2015, with the aim of providing secure, sustainable, competitive, affordable energy.
Donald Tusk, President of the European Council, introduced the idea of an energy union when he was Prime Minister of Poland. Eurocommissioner Vice President Maroš Šefčovič called the Energy Union the biggest energy project since the European Coal and Steel Community. The EU's reliance on Russia for its energy, and the annexation of Crimea by Russia have been cited as strong reasons for the importance of this policy.
The European Council concluded on 19 March 2015 that the EU is committed to building an Energy Union with a forward-looking climate policy on the basis of the commission's framework strategy, with five priority dimensions:
Energy security, solidarity and trust
A fully integrated European energy market
Energy efficiency contributing to moderation of demand
Decarbonising the economy
Research, innovation and competitiveness.
The strategy includes a minimum 10% electricity interconnection target for all member states by 2020, which the Commission hopes will put downward pressure onto energy prices, reduce the need to build new power plants, reduce the risk of black-outs or other forms of electrical grid instability, improve the reliability of renewable energy supply, and encourage market integration.
EU Member States agreed 25 January 2018 on the commission's proposal to invest €873 million in clean energy infrastructure. The projects are financed by CEF (Connecting Europe Facility).
€578 million for the construction of the Biscay Gulf France-Spain interconnection, a 280 km long off-shore section and a French underground land section. This new link will increase the interconnection capacity between both countries from 2.8 GW to 5 GW.
€70 million to construct the SüdOstLink, 580 km of high-voltage cables laid fully underground. The power line will create an urgently needed link between the wind power generated in the north and the consumption centres in the south of Germany.
€101 million for the CyprusGas2EU project to provide natural gas to Cyprus
European Green Deal
The European Green Deal, approved in 2020, is a set of policy initiatives by the European Commission with the overarching aim of making the European Union (EU) climate neutral in 2050. The plan is to review each existing law on its climate merits, and also introduce new legislation on the circular economy, building renovation, farming and innovation.
The president of the European Commission, Ursula von der Leyen, stated that the European Green Deal would be Europe's "man on the moon moment". Von der Leyen appointed Frans Timmermans as Executive Vice President of the European Commission for the European Green Deal. On 13 December 2019, the European Council decided to press ahead with the plan, with an opt-out for Poland. On 15 January 2020, the European Parliament voted to support the deal as well, with requests for higher ambition. A year later, the European Climate Law was passed, which legislated that greenhouse gas emissions should be 55% lower in 2030 compared to 1990. The Fit for 55 package is a large set of proposed legislation detailing how the European Union plans to reach this target, including major proposal for energy sectors such as renewable energy and transport.
After the Russian invasion of Ukraine, the EU launched REPowerEU to quickly reduce import dependency on Russia for oil and gas. While the policy proposal includes a substantial acceleration for renewable energy deployment, it also contains expansion of fossil fuel infrastructure from alternative suppliers.
The impact of inflation, particularly driven by surging energy prices, prompted around 35% of firms to spend between 25% and 50% more on energy in 2024, further encouraging investments aimed at reducing energy consumption. This aligns with the goals of the Green Deal, where energy efficiency improvements are seen as key to reducing both emissions and energy costs.
Earlier proposals
The possible principles of Energy Policy for Europe were elaborated in the commission's green paper A European Strategy for Sustainable, Competitive and Secure Energy on 8 March 2006. As a result of the decision to develop a common energy policy, the first proposals, Energy for a Changing World, were published by the European Commission, following a consultation process, on 10 January 2007.
It is claimed that they will lead to a 'post-industrial revolution', or a low-carbon economy, in the European Union, as well as increased competition in the energy markets, improved security of supply, and improved employment prospects. The commission's proposals have been approved at a meeting of the European Council on 8 and 9 March 2007.
Key proposals include:
A cut of at least 20% in greenhouse gas emissions from all primary energy sources by 2020 (compared to 1990 levels), while pushing for an international agreement to succeed the Kyoto Protocol aimed at achieving a 30% cut by all developed nations by 2020.
A cut of up to 95% in carbon emissions from primary energy sources by 2050, compared to 1990 levels.
A minimum target of 10% for the use of biofuels by 2020.
That the energy supply and generation activities of energy companies should be 'unbundled' from their distribution networks to further increase market competition.
Improving energy relations with the EU's neighbours, including Russia.
The development of a European Strategic Energy Technology Plan to develop technologies in areas including renewable energy, energy conservation, low-energy buildings, fourth generation nuclear reactor, coal pollution mitigation, and carbon capture and sequestration (CCS).
Developing an Africa-Europe Energy partnership, to help Africa 'leap-frog' to low-carbon technologies and to help develop the continent as a sustainable energy supplier.
Many underlying proposals are designed to limit global temperature changes to no more than 2 °C above pre-industrial levels, of which 0.8 °C has already taken place and another 0.5–0.7 °C is already committed. 2 °C is usually seen as the upper temperature limit to avoid 'dangerous global warming'. Due to only minor efforts in global Climate change mitigation it is highly likely that the world will not be able to reach this particular target. The EU might then not only be forced to accept a less ambitious global target. Because the planned emissions reductions in the European energy sector (95% by 2050) are derived directly from the 2 °C target since 2007, the EU will have to revise its energy policy paradigm.
In 2014, negotiations about binding EU energy and climate targets for 2030 were set to start.
European Parliament voted in February 2014 in favour of binding 2030 targets on renewables, emissions and energy efficiency: a 40% cut in greenhouse gases, compared with 1990 levels; at least 30% of energy to come from renewable sources; and a 40% improvement in energy efficiency.
Current policies
Energy sources
Under the requirements of the Directive on Electricity Production from Renewable Energy Sources, which entered into force in October 2001, the member states are expected to meet "indicative" targets for renewable energy production. Although there is significant variation in national targets, the average is that 22% of electricity should be generated by renewables by 2010 (compared to 13.9% in 1997). The European Commission has proposed in its Renewable Energy Roadmap a binding target of increasing the level of renewable energy in the EU's overall mix from less than 7% today to 20% by 2020.
Europe spent €406 billion in 2011 and €545 billion in 2012 on importing fossil fuels. This is around three times more than the cost of the Greek bailout up to 2013. In 2012, wind energy avoided €9.6 billion of fossil fuel costs. EWEA recommends a binding renewable energy target to support the replacement of fossil fuels with wind energy in Europe by providing a stable regulatory framework. In addition, it recommends setting a minimum emission performance standard for all new-build power installations.
For over a decade, the European Investment Bank has managed the European Local Energy Assistance (ELENA) facility on behalf of the European Commission, which provides technical assistance to any private or public entity in order to help prepare energy-efficient and renewable energy investments in buildings or innovative urban transportation projects. The EU Modernisation Fund, formed in 2018 as part of the new EU Emissions Trading System (ETS) Directive and with direct engagement from the EIB, targets such investments as well as energy efficiency and a fair transition across 10 Member States.
The European Investment Bank took part in energy financing in Europe in 2022: a part of their REPowerEU package was to assist up to €115 billion in energy investment through 2027, in addition to regular lending operations in the sector. The European Investment Bank Group has invested about €134 billion in the energy sector of the European Union during the last ten years (2010-2020), in addition to extra funding for renewable energy projects in various countries. These initiatives are currently assisting Europe in surviving the crisis brought on by the sudden interruption of Russian gas supply.
As part of the REPowerEU Plan, the European Union has significantly decreased its reliance on Russian gas by reducing imports from 45 percent in 2021 to 15 percent in 2023, while also achieving a near 20 percent reduction in overall gas usage. By March 31, EU natural gas storage levels reached over 58 percent, the highest for this period, supported by regulatory measures that mandate storage facilities to maintain at least 90 percent capacity by November each year. These strategies are part of the EU's broader efforts to diversify energy sources and increase sustainability, aligning with investments in renewable energy and efficiency enhancements across member states.
Energy markets
The EU promotes electricity market liberalisation and security of supply through Directive 2019/944.
The 2004 Gas Security Directive has been intended to improve security of supply in the natural gas sector.
Energy efficiency
Rising energy costs led to a 5.6 percentage point increase in planned investments in energy efficiency, largely driven by SMEs, increasing from 52.3% to 57.9% in 2022.
Energy taxation
IPEEC
At the Heiligendamm Summit in June 2007, the G8 acknowledged an EU proposal for an international initiative on energy efficiency tabled in March 2007, and agreed to explore, together with the International Energy Agency, the most effective means to promote energy efficiency internationally. A year later, on 8 June 2008, the G8 countries, China, India, South Korea and the European Community decided to establish the International Partnership for Energy Efficiency Cooperation, at the Energy Ministerial meeting hosted by Japan in the frame of the 2008 G8 Presidency, in Aomori.
Buildings
Buildings account for around 40% of EU energy requirements and have been the focus of several initiatives. From 4 January 2006, the 2002 Directive on the energy performance of buildings requires member states to ensure that new buildings, as well as large existing buildings undergoing refurbishment, meet certain minimum energy requirements. It also requires that all buildings should undergo 'energy certification' prior to sale, and that boilers and air conditioning equipment should be regularly inspected.
As part of the EU's SAVE Programme, aimed at promoting energy efficiency and encouraging energy-saving behaviour, the Boiler Efficiency Directive specifies minimum levels of efficiency for boilers fired with liquid or gaseous fuels. Originally, from June 2007, all homes (and other buildings) in the UK would have to undergo Energy Performance Certification before they are sold or let, to meet the requirements of the European Energy Performance of Buildings Directive (Directive 2002/91/EC).
Transport
EU policies include the voluntary ACEA agreement, signed in 1998, to cut carbon dioxide emissions for new cars sold in Europe to an average of 140 grams of CO2/km by 2008, a 25% cut from the 1995 level. Because the target was unlikely to be met, the European Commission published new proposals in February 2007, requiring a mandatory limit of 130 grams of CO2/km for new cars by 2012, with 'complementary measures' being proposed to achieve the target of 120 grams of CO2/km that had originally been expected.
In the area of fuels, the 2001 Biofuels Directive requires that 5.75% of all transport fossil fuels (petrol and diesel) should be replaced by biofuels by 31 December 2010, with an intermediate target of 2% by the end of 2005. In February 2007 the European Commission proposed stricter fuel standards to combat climate change and reduce air pollution: from 2011, suppliers would have to reduce carbon emissions per unit of energy by 1% a year from 2010 levels, to result in a cut of 10% by 2020.
Flights
Airlines can be charged for their greenhouse gas emissions on flights to and from Europe according to a court ruling in October 2011.
Historically, EU aviation fuel was tax-free and no VAT was applied. Taxation of aviation fuel in the EU was prohibited in 2003 under the Energy Taxation Directive, except for domestic flights and, on the basis of bilateral agreements, intra-EU flights. No such agreements exist.
In 2018 Germany applied 19% VAT on domestic airline tickets, while many other member states had 0% VAT. Unlike air travel, VAT is applied to bus and rail travel, which creates economic distortions, increasing demand for air travel relative to other forms of transport. This increases aviation emissions and constitutes a state aid subsidy. An aviation fuel tax of 33 cents per litre, equal to that on road traffic, would raise about €9.5 billion, and applying 15% VAT to all air traffic within and from Europe would raise about €15 billion.
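The quoted revenue figures imply particular tax bases. The sketch below simply inverts them as a back-of-the-envelope consistency check: the rates and totals come from the text, while the derived fuel volume and ticket revenue base are illustrative.

```python
# Back-of-the-envelope check on the quoted aviation tax figures.
fuel_tax_rate = 0.33          # EUR per litre (quoted road-equivalent rate)
fuel_tax_revenue = 9.5e9      # EUR (quoted)
vat_rate = 0.15               # 15% VAT (quoted)
vat_revenue = 15.0e9          # EUR (quoted)

implied_fuel_litres = fuel_tax_revenue / fuel_tax_rate   # ~2.9e10 litres/year
implied_ticket_base = vat_revenue / vat_rate             # ~1.0e11 EUR/year

print(f"implied fuel volume: {implied_fuel_litres:.2e} litres per year")
print(f"implied ticket revenue base: {implied_ticket_base:.2e} EUR per year")
```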
Industry
The European Union Emissions Trading Scheme, introduced in 2005 under the 2003 Emission Trading Directive, sets member state-level caps on greenhouse gas emissions for power plants and other large point sources.
Consumer goods
A further area of energy policy has been in the area of consumer goods, where energy labels were introduced to encourage consumers to purchase more energy-efficient appliances.
External energy relations
Beyond the bounds of the European Union, EU energy policy has included negotiating and developing wider international agreements, such as the Energy Charter Treaty, the Kyoto Protocol, the post-Kyoto regime and a framework agreement on energy efficiency; extension of the EC energy regulatory framework or principles to neighbours (Energy Community, Baku Initiative, Euro-Med energy cooperation) and the emission trading scheme to global partners; the promotion of research and the use of renewable energy.
The EU-Russia energy cooperation will be based on a new comprehensive framework agreement within the post-Partnership and Cooperation Agreement (PCA), which will be negotiated in 2007. The energy cooperation with other third energy producer and transit countries is facilitated with different tools, such as the PCAs, the existing and foreseen Memorandums of Understanding on Energy Cooperation (with Ukraine, Azerbaijan, Kazakhstan and Algeria), the Association Agreements with Mediterranean countries, the European Neighbourhood Policy Action Plans; Euromed energy cooperation; the Baku initiative; and the EU-Norway energy dialogue. For the cooperation with African countries, a comprehensive Africa-Europe Energy partnership would be launched at the highest level, with the integration of Europe's Energy and Development Policies.
For ensuring efficient follow-up and coherence in pursuing the initiatives and processes, for sharing information in case of an external energy crisis, and for assisting the EU's early response and reactions in case of energy security threats, the network of energy correspondents in the Member States was established in early 2007. After the Russian-Ukrainian Gas Crisis of 2009 the EU decided that the existing external measures regarding gas supply security should be supplemented by internal provisions for emergency prevention and response, such as enhancing gas storage and network capacity or the development of the technical prerequisites for reverse flow in transit pipelines.
Just Transition Fund
The Just Transition Fund (JTF) was created in 2020 to boost investments in low-carbon energy. The fund was criticized for a blanket ban on low-carbon nuclear power but also for the introduction of a loophole for fossil gas. Having the largest workforce dedicated to the coal industry, Poland, followed by Germany and Romania, is the fund's largest recipient. Amounting to €17.5 billion, the fund was approved by the European Parliament in May 2021.
Hydrogen economy
Green hydrogen is a significant component of the European Union's strategy to transition to sustainable energy and reduce carbon emissions. The European Commission has set a goal to produce 10 million tons of clean hydrogen annually within the EU by 2030. Additionally, the EU plans to import another 10 million tons per year from countries with cost-effective renewable electricity. However, some experts estimate that actual production in the EU might reach around 1 million tons per year by 2030.
The EU mandates the use of 3 million tons of hydrogen per year for the transport and maritime sectors. Challenges to achieving higher production levels include high costs and limited subsidies. In 2024, the European Hydrogen Bank announced a second auction with a budget of €1.2 billion, which is less than the initially proposed €3 billion. While there has been early interest, there is some hesitation regarding the signing of offtake contracts. Priority sectors for the use of green hydrogen include green steel production and ammonia. In contrast, sectors such as passenger road transport and home heating are given lower priority. The cost of producing green hydrogen ranges from $6 to $15 per kilogram. A subsidy model similar to that of the United States, which offers $3 per kilogram, would require €3 billion annually to support the production of 1 million tons of green hydrogen. Strategic recommendations suggest prioritizing the use of renewable electricity for displacing coal, powering electric vehicles, and operating heat pumps and green steel production before using it for green hydrogen production.
Solar anti-dumping levies
In 2013, a two-year investigation by the European Commission concluded that Chinese solar panel exporters were selling their products in the EU up to 88% below market prices, backed by state subsidies. In response, the European Council imposed tariffs on solar imported from China at an average rate of 47.6% beginning 6 June that year.
The Commission reviewed these measures in December 2016 and proposed to extend them for two years until March 2019. However, in January 2017, 18 out of 28 EU member states voted in favour of shortening the extension period. In February 2017, the commission announced its intention to extend its anti-dumping measures for a reduced period of 18 months.
Research and development
The European Union is active in the areas of energy research, development and promotion, via initiatives such as CEPHEUS (ultra-low energy housing), and programs under the umbrella titles of SAVE (energy saving) ALTENER (new and renewable energy sources), STEER (transport) and COOPENER (developing countries). Through Fusion for Energy, the EU is participating in the ITER project.
SET Plan
The Seventh Framework Programme research program that ran until 2013 reserved only a moderate amount of funding for energy research, although energy did emerge as one of the key issues of the European Union. A large part of FP7 energy funding was devoted to fusion research, a technology that will not be able to help meet European climate and energy objectives until beyond 2050. The European Commission tried to redress this shortfall with the SET plan.
The SET plan initiatives included a European Wind Initiative, the Solar Europe Initiative, the Bioenergy Europe Initiative, the European electricity grid initiative and an initiative for sustainable nuclear fission. The budget for the SET plan is estimated at €71.5 billion. The IEA raised its concern that demand-side technologies do not feature at all in the six priority areas of the SET Plan.
Public opinion
In a poll carried out for the European Commission in October and November 2005, 47% of the citizens questioned in the 27 countries of the EU (including the 2 states that joined in 2007) were in favour of taking decisions on key energy policy issues at a European level. 37% favoured national decisions and 8% that they be tackled locally.
A similar survey of 29,220 people in March and May 2006 indicated that the balance had changed in favour of national decisions in these areas (42% in favour), with 39% backing EU policy making and 12% preferring local decisions. There was significant national variation with this, with 55% in favour of European level decision making in the Netherlands, but only 15% in Finland.
A comprehensive public opinion survey was performed in May and June 2006. The authors propose following conclusions:
Energy issues are considered to be important but not at first glance.
EU citizens perceive great future promise in the use of renewable energies. Despite majority opposition, nuclear energy also has its place in the future energy mix.
Citizens appear to opt for changing the energy structure, enhancing research and development and guaranteeing the stability of the energy field rather than saving energy as the way to meet energy challenges.
The possible future consequences of energy issues do not generate deep fears in Europeans' minds.
Europeans appear to be fairly familiar with energy issues, although their knowledge seems somewhat vague.
Energy issues touch everybody and it is therefore hard to distinguish clear groups with differing perceptions. Nevertheless, rough distinction between groups of citizens is sketched.
Example European countries
Germany
In September 2010, the German government adopted a set of ambitious goals to transform their national energy system and to reduce national greenhouse gas emissions by 80 to 95% by 2050 (relative to 1990). This transformation became known as the Energiewende. Subsequently, the government decided to phase out the nation's fleet of nuclear reactors, to be completed by 2022. As of 2014, the country is making steady progress on this transition.
See also
CHP Directive
Directorate-General for Energy
Energy Charter Treaty
Energy Community
Energy diplomacy
Energy in Europe
EU Energy Efficiency Directive 2012
European Climate Change Programme
European Commissioner for Energy
European countries by electricity consumption per person
European countries by fossil fuel use (% of total energy)
European Ecodesign Directive
European Pollutant Emission Register (EPER)
European Union energy label
Global strategic petroleum reserves
Internal Market in Electricity Directive
INOGATE
List of electricity interconnection level
Renewable energy in the European Union
Special economic zone
Transport in Europe
References
External links
European information campaign on the opening of the energy markets and on energy consumers' right.
European Strategic Energy Technology Plan, Towards A Low Carbon Future.
Eurostat – Statistics Explained – all articles on energy
ManagEnergy, for energy efficiency and renewable energies at the local and regional level.
BBC Q&A: EU energy proposals
2006 Energy Green Paper
Collective Energy Security: A New Approach for Europe
Berlin Forum on Fossil Fuels.
Netherlands Environmental Assessment Agency – Meeting the European Union 2 °C climate target: global and regional emission implications
German Institute for International and Security Affairs – Perspectives for the European Union's External Energy Policy
Wiley Interdisciplinary Reviews -- (open access)Dupont, C., et al. (2024). Three decades of EU climate policy: Racing toward climate neutrality? WIREs Climate Change, 15(1), e863.
The Liberalisation of the Power Industry in the European Union and its Impact on Climate Change – A Legal Analysis of the Internal Market in Electricity.
In the media
8 Sep 2008 New Europe (neurope.eu) : Energy security and Europe.
10 Jan 2007, Reuters: EU puts climate change at heart of energy policy
14 Dec 2006, opendemocracy.net: Russia, Germany and European energy policy
20 Nov 2006, eupolitix.com: Barroso calls for strong EU energy policy
Energy economics
European Green Deal | Energy policy of the European Union | [
"Environmental_science"
] | 5,731 | [
"Energy economics",
"Environmental social science"
] |
8,864,159 | https://en.wikipedia.org/wiki/Molecular%20vibration | A molecular vibration is a periodic motion of the atoms of a molecule relative to each other, such that the center of mass of the molecule remains unchanged. The typical vibrational frequencies range from less than 1013 Hz to approximately 1014 Hz, corresponding to wavenumbers of approximately 300 to 3000 cm−1 and wavelengths of approximately 30 to 3 μm.
For a diatomic molecule A−B, the vibrational frequency in s−1 is given by ν = (1/2π)√(k/μ), where k is the force constant in dyne/cm or erg/cm² and μ is the reduced mass given by 1/μ = 1/mA + 1/mB. The vibrational wavenumber in cm−1 is ν̃ = ν/c, where c is the speed of light in cm/s.
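As a numerical illustration of these formulas, the sketch below evaluates the frequency and wavenumber for carbon monoxide in SI units; the force constant of about 1857 N/m is an assumed, literature-style value, so the output should be read as approximate.

```python
import math

# Sketch: nu = (1/(2*pi)) * sqrt(k/mu) for a diatomic molecule, here CO.
# k ~ 1857 N/m is an assumed force constant; atomic masses are in amu.
AMU = 1.66054e-27            # kg per atomic mass unit
C_CM_PER_S = 2.99792458e10   # speed of light in cm/s

m_C, m_O = 12.000, 15.995
mu = (m_C * m_O) / (m_C + m_O) * AMU      # reduced mass in kg
k = 1857.0                                # force constant in N/m (assumed)

nu = math.sqrt(k / mu) / (2.0 * math.pi)  # frequency in s^-1
wavenumber = nu / C_CM_PER_S              # wavenumber in cm^-1
print(f"nu ~ {nu:.2e} s^-1, wavenumber ~ {wavenumber:.0f} cm^-1")  # ~2140 cm^-1
```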
Vibrations of polyatomic molecules are described in terms of normal modes, which are independent of each other, but each normal mode involves simultaneous vibrations of different parts of the molecule. In general, a non-linear molecule with N atoms has 3N − 6 normal modes of vibration, but a linear molecule has 3N − 5 modes, because rotation about the molecular axis cannot be observed. A diatomic molecule has one normal mode of vibration, since it can only stretch or compress the single bond.
A molecular vibration is excited when the molecule absorbs energy, ΔE, corresponding to the vibration's frequency, ν, according to the relation ΔE = hν, where h is the Planck constant. A fundamental vibration is evoked when one such quantum of energy is absorbed by the molecule in its ground state. When multiple quanta are absorbed, the first and possibly higher overtones are excited.
To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, because the potential energy of the molecule is more like a Morse potential or more accurately, a Morse/Long-range potential.
The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules.
Vibrational excitation can occur in conjunction with electronic excitation in the ultraviolet-visible region. The combined excitation is known as a vibronic transition, giving vibrational fine structure to electronic transitions, particularly for molecules in the gas state.
Simultaneous excitation of a vibration and rotations gives rise to vibration–rotation spectra.
Number of vibrational modes
For a molecule with N atoms, the positions of all nuclei depend on a total of 3N coordinates, so that the molecule has 3N degrees of freedom including translation, rotation and vibration. Translation corresponds to movement of the center of mass whose position can be described by 3 cartesian coordinates.
A nonlinear molecule can rotate about any of three mutually perpendicular axes and therefore has 3 rotational degrees of freedom. For a linear molecule, rotation about the molecular axis does not involve movement of any atomic nucleus, so there are only 2 rotational degrees of freedom which can vary the atomic coordinates.
An equivalent argument is that the rotation of a linear molecule changes the direction of the molecular axis in space, which can be described by 2 coordinates corresponding to latitude and longitude. For a nonlinear molecule, the direction of one axis is described by these two coordinates, and the orientation of the molecule about this axis provides a third rotational coordinate.
The number of vibrational modes is therefore 3N minus the number of translational and rotational degrees of freedom, that is, 3N − 5 for linear and 3N − 6 for nonlinear molecules.
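A minimal helper that encodes this counting rule, with a few familiar molecules as examples:

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Number of normal modes: 3N - 5 for a linear molecule, 3N - 6 otherwise."""
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(2, linear=True))    # diatomic (e.g. CO): 1
print(vibrational_modes(3, linear=True))    # CO2 (linear): 4
print(vibrational_modes(3, linear=False))   # H2O (bent): 3
print(vibrational_modes(6, linear=False))   # ethylene (6 atoms): 12
```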
Vibrational coordinates
The coordinate of a normal vibration is a combination of changes in the positions of atoms in the molecule. When the vibration is excited the coordinate changes sinusoidally with a frequency ν, the frequency of the vibration.
Internal coordinates
Internal coordinates are of the following types, illustrated with reference to the planar molecule ethylene:
Stretching: a change in the length of a bond, such as C–H or C–C
Bending: a change in the angle between two bonds, such as the HCH angle in a methylene group
Rocking: a change in angle between a group of atoms, such as a methylene group and the rest of the molecule
Wagging: a change in angle between the plane of a group of atoms, such as a methylene group and a plane through the rest of the molecule
Twisting: a change in the angle between the planes of two groups of atoms, such as a change in the angle between the two methylene groups
Out-of-plane: a change in the angle between any one of the C–H bonds and the plane defined by the remaining atoms of the ethylene molecule. Another example is in BF3 when the boron atom moves in and out of the plane of the three fluorine atoms.
In a rocking, wagging or twisting coordinate the bond lengths within the groups involved do not change. The angles do. Rocking is distinguished from wagging by the fact that the atoms in the group stay in the same plane.
In ethylene there are 12 internal coordinates: 4 C–H stretching, 1 C–C stretching, 2 H–C–H bending, 2 CH2 rocking, 2 CH2 wagging, 1 twisting. Note that the H–C–C angles cannot be used as internal coordinates as well as the H–C–H angle because the angles at each carbon atom cannot all increase at the same time.
Note that these coordinates do not correspond to normal modes (see Normal coordinates below). In other words, they do not correspond to particular frequencies or vibrational transitions.
Vibrations of a methylene group (−CH2−) in a molecule for illustration
Within the CH2 group, commonly found in organic compounds, the two low-mass hydrogens can vibrate in six different ways which can be grouped as 3 pairs of modes: 1. symmetric and asymmetric stretching, 2. scissoring and rocking, 3. wagging and twisting.
In depictions of these modes, the accompanying "recoil" of the C atoms, though necessarily present to balance the overall movements of the molecule, is much smaller than the movements of the lighter H atoms.
Symmetry-adapted coordinates
Symmetry–adapted coordinates may be created by applying a projection operator to a set of internal coordinates. The projection operator is constructed with the aid of the character table of the molecular point group. For example, the four (un-normalized) C–H stretching combinations of the molecule ethene are q1 + q2 + q3 + q4, q1 + q2 − q3 − q4, q1 − q2 + q3 − q4 and q1 − q2 − q3 + q4,
where q1, q2, q3 and q4 are the internal coordinates for stretching of each of the four C–H bonds.
Illustrations of symmetry–adapted coordinates for most small molecules can be found in Nakamoto.
Normal coordinates
The normal coordinates, denoted as Q, refer to the positions of atoms away from their equilibrium positions, with respect to a normal mode of vibration. Each normal mode is assigned a single normal coordinate, and so the normal coordinate refers to the "progress" along that normal mode at any given time. Formally, normal modes are determined by solving a secular determinant, and then the normal coordinates (over the normal modes) can be expressed as a summation over the cartesian coordinates (over the atom positions). The normal modes diagonalize the matrix governing the molecular vibrations, so that each normal mode is an independent molecular vibration. If the molecule possesses symmetries, the normal modes "transform as" an irreducible representation under its point group. The normal modes are determined by applying group theory, and projecting the irreducible representation onto the cartesian coordinates. For example, when this treatment is applied to CO2, it is found that the C=O stretches are not independent, but rather there is an O=C=O symmetric stretch and an O=C=O asymmetric stretch:
symmetric stretching: the sum of the two C–O stretching coordinates; the two C–O bond lengths change by the same amount and the carbon atom is stationary.
asymmetric stretching: the difference of the two C–O stretching coordinates; one C–O bond length increases while the other decreases.
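The sum/difference construction can be shown numerically. The sketch below forms the two combinations from example displacements of the two C–O bonds; the displacement values and the 1/√2 normalization are illustrative.

```python
import math

# Sketch: symmetric and asymmetric stretch coordinates of CO2 built from the
# two C-O bond-length changes r1 and r2; values are illustrative only.
r1, r2 = 0.010, -0.010                     # example bond-length changes (angstrom)
q_sym = (r1 + r2) / math.sqrt(2)           # symmetric stretch coordinate
q_asym = (r1 - r2) / math.sqrt(2)          # asymmetric stretch coordinate
print(q_sym, q_asym)                       # 0.0 and ~0.014: purely asymmetric here
```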
When two or more normal coordinates belong to the same irreducible representation of the molecular point group (colloquially, have the same symmetry) there is "mixing" and the coefficients of the combination cannot be determined a priori. For example, in the linear molecule hydrogen cyanide, HCN, the two stretching vibrations are
Q1 = q1 + a q2: principally C–H stretching with a little C–N stretching (a ≪ 1);
Q2 = b q1 + q2: principally C–N stretching with a little C–H stretching (b ≪ 1).
The coefficients a and b are found by performing a full normal coordinate analysis by means of the Wilson GF method.
Newtonian mechanics
Perhaps surprisingly, molecular vibrations can be treated using Newtonian mechanics to calculate the correct vibration frequencies. The basic assumption is that each vibration can be treated as though it corresponds to a spring. In the harmonic approximation the spring obeys Hooke's law: the force required to extend the spring is proportional to the extension. The proportionality constant is known as a force constant, k. The anharmonic oscillator is considered elsewhere.
By Newton's second law of motion this force is also equal to a reduced mass, μ, times acceleration.
Since this is one and the same force, the ordinary differential equation μ d²Q/dt² = −kQ follows.
The solution to this equation of simple harmonic motion is Q(t) = A cos(2πνt), with ν = (1/2π)√(k/μ).
A is the maximum amplitude of the vibration coordinate Q. It remains to define the reduced mass, μ. In general, the reduced mass of a diatomic molecule, AB, is expressed in terms of the atomic masses, mA and mB, as 1/μ = 1/mA + 1/mB, that is, μ = mAmB/(mA + mB).
The use of the reduced mass ensures that the centre of mass of the molecule is not affected by the vibration. In the harmonic approximation the potential energy of the molecule is a quadratic function of the normal coordinate. It follows that the force-constant is equal to the second derivative of the potential energy.
When two or more normal vibrations have the same symmetry a full normal coordinate analysis must be performed (see GF method). The vibration frequencies, νi, are obtained from the eigenvalues, λi, of the matrix product GF. G is a matrix of numbers derived from the masses of the atoms and the geometry of the molecule. F is a matrix derived from force-constant values. Details concerning the determination of the eigenvalues can be found in the references.
Quantum mechanics
In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by En = (n + 1/2)hν, with ν = (1/2π)√(k/μ) as above,
where n is a quantum number that can take values of 0, 1, 2, ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this vibrational quantum number is often designated as v.
The difference in energy when n (or v) changes by 1 is therefore equal to , the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level n to level n+1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency (in the harmonic oscillator approximation).
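A short numerical illustration of the level spacing: using the CO-like frequency from the earlier sketch (an assumed value), the fundamental transition energy hν comes out in the infrared range.

```python
# Sketch: harmonic-oscillator energies E_n = (n + 1/2) * h * nu and the
# n = 0 -> n = 1 transition energy, using an assumed CO-like frequency.
H_PLANCK = 6.62607015e-34   # Planck constant, J s
EV = 1.602176634e-19        # J per electronvolt
nu = 6.4e13                 # s^-1 (assumed, from the earlier diatomic sketch)

def level_energy(n: int) -> float:
    return (n + 0.5) * H_PLANCK * nu

delta_e = level_energy(1) - level_energy(0)   # equals h*nu in this approximation
print(f"Delta E ~ {delta_e:.2e} J ~ {delta_e / EV:.2f} eV")   # ~0.26 eV (infrared)
```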
See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one, Δn = ±1,
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states and have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band. To describe vibrational levels of an anharmonic oscillator, Dunham expansion is used.
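For orientation, the harmonic levels and a common two-term anharmonic (Dunham-type) expansion of the term values are sketched below; ωe and ωexe are empirical constants, and the shrinking spacing between successive levels is what places a hot band such as v = 1 → 2 at slightly lower energy than the fundamental.

```latex
% Harmonic-oscillator levels and a two-term anharmonic (Dunham-type) expansion;
% \omega_e and \omega_e x_e are empirical constants for a given molecule.
E_n = h\nu\left(n + \tfrac{1}{2}\right) \quad\text{(harmonic)}, \qquad
G(v) = \omega_e\left(v + \tfrac{1}{2}\right) - \omega_e x_e\left(v + \tfrac{1}{2}\right)^2 + \cdots \quad\text{(anharmonic)}.
```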
Intensities
In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate. Likewise, the intensity of Raman bands depends on the derivative of polarizability with respect to the normal coordinate. There is also a dependence on the fourth-power of the wavelength of the laser used.
See also
Coherent anti-Stokes Raman spectroscopy (CARS)
Eckart conditions
Fermi resonance
GF method
Infrared spectroscopy of metal carbonyls
Lennard-Jones potential
Near-infrared spectroscopy
Nuclear resonance vibrational spectroscopy
Resonance Raman spectroscopy
Transition dipole moment
References
Further reading
External links
Free Molecular Vibration code developed by Zs. Szabó and R. Scipioni
Molecular vibration and absorption
small explanation of vibrational spectra and a table including force constants.
Character tables for chemically important point groups
Chemical physics
Spectroscopy | Molecular vibration | [
"Physics",
"Chemistry"
] | 2,691 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Molecular physics",
"Instrumental analysis",
"Molecular vibration",
"nan",
"Spectroscopy",
"Chemical physics"
] |
1,595,585 | https://en.wikipedia.org/wiki/Pi3%20Orionis | {{DISPLAYTITLE:Pi3 Orionis}}
Pi3 Orionis (π3 Orionis, abbreviated Pi3 Ori, π3 Ori), also named Tabit , is a star in the equatorial constellation of Orion. At an apparent visual magnitude of 3.16, it is readily visible to the naked eye and is the brightest star in the lion's hide (or shield) that Orion is holding. As measured using the parallax technique, it is distant from the Sun.
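For reference, the annual-parallax relation assumed by the distance measurement mentioned above is, in its simplest form (the article's measured value is not reproduced here):

```latex
% Distance in parsecs from a parallax angle p measured in arcseconds.
d\,[\text{pc}] \approx \frac{1}{p\,[\text{arcsec}]}, \qquad 1\ \text{pc} \approx 3.26\ \text{light-years}.
```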
Nomenclature
π3 Orionis (Latinised to Pi3 Orionis) is the system's Bayer designation.
It bore the traditional name of 'Tabit', from the Arabic الثابت al-thābit 'the endurer (the fixed/constant one)'. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Tabit for this star on 5 September 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Banner of Three Stars, refers to an asterism consisting of π3 Orionis, ο1 Orionis, ο2 Orionis, 6 Orionis, π1 Orionis, π2 Orionis, π4 Orionis, π5 Orionis and π6 Orionis. Consequently, the Chinese name for Pi3 Orionis itself is (), "the Sixth Star of Banner of Three Stars".
According to Richard Hinckley Allen's Star Names – Their Lore and Meaning, this star, together with ο1 Orionis, ο2 Orionis, π1 Orionis, π2 Orionis, π4 Orionis, π5 Orionis, π6 Orionis and 6 Orionis (all of the 4th to the 5th magnitudes and in a vertical line), indicates the lion's skin; but Al Tizini said that they were the Persians' Al Tāj, "the Crown", or "Tiara", of their kings; and the Arabians' Al Kumm, "the Sleeve", of the garment in which they dressed the Giant, the skin being omitted. Ulugh Beg called them Al Dhawāib, "Anything Pendent"; and the Borgian globe had the same, perhaps having originated it. Al Sufi's title was Manica, a Latin term for a protecting gauntlet; and Grotius gave a lengthy dissertation on the Mantile, which some anonymous person applied to them, figured as a cloth thrown over the Giant's arm.
Properties
Pi3 Orionis is most likely single; a nearby star is probably an optical companion.
It is a main-sequence star of spectral type F6 V. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Compared to the Sun, it has about 129% of the mass, 132% of the radius, and nearly 3 times the luminosity. This energy is being radiated from the star's outer atmosphere at an effective temperature of , giving it the yellow-white glow of an F-type star.
Although a periodicity of 73.26 days has been observed in the star's radial velocity, it seems more likely to be due to stellar activity than to a planetary object in close orbit. No substellar companion has been detected so far around Tabit, and the McDonald Observatory team has set limits on the presence of one or more planets with masses between 0.84 and 46.7 Jupiter masses and average separations spanning between 0.05 and 5.2 astronomical units. Thus, so far it appears that planets could easily orbit in the habitable zone without any complications caused by a gravitationally perturbing body.
See also
List of star systems within 25–30 light-years
References
External links
Orion (constellation)
Tabit
Orionis, Pi3
Orionis, 01
F-type main-sequence stars
Suspected variables
033449
0178
1543
030652
BD+05 0762 | Pi3 Orionis | [
"Astronomy"
] | 841 | [
"Constellations",
"Orion (constellation)"
] |
1,595,626 | https://en.wikipedia.org/wiki/Interagency%20GPS%20Executive%20Board | The Interagency GPS Executive Board (IGEB) was an agency of the United States federal government that sought to integrate the needs and desires of various governmental agencies into formal Global Positioning System planning. GPS was administered by the Department of Defense, but had grown to service a wide variety of constituents. The majority of GPS uses are now non-military, so this board was fundamental in ensuring that the needs of non-military users were met.
In 2004, the IGEB was superseded by the National Executive Committee for Space-Based Positioning, Navigation and Timing (PNT), established by presidential order.
External links
From the IGEB Era at the PNT
Global Positioning System
Defunct agencies of the United States government | Interagency GPS Executive Board | [
"Technology",
"Engineering"
] | 143 | [
"Global Positioning System",
"Wireless locating",
"Aircraft instruments",
"Aerospace engineering"
] |
1,595,681 | https://en.wikipedia.org/wiki/Lie%20theory | In mathematics, the mathematician Sophus Lie ( ) initiated lines of study involving integration of differential equations, transformation groups, and contact of spheres that have come to be called Lie theory. For instance, the latter subject is Lie sphere geometry. This article addresses his approach to transformation groups, which is one of the areas of mathematics, and was worked out by Wilhelm Killing and Élie Cartan.
The foundation of Lie theory is the exponential map relating Lie algebras to Lie groups which is called the Lie group–Lie algebra correspondence. The subject is part of differential geometry since Lie groups are differentiable manifolds. Lie groups evolve out of the identity (1) and the tangent vectors to one-parameter subgroups generate the Lie algebra. The structure of a Lie group is implicit in its algebra, and the structure of the Lie algebra is expressed by root systems and root data.
Lie theory has been particularly useful in mathematical physics since it describes the standard transformation groups: the Galilean group, the Lorentz group, the Poincaré group and the conformal group of spacetime.
Elementary Lie theory
The one-parameter groups are the first instance of Lie theory. The compact case arises through Euler's formula in the complex plane as the circle group {exp(iθ) = cos θ + i sin θ : θ ∈ ℝ}. Other one-parameter groups occur in the split-complex number plane as the unit hyperbola {exp(jθ) = cosh θ + j sinh θ : θ ∈ ℝ},
and in the dual number plane as the line {exp(εθ) = 1 + εθ : θ ∈ ℝ}.
In these cases the Lie algebra parameters have names: angle, hyperbolic angle, and slope. These species of angle are useful for providing polar decompositions which describe sub-algebras of 2 x 2 real matrices.
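A minimal sketch of these three one-parameter groups, written as matrix exponentials of 2 × 2 generators (rotation, hyperbolic rotation, and shear, corresponding to angle, hyperbolic angle, and slope):

```latex
% The circle, unit hyperbola, and dual-number line realized as one-parameter
% groups exp(theta J) for three choices of the 2x2 generator J.
\exp\!\left(\theta\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}\right)
  = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix},
\quad
\exp\!\left(\theta\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}\right)
  = \begin{pmatrix}\cosh\theta & \sinh\theta\\ \sinh\theta & \cosh\theta\end{pmatrix},
\quad
\exp\!\left(\theta\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}\right)
  = \begin{pmatrix}1 & \theta\\ 0 & 1\end{pmatrix}.
```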
There is a classical 3-parameter Lie group and algebra pair: the quaternions of unit length which can be identified with the 3-sphere. Its Lie algebra is the subspace of quaternion vectors. Since the commutator ij − ji = 2k, the Lie bracket in this algebra is twice the cross product of ordinary vector analysis.
Another elementary 3-parameter example is given by the Heisenberg group and its Lie algebra.
Standard treatments of Lie theory often begin with the classical groups.
History and scope
Early expressions of Lie theory are found in books composed by Sophus Lie with Friedrich Engel and Georg Scheffers from 1888 to 1896.
In Lie's early work, the idea was to construct a theory of continuous groups, to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations.
According to historian Thomas W. Hawkins, it was Élie Cartan that made Lie theory what it is:
While Lie had many fertile ideas, Cartan was primarily responsible for the extensions and applications of his theory that have made it a basic component of modern mathematics. It was he who, with some help from Weyl, developed the seminal, essentially algebraic ideas of Killing into the theory of the structure and representation of semisimple Lie algebras that plays such a fundamental role in present-day Lie theory. And although Lie envisioned applications of his theory to geometry, it was Cartan who actually created them, for example through his theories of symmetric and generalized spaces, including all the attendant apparatus (moving frames, exterior differential forms, etc.)
Lie's three theorems
In his work on transformation groups, Sophus Lie proved three theorems relating the groups and algebras that bear his name. The first theorem exhibited the basis of an algebra through infinitesimal transformations. The second theorem exhibited structure constants of the algebra as the result of commutator products in the algebra. The third theorem showed these constants are anti-symmetric and satisfy the Jacobi identity. As Robert Gilmore wrote:
Lie's three theorems provide a mechanism for constructing the Lie algebra associated with any Lie group. They also characterize the properties of a Lie algebra. ¶ The converses of Lie’s three theorems do the opposite: they supply a mechanism for associating a Lie group with any finite dimensional Lie algebra ... Taylor's theorem allows for the construction of a canonical analytic structure function φ(β,α) from the Lie algebra. ¶ These seven theorems – the three theorems of Lie and their converses, and Taylor's theorem – provide an essential equivalence between Lie groups and algebras.
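As a compact restatement of what Lie's second and third theorems assert, in notation assumed here rather than taken from the sources: for a basis X₁, …, Xₙ of the algebra,

```latex
% Second theorem: commutators of basis elements define structure constants.
% Third theorem: the constants are antisymmetric and satisfy the Jacobi identity.
[X_i, X_j] = \sum_k c_{ij}^{\,k}\, X_k, \qquad
c_{ij}^{\,k} = -c_{ji}^{\,k}, \qquad
\sum_m \left( c_{ij}^{\,m} c_{mk}^{\,l} + c_{jk}^{\,m} c_{mi}^{\,l} + c_{ki}^{\,m} c_{mj}^{\,l} \right) = 0.
```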
Aspects of Lie theory
Lie theory is frequently built upon a study of the classical linear algebraic groups. Special branches include Weyl groups, Coxeter groups, and buildings. The classical subject has been extended to Groups of Lie type.
In 1900 David Hilbert challenged Lie theorists with his Fifth Problem presented at the International Congress of Mathematicians in Paris.
See also
Baker–Campbell–Hausdorff formula
Glossary of Lie groups and Lie algebras
List of Lie groups topics
Lie group integrator
Notes and references
John A. Coleman (1989) "The Greatest Mathematical Paper of All Time", The Mathematical Intelligencer 11(3): 29–38.
Further reading
M.A. Akivis & B.A. Rosenfeld (1993) Élie Cartan (1869–1951), translated from Russian original by V.V. Goldberg, chapter 2: Lie groups and Lie algebras, American Mathematical Society .
P. M. Cohn (1957) Lie Groups, Cambridge Tracts in Mathematical Physics.
J. L. Coolidge (1940) A History of Geometrical Methods, pp 304–17, Oxford University Press (Dover Publications 2003).
Robert Gilmore (2008) Lie groups, physics, and geometry: an introduction for physicists, engineers and chemists, Cambridge University Press .
F. Reese Harvey (1990) Spinors and calibrations, Academic Press, .
Heldermann Verlag Journal of Lie Theory
Differential equations
History of mathematics | Lie theory | [
"Mathematics"
] | 1,211 | [
"Lie groups",
"Mathematical structures",
"Mathematical objects",
"Differential equations",
"Equations",
"Algebraic structures"
] |
1,595,817 | https://en.wikipedia.org/wiki/Energy%20demand%20management | Energy demand management, also known as demand-side management (DSM) or demand-side response (DSR), is the modification of consumer demand for energy through various methods such as financial incentives and behavioral change through education.
Usually, the goal of demand-side management is to encourage the consumer to use less energy during peak hours, or to move the time of energy use to off-peak times such as nighttime and weekends. Peak demand management does not necessarily decrease total energy consumption, but could be expected to reduce the need for investments in networks and/or power plants for meeting peak demands. An example is the use of energy storage units to store energy during off-peak hours and discharge them during peak hours.
A newer application for DSM is to aid grid operators in balancing variable generation from wind and solar units, particularly when the timing and magnitude of energy demand does not coincide with the renewable generation. Generators brought on line during peak demand periods are often fossil fuel units. Minimizing their use reduces emissions of carbon dioxide and other pollutants.
The term DSM was coined in the wake of the 1973 and 1979 energy crises. Governments of many countries mandated various demand-management programs. An early example is the National Energy Conservation Policy Act of 1978 in the U.S., preceded by similar actions in California and Wisconsin. Demand-side management was introduced publicly by the Electric Power Research Institute (EPRI) in the 1980s. Today, DSM technologies have become increasingly feasible thanks to the integration of information and communications technology with the power system, giving rise to terms such as integrated demand-side management (IDSM) and the smart grid.
Operation
The American electric power industry originally relied heavily on foreign energy imports, whether in the form of consumable electricity or fossil fuels that were then used to produce electricity. During the time of the energy crises in the 1970s, the federal government passed the Public Utility Regulatory Policies Act (PURPA), hoping to reduce dependence on foreign oil and to promote energy efficiency and alternative energy sources. This act forced utilities to obtain the cheapest possible power from independent power producers, which in turn promoted renewables and encouraged the utility to reduce the amount of power they need, hence pushing forward agendas for energy efficiency and demand management.
Electricity use can vary dramatically on short and medium time frames, depending on current weather patterns. Generally the wholesale electricity system adjusts to changing demand by dispatching additional or less generation. However, during peak periods, the additional generation is usually supplied by less efficient ("peaking") sources. Unfortunately, the instantaneous financial and environmental cost of using these "peaking" sources is not necessarily reflected in the retail pricing system. In addition, the ability or willingness of electricity consumers to adjust to price signals by altering demand (elasticity of demand) may be low, particularly over short time frames. In many markets, consumers (particularly retail customers) do not face real-time pricing at all, but pay rates based on average annual costs or other constructed prices.
Energy demand management activities attempt to bring the electricity demand and supply closer to a perceived optimum, and help give electricity end users benefits for reducing their demand. In the modern system, the integrated approach to demand-side management is becoming increasingly common. IDSM automatically sends signals to end-use systems to shed load depending on system conditions. This allows for very precise tuning of demand to ensure that it matches supply at all times and reduces capital expenditures for the utility. Critical system conditions could be peak times or, in areas with high levels of variable renewable energy, times when demand must be adjusted upward to avoid over-generation or downward to help with ramping needs.
In general, adjustments to demand can occur in various ways: through responses to price signals, such as permanent differential rates for evening and day times or occasional highly priced usage days, behavioral changes achieved through home area networks, automated controls such as with remotely controlled air-conditioners, or with permanent load adjustments with energy efficient appliances.
Logical foundations
Demand for any commodity can be modified by actions of market players and government (regulation and taxation). Energy demand management implies actions that influence demand for energy. DSM was originally adopted in electricity, but today it is applied widely to utilities including water and gas as well.
Reducing energy demand is contrary to what both energy suppliers and governments have been doing during most of the modern industrial history. Whereas real prices of various energy forms have been decreasing during most of the industrial era, due to economies of scale and technology, the expectation for the future is the opposite. Previously, it was not unreasonable to promote energy use as more copious and cheaper energy sources could be anticipated in the future or the supplier had installed excess capacity that would be made more profitable by increased consumption.
In centrally planned economies subsidizing energy was one of the main economic development tools. Subsidies to the energy supply industry are still common in some countries.
Contrary to the historical situation, energy prices and availability are expected to deteriorate. Governments and other public actors, if not the energy suppliers themselves, are tending to employ energy demand measures that will increase the efficiency of energy consumption.
Types
Energy efficiency: Using less power to perform the same tasks. This involves a permanent reduction of demand by using more efficient load-intensive appliances such as water heaters, refrigerators, or washing machines.
Demand response: Any reactive or preventative method to reduce, flatten or shift demand. Historically, demand response programs have focused on peak reduction to defer the high cost of constructing generation capacity. However, demand response programs are now being looked to assist with changing the net load shape as well, load minus solar and wind generation, to help with integration of variable renewable energy. Demand response includes all intentional modifications to consumption patterns of electricity of end user customers that are intended to alter the timing, level of instantaneous demand, or the total electricity consumption. Demand response refers to a wide range of actions which can be taken at the customer side of the electricity meter in response to particular conditions within the electricity system (such as peak period network congestion or high prices), including the aforementioned IDSM.
Dynamic demand: Advance or delay appliance operating cycles by a few seconds to increase the diversity factor of the set of loads. The concept is that by monitoring the power factor of the power grid, as well as their own control parameters, individual, intermittent loads would switch on or off at optimal moments to balance the overall system load with generation, reducing critical power mismatches. As this switching would only advance or delay the appliance operating cycle by a few seconds, it would be unnoticeable to the end user. In the United States, in 1982, a (now-lapsed) patent for this idea was issued to power systems engineer Fred Schweppe. This type of dynamic demand control is frequently used for air-conditioners. One example of this is through the SmartAC program in California.
Distributed energy resources: Distributed generation, also distributed energy, on-site generation (OSG) or district/decentralized energy is electrical generation and storage performed by a variety of small, grid-connected devices referred to as distributed energy resources (DER). Conventional power stations, such as coal-fired, gas and nuclear powered plants, as well as hydroelectric dams and large-scale solar power stations, are centralized and often require electric energy to be transmitted over long distances. By contrast, DER systems are decentralized, modular and more flexible technologies, that are located close to the load they serve, albeit having capacities of only 10 megawatts (MW) or less. These systems can comprise multiple generation and storage components; in this instance they are referred to as hybrid power systems. DER systems typically use renewable energy sources, including small hydro, biomass, biogas, solar power, wind power, and geothermal power, and increasingly play an important role for the electric power distribution system. A grid-connected device for electricity storage can also be classified as a DER system, and is often called a distributed energy storage system (DESS). By means of an interface, DER systems can be managed and coordinated within a smart grid. Distributed generation and storage enables collection of energy from many sources and may lower environmental impacts and improve security of supply.
Scale
Broadly, demand side management can be classified into four categories: national scale, utility scale, community scale, and individual household scale.
National scale
Energy efficiency improvement is one of the most important demand side management strategies. Efficiency improvements can be implemented nationally through legislation and standards in housing, building, appliances, transport, machines, etc.
Utility scale
During peak demand times, utilities are able to control storage water heaters, pool pumps and air conditioners across large areas to reduce peak demand, e.g. in Australia and Switzerland. One of the common technologies is ripple control: a higher-frequency signal (e.g. 1000 Hz) is superimposed on the normal mains supply (50 or 60 Hz) to switch devices on or off.
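A minimal, hypothetical sketch of the load-control decision only (the 1000 Hz signalling layer is not modelled); the device names, power ratings, threshold and demand readings below are invented for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DeferrableLoad:
    name: str
    kw: float
    on: bool = True

def control_step(system_demand_mw: float, peak_threshold_mw: float,
                 loads: List[DeferrableLoad]) -> None:
    # Shed deferrable loads during the peak, restore them once demand falls back.
    shed = system_demand_mw > peak_threshold_mw
    for load in loads:
        load.on = not shed

loads = [DeferrableLoad("storage water heater", 3.6),
         DeferrableLoad("pool pump", 1.1)]
for demand in (4200.0, 5100.0, 4700.0):   # assumed system demand readings, MW
    control_step(demand, peak_threshold_mw=5000.0, loads=loads)
    print(demand, [(load.name, load.on) for load in loads])
```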
In more service-based economies, such as Australia, electricity network peak demand often occurs in the late afternoon to early evening (4pm to 8pm). Residential and commercial demand is the most significant part of these types of peak demand. Therefore, it makes great sense for utilities (electricity network distributors) to manage residential storage water heaters, pool pumps, and air conditioners.
Community scale
Other names can be neighborhood, precinct, or district. Community central-heating systems have existed for many decades in regions with cold winters. Similarly, peak demand in summer-peaking regions needs to be managed, e.g. in Texas and Florida in the U.S., and Queensland and New South Wales in Australia. Demand side management can be implemented at the community scale to reduce peak demand for heating or cooling. Another aspect is to achieve net zero-energy buildings or communities.
Managing energy, peak demand and bills in community level may be more feasible and viable, because of the collective purchasing power, the bargaining power, more options in energy efficiency or storage, more flexibility and diversity in generating and consuming energy at different times, e.g. using PV to compensate day time consumption or for energy storage.
Household scale
In areas of Australia, more than 30% (2016) of households have rooftop photo-voltaic systems. It is useful for them to use free energy from the sun to reduce energy import from the grid. Further, demand side management can be helpful when a systematic approach is considered: the operation of photovoltaic, air conditioner, battery energy storage systems, storage water heaters, building performance and energy efficiency measures.
Examples
Queensland, Australia
The utility companies in the state of Queensland, Australia have devices fitted onto certain household appliances such as air conditioners or into household meters to control water heater, pool pumps etc. These devices would allow energy companies to remotely cycle the use of these items during peak hours. Their plan also includes improving the efficiency of energy-using items and giving financial incentives to consumers who use electricity during off-peak hours, when it is less expensive for energy companies to produce.
Another example is that, with demand side management, Southeast Queensland households can use electricity from rooftop photo-voltaic systems to heat water.
Toronto, Canada
In 2008, Toronto Hydro, the monopoly energy distributor of Ontario, had over 40,000 people signed up to have remote devices attached to air conditioners which energy companies use to offset spikes in demand. Spokeswoman Tanya Bruckmueller says that this program can reduce demand by 40 megawatts during emergency situations.
Indiana, US
The Alcoa Warrick Operation is participating in MISO as a qualified demand response resource, which means it is providing demand response in terms of energy, spinning reserve, and regulation service.
Brazil
Demand-side management can apply to electricity system based on thermal power plants or to systems where renewable energy, as hydroelectricity, is predominant but with a complementary thermal generation, for instance, in Brazil.
In Brazil's case, although hydroelectric generation corresponds to more than 80% of the total, to achieve a practical balance in the generation system the energy generated by hydroelectric plants supplies the consumption below the peak demand. Peak generation is supplied by the use of fossil-fuel power plants. In 2008, Brazilian consumers paid more than US$1 billion for complementary thermoelectric generation not previously programmed.
In Brazil, the consumer pays for all the investment to provide energy, even if a plant sits idle. For most fossil-fuel thermal plants, the consumers pay for the "fuels" and other operation costs only when these plants generate energy. The energy, per unit generated, is more expensive from thermal plants than from hydroelectric. Only a few of the Brazilian's thermoelectric plants use natural gas, so they pollute significantly more than hydroelectric plants. The power generated to meet the peak demand has higher costs—both investment and operating costs—and the pollution has a significant environmental cost and potentially, financial and social liability for its use. Thus, the expansion and the operation of the current system is not as efficient as it could be using demand side management. The consequence of this inefficiency is an increase in energy tariffs that is passed on to the consumers.
Moreover, because electric energy is generated and consumed almost instantaneously, all the facilities, such as transmission lines and distribution networks, are built for peak consumption. During non-peak periods their full capacity is not utilized.
The reduction of peak consumption can benefit the efficiency of electric systems, like the Brazilian system, in various ways: deferring new investments in distribution and transmission networks, and reducing the need for complementary thermal power operation during peak periods, which can diminish both the payment for investment in new power plants that supply only the peak period and the environmental impact associated with greenhouse gas emissions.
Issues
Some people argue that demand-side management has been ineffective because it has often resulted in higher utility costs for consumers and less profit for utilities.
One of the main goals of demand side management is to be able to charge the consumer based on the true price of the utilities at that time. If consumers could be charged less for using electricity during off-peak hours, and more during peak hours, then supply and demand would theoretically encourage the consumer to use less electricity during peak hours, thus achieving the main goal of demand side management.
See also
Alternative fuel
Battery-to-grid
Dynamic demand (electric power)
Demand response
Duck curve
Energy conservation
Energy intensity
Energy storage as a service (ESaaS)
Grid energy storage
GridLAB-D
List of energy storage projects
Load profile
Load management
Time of Use
Notes
References
Works cited
External links
Demand-Side Management Programme IEA
Energy subsidies in the European Union: A brief overview
Managing Energy Demand seminar Bern, nov 4 2009
UK Demand Side Response
Market failure
Electric power distribution
Energy economics
Demand management | Energy demand management | [
"Environmental_science"
] | 3,002 | [
"Energy economics",
"Environmental social science"
] |
1,595,873 | https://en.wikipedia.org/wiki/Cantor%E2%80%93Dedekind%20axiom | In mathematical logic, the Cantor–Dedekind axiom is the thesis that the real numbers are order-isomorphic to the linear continuum of geometry. In other words, the axiom states that there is a one-to-one correspondence between real numbers and points on a line.
This axiom became a theorem proved by Emil Artin in his book Geometric Algebra. More precisely, Euclidean spaces defined over the field of real numbers satisfy the axioms of Euclidean geometry, and, from the axioms of Euclidean geometry, one can construct a field that is isomorphic to the real numbers.
Analytic geometry was developed from the Cartesian coordinate system introduced by René Descartes. It implicitly assumed this axiom by blending the distinct concepts of real numbers and points on a line, sometimes referred to as the real number line. Artin's proof not only makes this blending explicit, but also shows that analytic geometry is strictly equivalent to traditional synthetic geometry, in the sense that exactly the same theorems can be proved in the two frameworks.
Another consequence is that Alfred Tarski's proof of the decidability of first-order theories of the real numbers could be seen as an algorithm to solve any first-order problem in Euclidean geometry.
See also
Cantor's theorem
References
Ehrlich, P. (1994). "General introduction". Real Numbers, Generalizations of the Reals, and Theories of Continua, vi–xxxii. Edited by P. Ehrlich, Kluwer Academic Publishers, Dordrecht
Bruce E. Meserve (1953)
B.E. Meserve (1955)
Real numbers
Mathematical axioms | Cantor–Dedekind axiom | [
"Mathematics"
] | 340 | [
"Real numbers",
"Mathematical logic",
"Mathematical objects",
"Mathematical axioms",
"Mathematical logic stubs",
"Numbers"
] |
1,595,922 | https://en.wikipedia.org/wiki/Photosensitivity | Photosensitivity is the extent to which an object reacts upon receiving photons, especially visible light. In medicine, the term is principally used for abnormal reactions of the skin, and two types are distinguished, photoallergy and phototoxicity. The photosensitive ganglion cells in the mammalian eye are a separate class of light-detecting cells from the photoreceptor cells that function in vision.
Skin reactions
Human medicine
Sensitivity of the skin to a light source can take various forms. People with particular skin types are more sensitive to sunburn. Particular medications make the skin more sensitive to sunlight; these include most of the tetracycline antibiotics, the heart drug amiodarone, and sulfonamides.
Some dietary supplements, such as St. John's Wort, include photosensitivity as a possible side effect.
Particular conditions lead to increased light sensitivity. Patients with systemic lupus erythematosus experience skin symptoms after sunlight exposure; some types of porphyria are aggravated by sunlight. A rare hereditary condition xeroderma pigmentosum (a defect in DNA repair) is thought to increase the risk of UV-light-exposure-related cancer by increasing photosensitivity.
Veterinary medicine
Photosensitivity occurs in multiple species including sheep, bovine, and horses. They are classified as primary if an ingested plant contains a photosensitive substance, like hypericin in St John's wort poisoning and ingestion of biserrula (Biserrula pelecinus) in sheep, or buckwheat plants (green or dried) in horses.
In hepatogenous photosensitization, the photosensitizing substance is phylloerythrin, a normal end-product of chlorophyll metabolism. It accumulates in the body because of liver damage, reacts with UV light on the skin, and leads to free radical formation. These free radicals damage the skin, leading to ulceration, necrosis, and sloughing. Non-pigmented skin is most commonly affected.
See also
Digital camera ISO
Bergaptene
Heliotropism
Photophobia
Solar urticaria
Snow blindness
Photosensitizer
Notes
External links
Sensor sensitivity (ISO) in digital cameras
Skin physiology
Clinical pharmacology | Photosensitivity | [
"Chemistry"
] | 473 | [
"Pharmacology",
"Clinical pharmacology"
] |
1,596,063 | https://en.wikipedia.org/wiki/Mollifier | In mathematics, mollifiers (also known as approximations to the identity) are particular smooth functions, used for example in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution. Intuitively, given a (generalized) function, convolving it with a mollifier "mollifies" it, that is, its sharp features are smoothed, while still remaining close to the original.
They are also known as Friedrichs mollifiers after Kurt Otto Friedrichs, who introduced them.
Historical notes
Mollifiers were introduced by Kurt Otto Friedrichs in his paper , which is considered a watershed in the modern theory of partial differential equations. The name of this mathematical object has a curious genesis, and Peter Lax tells the story in his commentary on that paper published in Friedrichs' "Selecta". According to him, at that time, the mathematician Donald Alexander Flanders was a colleague of Friedrichs; since he liked to consult colleagues about English usage, he asked Flanders for advice on naming the smoothing operator he was using. Flanders was a modern-day puritan, nicknamed by his friends Moll after Moll Flanders in recognition of his moral qualities: he suggested calling the new mathematical concept a "mollifier" as a pun incorporating both Flanders' nickname and the verb 'to mollify', meaning 'to smooth over' in a figurative sense.
Previously, Sergei Sobolev had used mollifiers in his epoch-making 1938 paper, which contains the proof of the Sobolev embedding theorem: Friedrichs himself acknowledged Sobolev's work on mollifiers, stating "These mollifiers were introduced by Sobolev and the author...".
It must be pointed out that the term "mollifier" has undergone linguistic drift since the time of these foundational works: Friedrichs defined as "mollifier" the integral operator whose kernel is one of the functions nowadays called mollifiers. However, since the properties of a linear integral operator are completely determined by its kernel, the name mollifier was inherited by the kernel itself as a result of common usage.
Definition
Modern (distribution based) definition
Let φ be a smooth function on ℝⁿ, n ≥ 1, and put φ_ε(x) = ε⁻ⁿ φ(x/ε) for ε > 0.
Then φ is a mollifier if it satisfies the following three requirements:
it is compactly supported,
∫_{ℝⁿ} φ(x) dx = 1,
lim_{ε→0} φ_ε(x) = δ(x),
where δ(x) is the Dirac delta function, and the limit must be understood as taking place in the space of Schwartz distributions. The function φ may also satisfy further conditions of interest; for example, if it satisfies
φ(x) ≥ 0 for all x ∈ ℝⁿ,
then it is called a positive mollifier, and if it satisfies
φ(x) = μ(|x|) for some infinitely differentiable function μ,
then it is called a symmetric mollifier.
Notes on Friedrichs' definition
Note 1. When the theory of distributions was still not widely known nor used, the third property above was formulated by saying that the convolution of the function φ_ε with a given function belonging to a proper Hilbert or Banach space converges as ε → 0 to that function: this is exactly what Friedrichs did. This also clarifies why mollifiers are related to approximate identities.
Note 2. As briefly pointed out in the "Historical notes" section of this entry, originally, the term "mollifier" identified the following convolution operator (see paragraph 2, "Integral operators", of Friedrichs' paper):
(Φ_ε f)(x) = ∫ φ_ε(x − y) f(y) dy,
where φ_ε(x) = ε⁻ⁿ φ(x/ε) and φ is a smooth function satisfying the first three conditions stated above and one or more supplementary conditions such as positivity and symmetry.
Concrete example
Consider the bump function φ(x) of a variable x in ℝⁿ defined by
φ(x) = C e^{1/(|x|² − 1)} if |x| < 1, and φ(x) = 0 if |x| ≥ 1,
where the numerical constant C ensures normalization. This function is infinitely differentiable and non-analytic, with all derivatives vanishing at |x| = 1. φ can therefore be used as a mollifier as described above: one can see that φ(x) defines a positive and symmetric mollifier.
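A small numerical sketch of mollification in one dimension, using the bump function above: a discontinuous step function is convolved with the scaled kernel, which smooths the jump while leaving the function essentially unchanged away from it. The grid spacing and ε are arbitrary illustrative choices.

```python
import numpy as np

def bump(x):
    # Standard mollifier kernel, up to normalization: exp(1/(x**2 - 1)) on |x| < 1.
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 / (x[inside] ** 2 - 1.0))
    return out

dx = 0.01
x = np.arange(-3.0, 3.0, dx)
eps = 0.25
kernel = bump(x / eps) / eps
kernel /= kernel.sum() * dx                           # normalize so the kernel integrates to 1

f = (x > 0).astype(float)                             # Heaviside step: not even continuous
f_smooth = np.convolve(f, kernel, mode="same") * dx   # discrete analogue of f * phi_eps

i_neg, i_pos = np.searchsorted(x, -2.0), np.searchsorted(x, 2.0)
print(round(float(f_smooth[i_neg]), 3), round(float(f_smooth[i_pos]), 3))  # ~0.0 and ~1.0 away from the jump
```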
Properties
All properties of a mollifier are related to its behaviour under the operation of convolution: we list the following ones, whose proofs can be found in every text on distribution theory.
Smoothing property
For any distribution T, the following family of convolutions indexed by the real number ε > 0,
T_ε = T ∗ φ_ε,
where ∗ denotes convolution, is a family of smooth functions.
Approximation of identity
For any distribution T, the family of convolutions T_ε = T ∗ φ_ε indexed by the real number ε > 0 converges to T as ε → 0.
Support of convolution
For any distribution T,
supp(T ∗ φ_ε) ⊆ supp(φ_ε) + supp(T),
where supp indicates the support in the sense of distributions, and + indicates their Minkowski addition.
Applications
The basic application of mollifiers is to prove that properties valid for smooth functions are also valid in nonsmooth situations.
Product of distributions
In some theories of generalized functions, mollifiers are used to define the multiplication of distributions. Given two distributions S and T, the limit of the product of the smooth function obtained from one operand via mollification with the other operand defines, when it exists, their product in various theories of generalized functions:
S · T := lim_{ε→0} (S ∗ φ_ε) · T.
"Weak=Strong" theorems
Mollifiers are used to prove the identity of two different kind of extension of differential operators: the strong extension and the weak extension. The paper by Friedrichs which introduces mollifiers illustrates this approach.
Smooth cutoff functions
By convolution of the characteristic function of the unit ball B₁ = {x : |x| < 1} with the smooth function φ_{1/2} (defined as above with ε = 1/2), one obtains the function
χ_{B₁} ∗ φ_{1/2},
which is a smooth function equal to 1 on B_{1/2} = {x : |x| < 1/2}, with support contained in B_{3/2} = {x : |x| < 3/2}. This can be seen easily by observing that if |x| ≤ 1/2 and |y| ≤ 1/2 then |x − y| ≤ 1. Hence for |x| ≤ 1/2,
(χ_{B₁} ∗ φ_{1/2})(x) = ∫ χ_{B₁}(x − y) φ_{1/2}(y) dy = ∫ φ_{1/2}(y) dy = 1.
One can see how this construction can be generalized to obtain a smooth function identical to one on a neighbourhood of a given compact set, and equal to zero in every point whose distance from this set is greater than a given . Such a function is called a (smooth) cutoff function; these are used to eliminate singularities of a given (generalized) function via multiplication. They leave unchanged the value of the multiplicand on a given set, but modify its support. Cutoff functions are used to construct smooth partitions of unity.
See also
Approximate identity
Bump function
Convolution
Distribution (mathematics)
Generalized function
Kurt Otto Friedrichs
Non-analytic smooth function
Sergei Sobolev
Weierstrass transform
Notes
References
. The first paper where mollifiers were introduced.
. A paper where the differentiability of solutions of elliptic partial differential equations is investigated by using mollifiers.
. A selection from Friedrichs' works with a biography and commentaries of David Isaacson, Fritz John, Tosio Kato, Peter Lax, Louis Nirenberg, Wolfgag Wasow, Harold Weitzner.
. The paper where Sergei Sobolev proved his embedding theorem, introducing and using integral operators very similar to mollifiers, without naming them.
Functional analysis
Smooth functions
Schwartz distributions | Mollifier | [
"Mathematics"
] | 1,396 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
1,596,114 | https://en.wikipedia.org/wiki/Complement%20membrane%20attack%20complex | The membrane attack complex (MAC) or terminal complement complex (TCC) is a complex of proteins typically formed on the surface of pathogen cell membranes as a result of the activation of the host's complement system, and as such is an effector of the immune system. Antibody-mediated complement activation leads to MAC deposition on the surface of infected cells. Assembly of the MAC leads to pores that disrupt the cell membrane of target cells, leading to cell lysis and death.
The MAC is composed of the complement components C5b, C6, C7, C8 and several C9 molecules.
A number of proteins participate in the assembly of the MAC. Freshly activated C5b binds to C6 to form a C5b-6 complex, then to C7 forming the C5b-6-7 complex. The C5b-6-7 complex binds to C8, which is composed of three chains (alpha, beta, and gamma), thus forming the C5b-6-7-8 complex. C5b-6-7-8 subsequently binds to C9 and acts as a catalyst in the polymerization of C9.
Structure and function
MAC is composed of a complex of four complement proteins (C5b, C6, C7, and C8) that bind to the outer surface of the plasma membrane, and many copies of a fifth protein (C9) that hook up to one another, forming a ring in the membrane. C6-C9 all contain a common MACPF domain. This region is homologous to cholesterol-dependent cytolysins from Gram-positive bacteria.
The ring structure formed by C9 is a pore in the membrane that allows free diffusion of molecules in and out of the cell. If enough pores form, the cell is no longer able to survive.
If the pre-MAC complexes of C5b-7, C5b-8 or C5b-9 do not insert into a membrane, they can form inactive complexes with Protein S (sC5b-7, sC5b-8 and sC5b-9). These fluid phase complexes do not bind to cell membranes and are ultimately scavenged by clusterin and vitronectin, two regulators of complement.
Initiation: C5-C7
The membrane attack complex is initiated when the complement protein C5 convertase cleaves C5 into C5a and C5b. All three pathways of the complement system (classical, lectin and alternative pathways) initiate the formation of MAC.
Another complement protein, C6, binds to C5b.
The C5bC6 complex is bound by C7.
This junction alters the configuration of the protein molecules exposing a hydrophobic site on C7 that allows the C7 to insert into the phospholipid bilayer of the pathogen.
Polymerization: C8-C9
Similar hydrophobic sites on C8 and C9 molecules are exposed when they bind to the complex, so they can also insert into the bilayer.
C8 is a complex made of the two proteins C8-beta and C8 alpha-gamma.
C8 alpha-gamma has the hydrophobic area that inserts into the bilayer. C8 alpha-gamma induces the polymerization of 10-16 molecules of C9 into a pore-forming structure known as the membrane attack complex.
MAC has a hydrophobic external face allowing it to associate with the lipid bilayer.
MAC has a hydrophilic internal face to allow the passage of water.
Multiple molecules of C9 can join spontaneously in concentrated solution to form polymers of C9. These polymers can also form a tube-like structure.
Inhibition
CD59 acts to inhibit the complex. This exists on body cells to protect them from MAC.
A rare condition, paroxysmal nocturnal haemoglobinuria, results in red blood cells that lack CD59. These cells can, therefore, be lysed by MAC. Inhibition of MAC has been shown to reduce inflammation and neuroaxonal loss 72 hours after a traumatic brain injury (TBI), potentially preventing neurological damage, especially in cases with acquired sepsis or respiratory failure.
Pathology
Deficiencies of C5 to C9 components do not lead to a generalized susceptibility to infections but only to an increased susceptibility to Neisseria infections, since Neisseria have a thin cell wall and little to no glycocalyx.
See also
Terminal complement pathway deficiency
Paroxysmal nocturnal haemoglobinuria
Perforin
Pore-forming toxin
References
External links
Complement system
Immune system
Immunology | Complement membrane attack complex | [
"Biology"
] | 958 | [
"Immune system",
"Organ systems",
"Immunology"
] |
1,596,177 | https://en.wikipedia.org/wiki/Aetites | In the magical tradition of Europe and the Near East (see: Magic in the Greco-Roman world), the aetites (singular in Latin) or aetite (anglicized) is a stone used to promote childbirth. It is also called an eagle-stone, aquiline, or aquilaeus. The stone is said to prevent spontaneous abortion and premature delivery, while shortening labor and birth for a full-term birth.
From Theophrastus onwards, the belief is also recorded that the stone had the ability to "give birth" to other stones, based on the crystals found within. This fed into the belief that at least some minerals could be gendered into male and female forms.
Mineralogy
The aetites is a limonite or siderite concretionary nodule or geode possessing inside a small loose stone that rattles when shaken. An official publication of the United States Bureau of Mines in 1920 gave a definition of the aetite along the same lines.
The American Geosciences Institute defines the eaglestone as "a concretionary nodule of clay ironstone about the size of a walnut that the ancients believed an eagle takes to her nest to facilitate egg-laying."
Ancient medicine
According to Pedanius Dioscorides (5.160), the aetite should be fastened to the left arm to protect the fetus; at the time of birth, it should be moved to the hip area to ease delivery. He also recommends them for the treatment of epilepsy, and says that when mixed with meat they will "betray a thief".
Pliny the Elder describes four types of aetites in his Natural History and outlines their magico-medical use:
Pliny says that the stone is found in the nests of eagles, who cannot propagate without them.
The fourth-century magico-medical text Cyranides also claims that the aetite worn as an amulet can prevent miscarriage caused by female demons such as Gello.
Jewish medical practice
Jewish women used birthing stones, and the Talmud refers to the "preserving stone," worn as an amulet even during Shabbat to prevent miscarriage. Although medieval sources point to the eagle-stone, the identification is not certain. Rabbis in medieval France and Germany, and a Polish talmudist in the 16th century, describe the stone as hollow, with a smaller stone inside: "the stone within a stone represented a fetus in the womb." One medieval French source says that the stone "is pierced through the middle, and is round, about as large and heavy as a medium sized egg, glassy in appearance, and is to be found in the fields."
Medicine to 1700
The aetite, to be carried by pregnant women on their right side, is mentioned by Ruberto Bernardi in his 1364 book of popular medical lore. The Italian Renaissance philosopher Marsilio Ficino ascribes the aetite's ability to ease childbirth to the astrological influences of the planet Venus and the Moon. In 1494, Isabella d'Este, the marchioness of Mantua, expressed her confidence in the power of these stones.
The aetite appears in a Spanish work on natural magic by Hernando Castrillo, first published in 1636. Alvaro Alonso Barba's work on metallurgy (Madrid, 1640) touts the efficacy of the aetites, advising that the stone be tied to the left arm to prevent spontaneous abortion, and to the right arm for the opposite effect. The work was widely reviewed, reprinted and translated.
The 1660 book Occult Physick said the aetite
Aetite, along with hematite, was the subject of a 1665 book by J.L. Bausch, municipal physician () of Schweinfurt and founder of the German National Academy of Sciences Leopoldina. Bausch, however, cautions that empty promises of the stone's powers exceed the limits of both medicine and nature. Thomas Browne affirmed the stone's application to obstetrics in his Pseudodoxia Epidemica (1672), but doubted the story about eagles.
The stones were expensive; in Scotland, Anna Balfour included her stone as a bequest in a will, and English women borrowed and shared these stones to use as amulets in pregnancy.
Selected bibliography
Harris, Nichola Erin, The idea of lapidary medicine, 2009, Rutgers University, Ph.D. dissertation (book forthcoming), available online as PDF
Stol, Marten. Birth in Babylonia and the Bible. Styx Publications, 2000. Limited preview online.
Thorndike, Lynn. A History of Magic and Experimental Science.
References
European folklore
Traditional medicine
History of ancient medicine
Mineralogy
Magic (supernatural)
Mythological substances
Mythological medicines and drugs | Aetites | [
"Physics",
"Chemistry"
] | 973 | [
"Magic items",
"Mythological substances",
"Physical objects",
"Matter"
] |
1,596,221 | https://en.wikipedia.org/wiki/Chemical%20test | In chemistry, a chemical test is a qualitative or quantitative procedure designed to identify, quantify, or characterise a chemical compound or chemical group.
Purposes
Chemical testing might have a variety of purposes, such as to:
Determine if, or verify that, the requirements of a specification, regulation, or contract are met
Decide if a new product development program is on track: Demonstrate proof of concept
Demonstrate the utility of a proposed patent
Determine the interactions of a sample with other known substances
Determine the composition of a sample
Provide standard data for other scientific, medical, and Quality assurance functions
Validate suitability for end-use
Provide a basis for Technical communication
Provide a technical means of comparison of several options
Provide evidence in legal proceedings
Biochemical tests
Clinistrips quantitatively test for sugar in urine
The Kastle-Meyer test tests for the presence of hemoglobin
Salicylate testing is a category of drug testing that is focused on detecting salicylates such as acetylsalicylic acid for either biochemical or medical purposes.
The Phadebas test tests for the presence of saliva for forensic purposes
Iodine solution tests for starch
The Van Slyke determination tests for specific amino acids
The Zimmermann test tests for ketosteroids
Seliwanoff's test differentiates between aldose and ketose sugars
Test for lipids: add ethanol to sample, then shake; add water to the solution, and shake again. If fat is present, the product turns milky white.
The Sakaguchi test detects the presence of arginine in protein
The Hopkins–Cole reaction tests for the presence of tryptophan in proteins
The nitroprusside reaction tests for the presence of free thiol groups of cysteine in proteins
The Sullivan reaction tests for the presence of cysteine and cystine in proteins
The Acree–Rosenheim reaction tests for the presence of tryptophan in proteins
The Pauly reaction tests for the presence of tyrosine or histidine in proteins
Heller's test tests for the presence of albumin in urine
Gmelin's test tests for the presence of bile pigments in urine
Hay's test tests for the presence of bile pigments in urine
Reducing sugars
Barfoed's test tests for reducing polysacchorides or disaccharides
Benedict's reagent tests for reducing sugars or aldehydes
Fehling's solution tests for reducing sugars or aldehydes, similar to Benedict's reagent
Molisch's test tests for carbohydrates
Nylander's test tests for reducing sugars
Rapid furfural test distinguishes between glucose and fructose
Proteins and polypeptides
The bicinchoninic acid assay tests for proteins
The Biuret test tests for proteins and polypeptides
Bradford protein assay measures protein quantitatively
The Phadebas amylase test determines alpha-amylase activity
Organic tests
The carbylamine reaction tests for primary amines
The esterification reaction tests for the presence of alcohol and/or carboxylic acids
The Griess test tests for organic nitrite compounds
The 2,4-dinitrophenylhydrazine tests for carbonyl compounds
The iodoform reaction tests for the presence of methyl ketones, or compounds which can be oxidized to methyl ketones
The Schiff test detects aldehydes
Tollens' reagent tests for aldehydes (known as the silver mirror test)
The Zeisel determination tests for the presence of esters or ethers
Lucas' reagent is used to distinguish between primary, secondary and tertiary alcohols.
The bromine test is used to test for the presence of unsaturation and phenols.
Inorganic tests
Barium chloride tests for sulfates
Acidified silver nitrate solution tests for halide ions
The Beilstein test tests for halides qualitatively
The bead test tests for certain metals
The Carius halogen method measures halides quantitatively.
Chemical tests for cyanide test for the presence of cyanide, CN−
Copper sulfate tests for the presence of water
Flame tests test for metals
The Gilman test tests for the presence of a Grignard reagent
The Kjeldahl method quantitatively determines the presence of nitrogen
Nessler's reagent tests for the presence of ammonia
Ninhydrin tests for ammonia or primary amines
Phosphate tests test for phosphate
The sodium fusion test tests for the presence of nitrogen, sulfur, and halides in a sample
The Zerewitinoff determination tests for any acidic hydrogen
The Oddy test tests for acid, aldehydes, and sulfides
Gunzberg's test tests for the presence of hydrochloric acid
Kelling's test tests for the presence of lactic acid
See also
Independent test organization
Medical test
Test method
References
Analytical chemistry
Measurement | Chemical test | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,004 | [
"Physical quantities",
"Quantity",
"Chemical tests",
"Measurement",
"Size",
"nan"
] |
1,596,317 | https://en.wikipedia.org/wiki/Habitat | In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate.
The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, habitat generalist species are able to thrive in a wide array of environmental conditions while habitat specialist species require a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body.
Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents.
Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur more gradually over millennia with alterations in the climate, as ice sheets and glaciers advance and retreat, and as different weather patterns bring changes of precipitation and solar radiation. Other changes come as a direct result of human activities, such as deforestation, the plowing of ancient grasslands, the diversion and damming of rivers, the draining of marshland and the dredging of the seabed. The introduction of alien species can have a devastating effect on native wildlife – through increased predation, through competition for resources or through the introduction of pests and diseases to which the indigenous species have no immunity.
Definition and etymology
The word "habitat" has been in use since about 1755 and derives from the Latin habitāre, to inhabit, from habēre, to have or to hold. Habitat can be defined as the natural environment of an organism, the type of place in which it is natural for it to live and grow. It is similar in meaning to a biotope; an area of uniform environmental conditions associated with a particular community of plants and animals.
Environmental factors
The chief environmental factors affecting the distribution of living organisms are temperature, humidity, climate, soil and light intensity, and the presence or absence of all the requirements that the organism needs to sustain it. Generally speaking, animal communities are reliant on specific types of plant communities.
Some plants and animals have habitat requirements which are met in a wide range of locations. The small white butterfly Pieris rapae for example is found on all the continents of the world apart from Antarctica. Its larvae feed on a wide range of Brassicas and various other plant species, and it thrives in any open location with diverse plant associations. The large blue butterfly Phengaris arion is much more specific in its requirements; it is found only in chalk grassland areas, its larvae feed on Thymus species, and because of complex life cycle requirements it inhabits only areas in which Myrmica ants live.
Disturbance is important in the creation of biodiverse habitat types. In the absence of disturbance, a climax vegetation cover develops that prevents the establishment of other species. Wildflower meadows are sometimes created by conservationists but most of the flowering plants used are either annuals or biennials and disappear after a few years in the absence of patches of bare ground on which their seedlings can grow. Lightning strikes and toppled trees in tropical forests allow species richness to be maintained as pioneering species move in to fill the gaps created. Similarly, coastal habitat types can become dominated by kelp until the seabed is disturbed by a storm and the algae swept away, or shifting sediment exposes new areas for colonisation. Another cause of disturbance is when an area may be overwhelmed by an invasive introduced species which is not kept under control by natural enemies in its new habitat.
Types
Terrestrial
Terrestrial habitat types include forests, grasslands, wetlands and deserts. Within these broad biomes are more specific habitat types with varying climate types, temperature regimes, soils, altitudes and vegetation. Many of these habitat types grade into each other and each one has its own typical communities of plants and animals. A habitat-type may suit a particular species well, but its presence or absence at any particular location depends to some extent on chance, on its dispersal abilities and its efficiency as a colonizer.
Arid
Arid habitats are those where there is little available water. The most extreme arid habitats are deserts. Desert animals have a variety of adaptations to survive the dry conditions. Some frogs live in deserts, creating moist habitat types underground and hibernating while conditions are adverse. Couch's spadefoot toad (Scaphiopus couchii) emerges from its burrow when a downpour occurs and lays its eggs in the transient pools that form; the tadpoles develop with great rapidity, sometimes in as little as nine days, undergo metamorphosis, and feed voraciously before digging a burrow of their own.
List of arid habitat types
Desert
Fog desert
Polar desert
Steppe
Savanna
Wetland and riparian
Other organisms cope with the drying up of their aqueous habitat in other ways. Vernal pools are ephemeral ponds that form in the rainy season and dry up afterwards. They have their specially-adapted characteristic flora, mainly consisting of annuals, the seeds of which survive the drought, but also some uniquely adapted perennials. Animals adapted to these extreme habitat types also exist; fairy shrimps can lay "winter eggs" which are resistant to desiccation, sometimes being blown about with the dust, ending up in new depressions in the ground. These can survive in a dormant state for as long as fifteen years. Some killifish behave in a similar way; their eggs hatch and the juvenile fish grow with great rapidity when the conditions are right, but the whole population of fish may end up as eggs in diapause in the dried up mud that was once a pond.
Examples of wetland and riparian habitat types
Bog
Marsh
Fen
Flooded grasslands and savannas
Floodplain
Shrub swamp
Swamp
Vernal pool
Wet meadow
Forest
Examples of forest habitat types
Boreal forest
Cloud forest
Peat swamp forest
Temperate coniferous forest
Temperate deciduous forest
Temperate rain forest
Thorn forest
Tropical dry forest
Tropical moist forest
Tropical rain forest
Woodland
Freshwater
Freshwater habitat types include rivers, streams, lakes, ponds, marshes and bogs. They can be divided into running waters (rivers, streams) and standing waters (lakes, ponds, marshes, bogs). Although some organisms are found across most of these habitat types, the majority have more specific requirements. The water velocity, its temperature and oxygen saturation are important factors, but in river systems, there are fast and slow sections, pools, bayous and backwaters which provide a range of habitat types. Similarly, aquatic plants can be floating, semi-submerged, submerged or grow in permanently or temporarily saturated soils besides bodies of water. Marginal plants provide important habitat for both invertebrates and vertebrates, and submerged plants provide oxygenation of the water, absorb nutrients and play a part in the reduction of pollution.
Marine
Marine habitats include brackish water, estuaries, bays, the open sea, the intertidal zone, the sea bed, reefs and deep / shallow water zones. Further variations include rock pools, sand banks, mudflats, brackish lagoons, sandy and pebbly beaches, and seagrass beds, all supporting their own flora and fauna. The benthic zone or seabed provides a home for both static organisms, anchored to the substrate, and for a large range of organisms crawling on or burrowing into the surface. Some creatures float among the waves on the surface of the water, or raft on floating debris, others swim at a range of depths, including organisms in the demersal zone close to the seabed, and myriads of organisms drift with the currents and form the plankton.
List of marine habitat types
Abyssal plain
Aphotic zone
Benthic zone
Cold seep
Coral reef
Demersal zone
Estuary
Hydrothermal vent
Intertidal zone
Kelp forest
Littoral zone
Oceanic trench
Photic zone
Seagrass meadow
Mangrove swamp
Seamount
Tide pool
Urban
Many animals and plants have taken up residence in urban environments. They tend to be adaptable generalists and use the town's features to make their homes. Rats and mice have followed man around the globe; pigeons, peregrines, sparrows, swallows and house martins use the buildings for nesting; bats use roof space for roosting; foxes visit the garbage bins; and squirrels, coyotes, raccoons and skunks roam the streets. About 2,000 coyotes are thought to live in and around Chicago. A survey of dwelling houses in northern European cities in the twentieth century found about 175 species of invertebrate inside them, including 53 species of beetle, 21 flies, 13 butterflies and moths, 13 mites, 9 lice, 7 bees, 5 wasps, 5 cockroaches, 5 spiders, 4 ants and a number of other groups. In warmer climates, termites are serious pests in the urban habitat; 183 species are known to affect buildings and 83 species cause serious structural damage.
Microhabitat types
A microhabitat is the small-scale physical requirements of a particular organism or population. Every habitat includes large numbers of microhabitat types with subtly different exposure to light, humidity, temperature, air movement, and other factors. The lichens that grow on the north face of a boulder are different from those that grow on the south face, from those on the level top, and those that grow on the ground nearby; the lichens growing in the grooves and on the raised surfaces are different from those growing on the veins of quartz. Lurking among these miniature "forests" are the microfauna, species of invertebrate, each with its own specific habitat requirements.
There are numerous different microhabitat types in a wood; coniferous forest, broad-leafed forest, open woodland, scattered trees, woodland verges, clearings, and glades; tree trunk, branch, twig, bud, leaf, flower, and fruit; rough bark, smooth bark, damaged bark, rotten wood, hollow, groove, and hole; canopy, shrub layer, plant layer, leaf litter, and soil; buttress root, stump, fallen log, stem base, grass tussock, fungus, fern, and moss. The greater the structural diversity in the wood, the greater the number of microhabitat types that will be present. A range of tree species with individual specimens of varying sizes and ages, and a range of features such as streams, level areas, slopes, tracks, clearings, and felled areas will provide suitable conditions for an enormous number of biodiverse plants and animals. For example, in Britain it has been estimated that various types of rotting wood are home to over 1700 species of invertebrate.
For a parasitic organism, its habitat is the particular part of the outside or inside of its host on or in which it is adapted to live. The life cycle of some parasites involves several different host species, as well as free-living life stages, sometimes within vastly different microhabitat types. One such organism is the trematode (flatworm) Microphallus turgidus, present in brackish water marshes in the southeastern United States. Its first intermediate host is a snail and the second, a glass shrimp. The final host is the waterfowl or mammal that consumes the shrimp.
Extreme habitat types
Although the vast majority of life on Earth lives in mesophilic (moderate) environments, a few organisms, most of them microbes, have managed to colonise extreme environments that are unsuitable for more complex life forms. There are bacteria, for example, living in Lake Whillans, half a mile below the ice of Antarctica; in the absence of sunlight, they must rely on organic material from elsewhere, perhaps decaying matter from glacier melt water or minerals from the underlying rock. Other bacteria can be found in abundance in the Mariana Trench, the deepest place in the ocean and on Earth; marine snow drifts down from the surface layers of the sea and accumulates in this undersea valley, providing nourishment for an extensive community of bacteria.
Other microbes live in environments lacking in oxygen, and are dependent on chemical reactions other than photosynthesis. Boreholes drilled into the rocky seabed have found microbial communities apparently based on the products of reactions between water and the constituents of rocks. These communities have not been studied much, but may be an important part of the global carbon cycle. Rock in mines two miles deep also harbours microbes; these live on minute traces of hydrogen produced in slow oxidizing reactions inside the rock. These metabolic reactions allow life to exist in places with no oxygen or light, an environment that had previously been thought to be devoid of life.
The intertidal zone and the photic zone in the oceans are relatively familiar habitat types. However the vast bulk of the ocean is inhospitable to air-breathing humans, with scuba divers limited to the upper or so. The lower limit for photosynthesis is and below that depth the prevailing conditions include total darkness, high pressure, little oxygen (in some places), scarce food resources and extreme cold. This habitat is very challenging to research, and as well as being little-studied, it is vast, with 79% of the Earth's biosphere being at depths greater than . With no plant life, the animals in this zone are either detritivores, reliant on food drifting down from surface layers, or they are predators, feeding on each other. Some organisms are pelagic, swimming or drifting in mid-ocean, while others are benthic, living on or near the seabed. Their growth rates and metabolisms tend to be slow, their eyes may be very large to detect what little illumination there is, or they may be blind and rely on other sensory inputs. A number of deep sea creatures are bioluminescent; this serves a variety of functions including predation, protection and social recognition. In general, the bodies of animals living at great depths are adapted to high pressure environments by having pressure-resistant biomolecules and small organic molecules present in their cells known as piezolytes, which give the proteins the flexibility they need. There are also unsaturated fats in their membranes which prevent them from solidifying at low temperatures.
Hydrothermal vents were first discovered in the ocean depths in 1977. They result from seawater becoming heated after seeping through cracks to places where hot magma is close to the seabed. The under-water hot springs may gush forth at temperatures of over and support unique communities of organisms in their immediate vicinity. The basis for this teeming life is chemosynthesis, a process by which microbes convert such substances as hydrogen sulfide or ammonia into organic molecules. These bacteria and Archaea are the primary producers in these ecosystems and support a diverse array of life. About 350 species of organism, dominated by molluscs, polychaete worms and crustaceans, had been discovered around hydrothermal vents by the end of the twentieth century, most of them being new to science and endemic to these habitat types.
Besides providing locomotion opportunities for winged animals and a conduit for the dispersal of pollen grains, spores and seeds, the atmosphere can be considered to be a habitat-type in its own right. There are metabolically active microbes present that actively reproduce and spend their whole existence airborne, with hundreds of thousands of individual organisms estimated to be present in a cubic meter of air. The airborne microbial community may be as diverse as that found in soil or other terrestrial environments, however, these organisms are not evenly distributed, their densities varying spatially with altitude and environmental conditions. Aerobiology has not been studied much, but there is evidence of nitrogen fixation in clouds, and less clear evidence of carbon cycling, both facilitated by microbial activity.
There are other examples of extreme habitat types where specially adapted lifeforms exist; tar pits teeming with microbial life; naturally occurring crude oil pools inhabited by the larvae of the petroleum fly; hot springs where the temperature may be as high as and cyanobacteria create microbial mats; cold seeps where the methane and hydrogen sulfide issue from the ocean floor and support microbes and higher animals such as mussels which form symbiotic associations with these anaerobic organisms; salt pans that harbour salt-tolerant bacteria, archaea and also fungi such as the black yeast Hortaea werneckii and basidiomycete Wallemia ichthyophaga; ice sheets in Antarctica which support fungi Thelebolus spp., glacial ice with a variety of bacteria and fungi; and snowfields on which algae grow.
Habitat change
Whether from natural processes or the activities of man, landscapes and their associated habitat types change over time. There are the slow geomorphological changes associated with the geologic processes that cause tectonic uplift and subsidence, and the more rapid changes associated with earthquakes, landslides, storms, flooding, wildfires, coastal erosion, deforestation and changes in land use. Then there are the changes in habitat types brought on by alterations in farming practices, tourism, pollution, fragmentation and climate change.
Loss of habitat is the single greatest threat to any species. If an island on which an endemic organism lives becomes uninhabitable for some reason, the species will become extinct. Any type of habitat surrounded by a different habitat is in a similar situation to an island. If a forest is divided into parts by logging, with strips of cleared land separating woodland blocks, and the distances between the remaining fragments exceed the distance an individual animal is able to travel, that species becomes especially vulnerable. Small populations generally lack genetic diversity and may be threatened by increased predation, increased competition, disease and unexpected catastrophe. At the edge of each forest fragment, increased light encourages secondary growth of fast-growing species and old growth trees are more vulnerable to logging as access is improved. The birds that nest in their crevices, the epiphytes that hang from their branches and the invertebrates in the leaf litter are all adversely affected and biodiversity is reduced. Habitat fragmentation can be ameliorated to some extent by the provision of wildlife corridors connecting the fragments. These can be a river, ditch, strip of trees, hedgerow or even an underpass to a highway. Without the corridors, seeds cannot disperse and animals, especially small ones, cannot travel through the hostile territory, putting populations at greater risk of local extinction.
Habitat disturbance can have long-lasting effects on the environment. Bromus tectorum is a vigorous grass from Europe which has been introduced to the United States where it has become invasive. It is highly adapted to fire, producing large amounts of flammable detritus and increasing the frequency and intensity of wildfires. In areas where it has become established, it has altered the local fire regime to such an extent that native plants cannot survive the frequent fires, allowing it to become even more dominant. A marine example is when sea urchin populations "explode" in coastal waters and destroy all the macroalgae present. What was previously a kelp forest becomes an urchin barren that may last for years and this can have a profound effect on the food chain. Removal of the sea urchins, by disease for example, can result in the seaweed returning, with an over-abundance of fast-growing kelp.
Fragmentation
Destruction
Habitat protection
The protection of habitat types is a necessary step in the maintenance of biodiversity because if habitat destruction occurs, the animals and plants reliant on that habitat suffer. Many countries have enacted legislation to protect their wildlife. This may take the form of the setting up of national parks, forest reserves and wildlife reserves, or it may restrict the activities of humans with the objective of benefiting wildlife. The laws may be designed to protect a particular species or group of species, or the legislation may prohibit such activities as the collecting of bird eggs, the hunting of animals or the removal of plants. A general law on the protection of habitat types may be more difficult to implement than a site specific requirement. A concept introduced in the United States in 1973 involves protecting the critical habitat of endangered species, and a similar concept has been incorporated into some Australian legislation.
International treaties may be necessary for such objectives as the setting up of marine reserves. Another international agreement, the Convention on the Conservation of Migratory Species of Wild Animals, protects animals that migrate across the globe and need protection in more than one country. Even where legislation protects the environment, a lack of enforcement often prevents effective protection. However, the protection of habitat types needs to take into account the needs of the local residents for food, fuel and other resources. Faced with hunger and destitution, a farmer is likely to plough up a level patch of ground despite it being the last suitable habitat for an endangered species such as the San Quintin kangaroo rat, and even kill the animal as a pest. In the interests of ecotourism it is desirable that local communities are educated on the uniqueness of their flora and fauna.
Monotypic habitat
A monotypic habitat type is a concept sometimes used in conservation biology, in which a single species of animal or plant is the only species of its type to be found in a specific habitat and forms a monoculture. Even though it might seem such a habitat type is impoverished in biodiversity as compared with polytypic habitat types, this is not necessarily the case. Monocultures of the exotic plant Hydrilla support a similarly rich fauna of invertebrates as a more varied habitat. The monotypic habitat occurs in both botanical and zoological contexts. Some invasive species may create monocultural stands that prevent other species from growing there. A dominant colonization can occur from retardant chemicals exuded, nutrient monopolization, or from lack of natural controls, such as herbivores or climate, that keep them in balance with their native habitat types. The yellow starthistle, Centaurea solstitialis is a botanical monotypic habitat example of this, currently dominating over in California alone. The non-native freshwater zebra mussel, Dreissena polymorpha, that colonizes areas of the Great Lakes and the Mississippi River watershed, is a zoological monotypic habitat example; the predators or parasites that control it in its home-range in Russia are absent.
See also
Habitat destruction: the loss of habitat
Notes and references
External links
Ecology
Landscape ecology
Systems ecology | Habitat | [
"Biology",
"Environmental_science"
] | 4,797 | [
"Environmental social science",
"Ecology",
"Systems ecology"
] |
1,596,341 | https://en.wikipedia.org/wiki/Gliese%20710 | Gliese 710, or HIP 89825, is an orange star in the constellation Serpens Cauda. It is projected to pass near the Sun in about 1.29 million years, at a predicted minimum distance of 0.051 parsecs (about 1.6 trillion km), roughly 1/25th of the current distance to Proxima Centauri. Such a distance would make for a similar brightness to the brightest planets, optimally reaching an apparent visual magnitude of about −2.7. The star's proper motion will peak around one arcminute per year, a rate of apparent motion that would be noticeable over a human lifespan. This estimate, based on data from Data Release 3 from the Gaia spacecraft, lies well within the timeframe covered by current models, which extend over the next 15 million years.
Description
Gliese 710 currently is from Earth in the constellation Serpens and has a below naked-eye visual magnitude of 9.69. A stellar classification of K7 Vk means it is a small main-sequence star mostly generating energy through the thermonuclear fusion of hydrogen at its core. (The suffix 'k' indicates that the spectrum shows absorption lines from interstellar matter.) Stellar mass is about 57% of the Sun's mass with an estimated 58% of the Sun's radius. It is suspected to be a variable star that may vary in magnitude from 9.65 to 9.69. As of 2020, no planets have been detected orbiting it.
Computing and details of the closest approach
In their 2010 work, Bobylev et al. suggested that Gliese 710 has an 86% chance of passing through the Oort cloud, assuming the Oort cloud to be a spheroid around the Sun with semiminor and semimajor axes of 80,000 and 100,000 AU, respectively. The distance of closest approach of Gliese 710 is generally difficult to compute precisely as it depends sensitively on its current position and velocity; Bobylev et al. estimated that Gliese 710 would pass within () of the Sun. At the time, there was even a 1-in-10,000 chance of the star penetrating into the region (d < 1,000 AU) where the influence of the passing star on Kuiper belt objects would be significant.
Results from new calculations that include input data from Gaia DR3 indicate that the flyby of Gliese 710 to the Solar System will on average be closer at () in time, but with considerably less uncertainty. The effects of such an encounter on the orbit of the Pluto–Charon system (and therefore, on the classical trans-Neptunian belt) are negligible, but Gliese 710 will traverse the outer Oort cloud (inside 100,000 AU or 0.48 pc) and reach the outskirts of the inner Oort cloud (inward of 20,000 AU).
Gliese 710 has the potential to perturb the Oort cloud in the outer Solar System, exerting enough force to send showers of comets into the inner Solar System for millions of years, triggering visibility of about ten naked-eye comets per year, and possibly causing an impact event. According to Filip Berski and Piotr Dybczyński, this event will be "the strongest disrupting encounter in the future and history of the Solar System." Earlier dynamic models indicated that the net increase in cratering rate due to the passage of Gliese 710 would be no more than 5%. They had originally estimated that the closest approach would happen in 1.36 million years when the star will approach within () of the Sun. Gaia DR2 later found the minimum perihelion distance to be 13,900 ± 3,200 AU, about 1.281 million years from now.
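The straight-line approximation that underlies such estimates can be sketched as follows: treating the star's motion as unaccelerated, the minimum separation is the current distance scaled by the ratio of tangential to total velocity, and the time of closest approach follows from the radial velocity. The Python sketch below illustrates the geometry only; the input numbers are hypothetical placeholders, not the published Gaia astrometry for Gliese 710.

import numpy as np

def closest_approach(distance_pc, v_radial_kms, v_tangential_kms):
    """Straight-line (linear motion) flyby approximation.
    distance_pc: current distance in parsecs; v_radial_kms: radial velocity
    in km/s (negative = approaching); v_tangential_kms: transverse velocity in km/s.
    Returns (minimum distance in parsecs, time of closest approach in Myr)."""
    PC_KM = 3.0857e13        # kilometres per parsec
    SEC_PER_MYR = 3.156e13   # seconds per million years
    v_total = np.hypot(v_radial_kms, v_tangential_kms)
    d_min = distance_pc * v_tangential_kms / v_total        # impact parameter: d * sin(theta)
    t_sec = -distance_pc * PC_KM * v_radial_kms / v_total**2  # positive if the star is approaching
    return d_min, t_sec / SEC_PER_MYR

# Hypothetical illustrative values only:
print(closest_approach(distance_pc=19.0, v_radial_kms=-14.0, v_tangential_kms=0.04))

Because the predicted miss distance is the current distance multiplied by a small velocity ratio, small errors in the measured tangential velocity translate into large changes in the result, which is why successive data releases have revised the figures.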
Table of parameters of predictions of Gliese 710 encounter with Sun
In popular culture
In 2022, the final track on popular Australian psychedelic rock band King Gizzard & The Lizard Wizard's album Ice, Death, Planets, Lungs, Mushrooms and Lava was entitled Gliese 710.
See also
List of nearest stars
Notes
References
External links
SolStation.com
VizieR variable star database
Wikisky image of HD 168442 (Gliese 710)
BD-01 3474
168442
089825
0710
K-type main-sequence stars
Serpens
TIC objects | Gliese 710 | [
"Astronomy"
] | 916 | [
"Constellations",
"Serpens"
] |
1,596,497 | https://en.wikipedia.org/wiki/Battle%20of%20the%20sexes%20%28game%20theory%29 | In game theory, the battle of the sexes is a two-player coordination game that also involves elements of conflict. The game was introduced in 1957 by R. Duncan Luce and Howard Raiffa in their classic book, Games and Decisions. Some authors prefer to avoid assigning sexes to the players and instead use Players 1 and 2, and some refer to the game as Bach or Stravinsky, using two concerts as the two events. The game description here follows Luce and Raiffa's original story.
Imagine that a man and a woman hope to meet this evening, but have a choice between two events to attend: a prize fight and a ballet. The man would prefer to go to the prize fight. The woman would prefer the ballet. Both would prefer to go to the same event rather than different ones. If they cannot communicate, where should they go?
The payoff matrix labeled "Battle of the Sexes (1)" shows the payoffs when the man chooses a row and the woman chooses a column. In each cell, the first number represents the man's payoff and the second number the woman's.
This standard representation does not account for the additional harm that might come from not only going to different locations, but going to the wrong one as well (e.g. the man goes to the ballet while the woman goes to the prize fight, satisfying neither). To account for this, the game would be represented in "Battle of the Sexes (2)", where in the top right box, the players each have a payoff of 1 because they at least get to attend their favored events.
Equilibrium analysis
This game has two pure strategy Nash equilibria, one where both players go to the prize fight, and another where both go to the ballet. There is also a mixed strategy Nash equilibrium, in which the players randomize using specific probabilities. For the payoffs listed in Battle of the Sexes (1), in the mixed strategy equilibrium the man goes to the prize fight with probability 3/5 and the woman to the ballet with probability 3/5, so they end up together at the prize fight with probability 6/25 = (3/5)(2/5) and together at the ballet with probability 6/25 = (2/5)(3/5). Because a pure strategy is a degenerate case of a mixed strategy, the two pure strategy Nash equilibria are also part of the set of mixed strategy Nash equilibria. As a result, there are a total of three mixed strategy Nash equilibria in the Battle of the Sexes.
This presents an interesting case for game theory since each of the Nash equilibria is deficient in some way. The two pure strategy Nash equilibria are unfair; one player consistently does better than the other. The mixed strategy Nash equilibrium is inefficient: the players will miscoordinate with probability 13/25, leaving each player with an expected return of 6/5 (less than the payoff of 2 from each's less favored pure strategy equilibrium). It remains unclear how expectations would form that would result in a particular equilibrium being played out.
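A short numerical check of these figures, assuming the standard payoff table implied above (3 for one's preferred event together, 2 for the other event together, 0 when apart), is given in the Python sketch below.

from fractions import Fraction

# Rows are the man's choices, columns the woman's; entries are (man, woman) payoffs.
payoffs = {
    ("fight",  "fight"):  (3, 2),
    ("fight",  "ballet"): (0, 0),
    ("ballet", "fight"):  (0, 0),
    ("ballet", "ballet"): (2, 3),
}

p = Fraction(3, 5)   # man chooses the prize fight with probability 3/5
q = Fraction(3, 5)   # woman chooses the ballet with probability 3/5

prob = {
    ("fight",  "fight"):  p * (1 - q),        # 6/25: together at the fight
    ("fight",  "ballet"): p * q,
    ("ballet", "fight"):  (1 - p) * (1 - q),
    ("ballet", "ballet"): (1 - p) * q,        # 6/25: together at the ballet
}

miscoordinate = prob[("fight", "ballet")] + prob[("ballet", "fight")]
expected = [sum(prob[o] * payoffs[o][i] for o in prob) for i in (0, 1)]
print(miscoordinate, expected)   # 13/25 and [6/5, 6/5], matching the text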
One possible resolution of the difficulty involves the use of a correlated equilibrium. In its simplest form, if the players of the game have access to a commonly observed randomizing device, then they might decide to correlate their strategies in the game based on the outcome of the device. For example, if the players could flip a coin before choosing their strategies, they might agree to correlate their strategies based on the coin flip by, say, choosing ballet in the event of heads and prize fight in the event of tails. Notice that once the results of the coin flip are revealed neither player has any incentives to alter their proposed actions if they believe the other will not. The result is that perfect coordination is always achieved and, prior to the coin flip, the expected payoffs for the players are exactly equal. It remains true, however, that even if there is a correlating device, the Nash equilibria in which the players ignore it will remain; correlated equilibria require both the existence of a correlating device and the expectation that both players will use it to make their decision.
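A minimal simulation of that coin-flip device, under the same assumed 3/2/0 payoffs, shows the equalised expected payoffs (Python).

import random

def correlated_round():
    """Both players follow a commonly observed fair coin: heads -> ballet, tails -> fight."""
    if random.random() < 0.5:
        return (2, 3)    # both at the ballet: (man, woman) payoffs
    return (3, 2)        # both at the prize fight

rounds = [correlated_round() for _ in range(100_000)]
print(sum(m for m, _ in rounds) / len(rounds),   # ~2.5 expected payoff for the man
      sum(w for _, w in rounds) / len(rounds))   # ~2.5 for the woman; coordination is certain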
Notes
References
Fudenberg, D. and Tirole, J. (1991) Game theory, MIT Press. (see Chapter 1, section 2.4)
External links
GameTheory.net
Cooperative Solution with Nash Function by Elmer G. Wiens
Non-cooperative games | Battle of the sexes (game theory) | [
"Mathematics"
] | 933 | [
"Game theory",
"Non-cooperative games"
] |
1,596,638 | https://en.wikipedia.org/wiki/Proof%20procedure | In logic, and in particular proof theory, a proof procedure for a given logic is a systematic method for producing proofs in some proof calculus of (provable) statements.
Types of proof calculi used
There are several types of proof calculi. The most popular are natural deduction, sequent calculi (i.e., Gentzen-type systems), Hilbert systems, and semantic tableaux or trees. A given proof procedure will target a specific proof calculus, but can often be reformulated so as to produce proofs in other proof styles.
Completeness
A proof procedure for a logic is complete if it produces a proof for each provable statement. The theorems of logical systems are typically recursively enumerable, which implies the existence of a complete but usually extremely inefficient proof procedure; however, a proof procedure is only of interest if it is reasonably efficient.
Faced with an unprovable statement, a complete proof procedure may sometimes succeed in detecting and signalling its unprovability. In the general case, where provability is only a semidecidable property, this is not possible, and instead the procedure will diverge (not terminate).
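The enumeration argument can be made concrete with a sketch like the following (Python; check_proof is a hypothetical placeholder for a decidable proof-checking routine of some fixed calculus, and the symbol set is invented for illustration). The search is complete but hopeless in practice, and it never terminates on an unprovable statement.

from itertools import count, product

ALPHABET = "pq()~&|>,"   # hypothetical symbol set of some proof calculus

def check_proof(candidate: str, statement: str) -> bool:
    """Placeholder for a decidable check that `candidate` encodes a valid
    derivation of `statement` in the chosen proof calculus."""
    raise NotImplementedError

def enumerate_and_check(statement: str) -> str:
    # Try every finite string in order of increasing length; any proof that
    # exists is eventually reached (completeness), but the procedure diverges
    # when `statement` has no proof (semidecidability).
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            candidate = "".join(symbols)
            if check_proof(candidate, statement):
                return candidate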
See also
Automated theorem proving
Proof complexity
Deductive system
References
Willard Quine 1982 (1950). Methods of Logic. Harvard Univ. Press.
Proof theory | Proof procedure | [
"Mathematics"
] | 279 | [
"Mathematical logic",
"Proof theory"
] |
1,596,651 | https://en.wikipedia.org/wiki/Isobutylene | Isobutylene (or 2-methylpropene) is a hydrocarbon with the chemical formula . It is a four-carbon branched alkene (olefin), one of the four isomers of butylene. It is a colorless flammable gas, and is of considerable industrial value.
Production
Polymer and chemical grade isobutylene is typically obtained by dehydrating tertiary butyl alcohol (TBA) or catalytic dehydrogenation of isobutane (Catofin or similar processes). Gasoline additives methyl tert-butyl ether (MTBE) and ethyl tert-butyl ether (ETBE), respectively, are produced by reacting methanol or ethanol with isobutylene contained in butene streams from olefin steam crackers or refineries, or with isobutylene from dehydrated TBA. Isobutylene is not isolated from the olefin or refinery butene stream before the reaction, as separating the ethers from the remaining butenes is simpler. Isobutylene can also be produced in high purities by "back-cracking" MTBE or ETBE at high temperatures and then separating the isobutylene by distillation from methanol.
Isobutylene is a byproduct in the ethenolysis of diisobutene to prepare neohexene:
(CH3)3C-CH=C(CH3)2 + CH2=CH2 → (CH3)3C-CH=CH2 + (CH3)2C=CH2
Uses
Isobutylene is used in the production of a variety of products. It is alkylated with butane to produce isooctane or dimerized to diisobutylene (DIB) and then hydrogenated to make isooctane, a fuel additive. Isobutylene is also used in the production of methacrolein. Polymerization of isobutylene produces butyl rubber (polyisobutylene or PIB). Antioxidants such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA) are produced by Friedel-Crafts alkylation of phenols with isobutylene.
tert-Butylamine is produced commercially by the amination of isobutylene with ammonia over zeolite catalysts: (CH3)2C=CH2 + NH3 → (CH3)3C-NH2.
Isobutylene also finds application as a calibration gas for photoionization detectors.
Safety
Isobutylene is a highly flammable gas.
See also
Butyl rubber
Polythene
Polybutene
Perfluoroisobutene
References
External links
Alkenes | Isobutylene | [
"Chemistry"
] | 552 | [
"Organic compounds",
"Alkenes"
] |
1,596,746 | https://en.wikipedia.org/wiki/Kleptoplasty | Kleptoplasty or kleptoplastidy is a process in symbiotic relationships whereby plastids, notably chloroplasts from algae, are sequestered by the host. The word is derived from Kleptes (κλέπτης) which is Greek for thief. The alga is eaten normally and partially digested, leaving the plastid intact. The plastids are maintained within the host, temporarily continuing photosynthesis and benefiting the host.
Etymology
The word kleptoplasty is derived from Ancient Greek κλέπτης (kléptēs), meaning "thief", and πλαστός (plastós), originally meaning formed or moulded, and used in biology to mean a plastid.
Process
Kleptoplasty is a process in symbiotic relationships whereby plastids, notably chloroplasts from algae, are sequestered by the host. The alga is eaten normally and partially digested, leaving the plastid intact. The plastids are maintained within the host, temporarily continuing photosynthesis and benefiting the host. The term was coined in 1990 to describe chloroplast symbiosis.
Occurrence
Kleptoplasty has been acquired in various independent clades of eukaryotes, namely single-celled protists of the SAR supergroup and the Euglenozoa phylum, and some marine invertebrate animals.
In protists
Foraminifera
Some species of the foraminiferan genera Bulimina, Elphidium, Haynesina, Nonion, Nonionella, Nonionellina, Reophax, and Stainforthia sequester diatom chloroplasts.
Dinoflagellates
The stability of transient plastids varies considerably across plastid-retaining species. In the dinoflagellates Gymnodinium spp. and Pfisteria piscicida, kleptoplastids are photosynthetically active for only a few days, while kleptoplastids in Dinophysis spp. can be stable for 2 months. In other dinoflagellates, kleptoplasty has been hypothesized to represent either a mechanism permitting functional flexibility, or perhaps an early evolutionary stage in the permanent acquisition of chloroplasts.
Ciliates
Mesodinium rubrum is a ciliate that steals chloroplasts from the cryptomonad Geminigera cryophila. M. rubrum participates in additional endosymbiosis by transferring its plastids to its predators, the dinoflagellate planktons belonging to the genus Dinophysis.
Karyoklepty is a related process in which the nucleus of the prey cell is kept by the host as well. This was first described in 2007 in M. rubrum.
Euglenozoa
The first and only case of kleptoplasty within Euglenozoa belongs to the species Rapaza viridis, the earliest diverging lineage of Euglenophyceae. This microorganism requires a constant supply of a strain of Tetraselmis microalgae, which it ingests to extract chloroplasts. The kleptoplasts are then progressively transformed into ones that resemble the permanent chloroplasts of the remaining Euglenophyceae. Cells of Rapaza viridis can survive for up to 35 days with these kleptoplasts.
Kleptoplasty is considered the mode of nutrition of the euglenophycean common ancestor. It is hypothesized that kleptoplasty allowed for various events of horizontal gene transfer that eventually allowed the establishment of permanent chloroplasts in the remaining Euglenophyceae.
Animals
Rhabdocoel flatworms
Two species of rhabdocoel marine flatworms, Baicalellia solaris and Pogaina paranygulgus, make use of kleptoplasty. The group was previously classified as having algal endosymbionts, though it was already discovered that the endosymbionts did not contain nuclei.
While consuming diatoms, B. solaris and P. paranygulgus, in a process not yet discovered, extract plastids from their prey, incorporating them subepidermally, while separating and digesting the frustule and remainder of the diatom. In B. solaris the extracted plastids, or kleptoplasts, continue to exhibit functional photosynthesis for a short period of roughly 7 days. As the two groups are not sister taxa, and the trait is not shared among groups more closely related, there is evidence that kleptoplasty evolved independently within the two taxa.
Sea slugs (gastropods)
Sacoglossa
Sea slugs in the clade Sacoglossa practise kleptoplasty. Several species of Sacoglossan sea slugs capture intact, functional chloroplasts from algal food sources, retaining them within specialized cells lining the mollusc's digestive diverticula. The longest known kleptoplastic association, which can last up to ten months, is found in Elysia chlorotica, which acquires chloroplasts by eating the alga Vaucheria litorea, storing the chloroplasts in the cells that line its gut. Juvenile sea slugs establish the kleptoplastic endosymbiosis when feeding on algal cells, sucking out the cell contents, and discarding everything except the chloroplasts. The chloroplasts are phagocytosed by digestive cells, filling extensively branched digestive tubules, providing their host with the products of photosynthesis. It is not resolved, however, whether the stolen plastids actively secrete photosynthate or whether the slugs profit indirectly from slowly degrading kleptoplasts.
Due to this unusual ability, the sacoglossans are sometimes referred to as "solar-powered sea slugs," though the actual benefit from photosynthesis on the survival of some of the species that have been analyzed seems to be marginal at best. In fact, some species may even die in the presence of the carbon dioxide-fixing kleptoplasts as a result of elevated levels of reactive oxygen species.
Changes in temperature have been shown to negatively affect kleptoplastic abilities in sacoglossans. Rates of photosynthetic efficiency as well as kleptoplast abundance have been shown to decrease in correlation to a decrease in temperature. The patterns and rate of these changes, however, varies between different species of sea slug.
Nudibranchia
Some species of another group of sea slugs, nudibranchs such as Pteraeolidia ianthina, sequester whole living symbiotic zooxanthellae within their digestive diverticula, and thus are similarly "solar-powered".
See also
Horizontal gene transfer
Kleptoprotein
References
External links
Algae
Ecology terminology
Endosymbiotic events | Kleptoplasty | [
"Biology"
] | 1,509 | [
"Ecology terminology",
"Endosymbiotic events",
"Symbiosis",
"Algae"
] |
1,596,969 | https://en.wikipedia.org/wiki/Categorification | In mathematics, categorification is the process of replacing set-theoretic theorems with category-theoretic analogues. Categorification, when done successfully, replaces sets with categories, functions with functors, and equations with natural isomorphisms of functors satisfying additional properties. The term was coined by Louis Crane.
The reverse of categorification is the process of decategorification. Decategorification is a systematic process by which isomorphic objects in a category are identified as equal. Whereas decategorification is a straightforward process, categorification is usually much less straightforward. In the representation theory of Lie algebras, modules over specific algebras are the principal objects of study, and there are several frameworks for what a categorification of such a module should be, e.g., so called (weak) abelian categorifications.
Categorification and decategorification are not precise mathematical procedures, but rather a class of possible analogues. They are used in a similar way to words like 'generalization', and not like 'sheafification'.
Examples
One form of categorification takes a structure described in terms of sets, and interprets the sets as isomorphism classes of objects in a category. For example, the set of natural numbers can be seen as the set of cardinalities of finite sets (and any two sets with the same cardinality are isomorphic). In this case, operations on the set of natural numbers, such as addition and multiplication, can be seen as carrying information about coproducts and products of the category of finite sets. Less abstractly, the idea here is that manipulating sets of actual objects, and taking coproducts (combining two sets in a union) or products (building arrays of things to keep track of large numbers of them) came first. Later, the concrete structure of sets was abstracted away – taken "only up to isomorphism", to produce the abstract theory of arithmetic. This is a "decategorification" – categorification reverses this step.
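Written out, the correspondence in this example sends the categorical operations to arithmetic ones; for finite sets X and Y, with the usual cardinality bars,

$$|X \sqcup Y| = |X| + |Y|, \qquad |X \times Y| = |X| \cdot |Y|,$$

and an isomorphism of finite sets X ≅ Y decategorifies to the equation |X| = |Y|.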
Other examples include homology theories in topology. Emmy Noether gave the modern formulation of homology as the rank of certain free abelian groups by categorifying the notion of a Betti number. See also Khovanov homology as a knot invariant in knot theory.
An example in finite group theory is that the ring of symmetric functions is categorified by the category of representations of the symmetric group. The decategorification map sends the Specht module indexed by a given partition to the Schur function indexed by the same partition,
essentially following the character map from a favorite basis of the associated Grothendieck group to a representation-theoretic favorite basis of the ring of symmetric functions. This map reflects how the structures are similar; for example, the two structures
have the same decomposition numbers over their respective bases, both given by Littlewood–Richardson coefficients.
Abelian categorifications
For a category C, let K(C) be the Grothendieck group of C.
Let A be a ring which is free as an abelian group, and let B = {b_i} be a basis of A such that the multiplication is positive in B, i.e.
b_i b_j = Σ_k c_{ij}^k b_k with c_{ij}^k ≥ 0.
Let M be an A-module. Then a (weak) abelian categorification of M consists of an abelian category C, an isomorphism φ: K(C) → M, and exact endofunctors F_i: C → C such that
the functor F_i lifts the action of b_i on the module M, i.e. φ ∘ [F_i] = b_i ∘ φ, and
there are isomorphisms F_i ∘ F_j ≅ ⊕_k F_k^{⊕ c_{ij}^k}, i.e. the composition F_i ∘ F_j decomposes as the direct sum of functors F_k in the same way that the product b_i b_j decomposes as the linear combination of basis elements b_k.
See also
Combinatorial proof, the process of replacing number theoretic theorems by set-theoretic analogues.
Higher category theory
Higher-dimensional algebra
Categorical ring
References
Further reading
A blog post by John Baez: https://golem.ph.utexas.edu/category/2008/10/what_is_categorification.html.
Category theory
Algebraic topology | Categorification | [
"Mathematics"
] | 849 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Mathematical relations"
] |
1,596,974 | https://en.wikipedia.org/wiki/Directive%20on%20the%20legal%20protection%20of%20biotechnological%20inventions | Directive 98/44/EC of the European Parliament and of the Council of 6 July 1998 on the legal protection of biotechnological inventions
is a European Union directive in the field of patent law, made under the internal market
provisions of the Treaty of Rome. It was intended to harmonise the laws of Member States regarding the patentability
of biotechnological inventions, including plant varieties (as legally defined) and human genes.
Content
The Directive is divided into the following five chapters:
Patentability (Chapter I)
Scope of Protection (Chapter II)
Compulsory cross-licensing (Chapter III)
Deposit, access and re-deposit of biological material (Chapter IV)
Final Provisions (entering into force) (Chapter V)
Timeline
The original proposal was adopted by the European Commission in 1988. The procedure for its adoption was slowed down primarily by ethical issues regarding the patentability of living matter. The European Parliament eventually rejected the joint text from the final Conciliation meeting at third reading on 1 March 1995, so the first directive process did not yield a directive.
On 13 December 1995, the Commission adopted a new proposal that was nearly identical to the rejected version; it was amended again, but the Parliament put aside its ethical concerns about the patenting of human genes and, in its second reading on 12 July 1998, adopted the Common Position of the Council, so the second legislative process produced the directive. The Parliament's draftsperson (rapporteur) for this second procedure was Willi Rothley, and the amendment receiving the most yes votes was Amendment 9 from the Greens, which got 221 votes in favour and 294 against out of 532 members voting,
with 17 abstentions; 314 yes votes would have been required to reach the absolute majority needed to adopt it.
On 6 July 1998, a final version was adopted. Its code is 98/44/EC.
The Kingdom of the Netherlands brought Case C-377/98 before the European Court of Justice against the adoption of the directive with six different pleas but the Court granted none of them.
Nevertheless, the ECJ decision does not preclude a further test of the validity of the directive on the ground that it is inconsistent with the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). Art. 27.1 TRIPS provides that patents are only to be granted with respect to 'inventions'. The directive, however, provides that "biological material which is isolated from its natural environment ... may be the subject of an invention even if it previously occurred in nature." It is clearly arguable that merely isolating a human gene or protein from its natural environment is not an activity that can come within the meaning of the word 'invention'. The Danish Council of Bioethics in its Patenting Human Genes and Stem Cells Report noted that "In the members' view, it cannot be said with any reasonableness that a sequence or partial sequence of a gene ceases to be part of the human body merely because an identical copy of the sequence is isolated from or produced outside of the human body." TRIPS applies to the European Community as it is a member of the World Trade Organization (WTO) in its own right and accordingly must ensure "the conformity of its laws, regulations and administrative procedures with obligations as provided" by the WTO.
On 14 January 2002, the Commission submitted an assessment of the implications for basic genetic engineering research of failure to publish, or late publication of, papers on subjects which could be patentable as required under Article 16(b) of this directive.
Campaigning and lobbying
According to SmithKline Beecham lobbyist Simon Gentry, the company allocated 30 million ECU for a pro-Directive campaign. Part of this campaign was direct support of patient charities and organisations. On the day of the July 1997 vote, a number of people in wheelchairs from these groups demonstrated outside the main hall in Strasbourg, chanting the pharmaceutical industry's slogan, "No Patents, No Cure" in an emotional appeal to Parliamentarians to vote for the Directive.
Implementation
As of 15 January 2007, all of the 27 EU member states had implemented the Directive.
See also
Biological patent
Patent law of the European Union
G 2/06, decision of the Enlarged Board of Appeal of the European Patent Office (EPO) of 25 November 2008, relating to (non-patentability of) inventions involving the use and destruction of human embryos.
References
External links
EU Legislation summary
Text of the directive (pdf)
Text of directive with headers (html)
Report from the Commission to the Council and the European Parliament, Development and implications of patent law in the field of biotechnology and genetic engineering, 2005-07-14
FAQ from the Commission Directorate-General for the Internal Market
Reports and other documents concerning European Commission activities in the area of biotechnological inventions from the Commission Directorate-General for the Internal Market
Issues from the UK Intellectual Property Office
Some biotechnological inventions registered at the European Patent Office (unofficial site)
Paper
Baruch Brody, 2007. "Intellectual Property and Biotechnology: The European Debate,” Kennedy Institute of Ethics Journal 17(2): 69–110.
Biological patent law
Patent law of the European Union
1998 in law
1998 in the European Union
Life sciences industry | Directive on the legal protection of biotechnological inventions | [
"Biology"
] | 1,047 | [
"Biotechnology law",
"Biological patent law",
"Life sciences industry"
] |
1,597,062 | https://en.wikipedia.org/wiki/Cyclorama%20%28theater%29 | In theater and film, a cyclorama (abbreviated cyc in the U.S., Canada, and the UK) is a large curtain or wall, often concave, positioned at the back of the apse. It often encircles or partially encloses the stage to form a background. The word "cyclorama" stems from the Greek words "kyklos", meaning circle, and "orama", meaning view. It was popularized in the German theater of the 19th century and continues in common usage today in theaters throughout the world. It can be made of unbleached canvas (larger versions) or muslin (smaller versions), filled scrim (popularized on Broadway in the 20th century), or seamless translucent plastic (often referred to as "Opera Plastic"). Traditionally it is hung at 0% fullness (flat). When possible, it is stretched on the sides and weighted on the bottom to create a flat and even surface. As seams tend to interrupt the smooth surface of the cyclorama, it is usually constructed from extra-wide material. Cycloramas are also used in photography, architecture, and are useful to artists if referring to painted backdrops or walls.
In photography, cycloramas or cycs also refer to curving backdrops which are white to create the illusion of no background, or green for chroma keying.
An infinity cyclorama (found particularly in television and in film stills studios) is a cyc which curves smoothly at the bottom to meet the studio floor, so that with careful lighting and the corner-less joint, the illusion that the studio floor continues to infinity can be achieved. An example of this would be in Apple's advertisement series Get a Mac, in which two actors stand in front of a cyclorama representing a Mac and a PC. The popular TV show The Mandalorian uses a cutting-edge cyclorama to create a 3D render of the fictional planet the scene is set in, allowing actors to feel immersed while filming.
Cycloramas are often used to create the illusion of a sky onstage. By varying the equipment, intensity, color and patterns used, a lighting designer can achieve many varied looks. A cyclorama can be front lit or, if it is constructed of translucent and seamless material, backlit directly or indirectly with the addition of a white "bounce" drop. To achieve the illusion of extra depth, often desirable if one is re-creating a sky, the cyclorama can be paired with a "sharkstooth scrim" backdrop. A dark or black scrim, by absorbing the extraneous light which is commonly reflected off the floor of the stage can further achieve deeper colors on the cyclorama. Cycloramas are also often illuminated during dance concerts to match the mood of a song.
Occasionally, the cyc may be painted with a decorative or pictorial scene to fit a specific show; these are generally referred to as backdrops.
Examples of Use in Stage Production
One example of the documented use of the cyclorama was made by Irene Sharaff for a Broadway production of Alice in Wonderland in the year 1932. In this production, Sharaff painted a cyc backdrop.
The 2022 Broadway production of The Lion King features the use of a cyc as a colorful background to highlight the actors on stage.
Lighting designer Donald Holder uses the cyc in his work for productions of The Lion King and South Pacific, for both of which he has received Tony Awards.
See also
Cyclorama Building, Boston
Stage lighting
Striplight
Theater drapes and stage curtains
References
Sources
Scenic design
Stage lighting | Cyclorama (theater) | [
"Engineering"
] | 759 | [
"Scenic design",
"Design"
] |
1,597,180 | https://en.wikipedia.org/wiki/Rock%20mass%20classification | Rock mass classification systems are used for various engineering design and stability analysis. These are based on empirical relations between rock mass parameters and engineering applications, such as tunnels, slopes, foundations, and excavatability. The first rock mass classification system in geotechnical engineering was proposed in 1946 for tunnels with steel set support.
Design methods
In rock engineering, three design strategies can be distinguished: analytical, empirical, and numerical. Empirical methods, i.e. rock mass classification, are extensively used for feasibility and pre-design studies, and often also for the final design.
Objectives
The objectives of rock mass classifications are (after Bieniawski 1989):
Identify the most significant parameters influencing the behaviour of a rock mass.
Divide a particular rock mass formulation into groups of similar behaviour – rock mass classes of varying quality.
Provide a basis of understanding the characteristics of each rock mass class
Relate the experience of rock conditions at one site to the conditions and experience encountered at others
Derive quantitative data and guidelines for engineering design
Provide common basis for communication between engineers and geologists
Benefits
The main benefits of rock mass classifications:
Improve the quality of site investigations by calling for the minimum input data as classification parameters.
Provide quantitative information for design purposes.
Enable better engineering judgement and more effective communication on a project.
Provide a basis for understanding the characteristics of each rock mass
Rock mass classification systems
Systems for tunneling: Quantitative
Rock Mass Rating (RMR)
Q-system
Geological Strength Index
Mining rock mass rating (MRMR)
Other systems: Qualitative
Size Strength classification
Systems for slope engineering
Slope Mass Rating (SMR), Continuous Slope Mass Rating and Graphical Slope Mass Rating
Rock mass classification system for rock slopes
Slope Stability Probability Classification (SSPC)
Q-slope
Earlier systems
Rock load classification method
The Rock load classification method is one of the first methodologies for rock mass classification for engineering. Karl von Terzaghi developed the methodology for tunnels supported by steel sets in the 1940s. It is regarded by many as obsolete, as ideas about the mechanical behavior of rock and rock masses have since developed further, and the methodology is not suitable for modern tunneling methods using shotcrete and rock bolts.
Reference: also in Soil Mechanics Series 25, publication 418. Harvard University, Graduate School of Engineering.
Stand-up time classification
The Stand-up time classification by Lauffer is often regarded as the origin of the New Austrian Tunnelling Method (NATM). The original system as developed by Lauffer is nowadays by many regarded as obsolete but his ideas are incorporated in modern rock mechanics science, such as the relation between the span of a tunnel and the stand-up time, and notably in the New Austrian Tunnelling Method.
Rock Quality Designation
The Rock Quality Designation index was developed by Deere in the 1960s to classify the quality of a rock core based on the integrity of borehole cores. Nowadays the classification system itself is not very often used, but the determination of the RQD as an index for rock core quality is standard practice in any geotechnical rock drilling, and is used in many, more recent, rock mass classification systems, such as RMR and Q-system (see above).
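For reference, Deere's index is the percentage of a core run made up of intact pieces at least 10 cm long. A minimal sketch of the calculation (Python; the example lengths are invented):

def rock_quality_designation(piece_lengths_cm, core_run_length_cm, threshold_cm=10.0):
    """RQD (%) = 100 * (total length of intact pieces >= 10 cm) / (length of core run)."""
    sound = sum(length for length in piece_lengths_cm if length >= threshold_cm)
    return 100.0 * sound / core_run_length_cm

# A 150 cm run recovered as pieces of 25, 18, 9, 30, 6, 22 and 14 cm:
print(rock_quality_designation([25, 18, 9, 30, 6, 22, 14], 150))   # ~72.7 %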
Rock Structure Rating (RSR)
The Rock Structure Rating system is a quantitative method for describing quality of a rock mass and appropriate ground support, in particular, for steel-rib support, developed by Wickham, Tiedemann and Skinner in the 1970s.
See also
Slope Mass Rating
Rock mechanics
Geotechnical investigation
Geotechnical engineering
ISRM classification
Slope stability, Slope stability analysis
Classification of rocks
References
Further reading
Rocks | Rock mass classification | [
"Physics"
] | 716 | [
"Rocks",
"Physical objects",
"Matter"
] |
1,597,220 | https://en.wikipedia.org/wiki/International%20Society%20for%20Rock%20Mechanics | The International Society for Rock Mechanics - ISRM was founded in Salzburg in 1962 as a result of the enlargement of the "Salzburger Kreis". Its foundation is mainly owed to Prof. Leopold Müller who acted as President of the Society until September 1966. The ISRM is a non-profit scientific association supported by the fees of the members and grants that do not impair its free action. In 2021 the Society had 6,800 members and 49 National Groups.
The field of Rock Mechanics is taken to include all studies relative to the physical and mechanical behaviour of rocks and rock masses and the applications of this knowledge for the better understanding of geological processes and in the fields of Engineering.
The main objectives and purposes of the Society are:
to encourage international collaboration and exchange of ideas and information between Rock Mechanics practitioners;
to encourage teaching, research, and advancement of knowledge in Rock Mechanics;
to promote high standards of professional practice among rock engineers so that civil, mining and petroleum engineering works might be safer, more economic and less disruptive to the environment.
The main activities carried out by the Society in order to achieve its objectives are:
to hold International Congresses at intervals of four years;
to sponsor International and Regional Symposia, organised by the National Groups of the Society;
to publish a News Journal to provide information about technology related to Rock Mechanics and up-to-date news on activities being carried out in the Rock Mechanics community;
to operate Commissions for studying scientific and technical matters of concern to the Society;
to award the Rocha Medal for an outstanding doctoral thesis, every year, and the Müller Award in recognition of distinguished contributions to the profession of Rock Mechanics and Rock Engineering, once every four years;
to cooperate with other international scientific associations.
The Society is ruled by a Council, consisting of representatives of the National Groups, the Board and the Past Presidents. The current President is Prof. Resat Ulusay, from Turkey.
The ISRM Secretariat has been headquartered in Lisbon, Portugal, at the Laboratório Nacional de Engenharia Civil - LNEC since 1966, the date of the first ISRM Congress, when Prof. Manuel Rocha was elected as President of the Society.
ISRM is a member of the Federation of International Geo-Engineering Societies.
References
External links
ISRM website
Federation of International Geo-Engineering Societies
Geotechnical organizations
Rock mechanics
International organisations based in Portugal
International learned societies based in Europe | International Society for Rock Mechanics | [
"Engineering"
] | 487 | [
"Geotechnical organizations",
"Civil engineering organizations"
] |
1,597,299 | https://en.wikipedia.org/wiki/Comparison%20of%20instant%20messaging%20protocols | The following is a comparison of instant messaging protocols. It contains basic general information about the protocols.
Table of instant messaging protocols
See also
Comparison of cross-platform instant messaging clients
Comparison of Internet Relay Chat clients
Comparison of LAN messengers
Comparison of software and protocols for distributed social networking
LAN messenger
Secure instant messaging
Comparison of user features of messaging platforms
References
Instant messaging protocols
Instant messaging
Instant messaging protocols | Comparison of instant messaging protocols | [
"Technology"
] | 79 | [
"Computing-related lists",
"Lists of network protocols",
"Online services comparisons",
"Computing comparisons",
"Instant messaging",
"Instant messaging protocols"
] |
1,597,312 | https://en.wikipedia.org/wiki/Tau%20Bo%C3%B6tis%20b | Tau Boötis b, or more precisely Tau Boötis Ab, is an extrasolar planet approximately 51 light-years away. The planet and its host star is one of the planetary systems selected by the International Astronomical Union as part of NameExoWorlds, their public process for giving proper names to exoplanets and their host star (where no proper name already exists). The process involved public nomination and voting for the new names, and the IAU planned to announce the new names in mid-December 2015. However, the IAU annulled the vote as the winning name was judged not to conform with the IAU rules for naming exoplanets.
Discovery
Discovered in 1996, the planet is one of the first extrasolar planets found. It was discovered orbiting the star Tau Boo (HR 5185) by Paul Butler and his team (San Francisco Planet Search Project) using the highly successful radial velocity method. Since the star is visually bright and the planet is massive, it produces a very strong velocity signal of 469 ± 5 metres per second, which was quickly confirmed by Michel Mayor and Didier Queloz from data collected over 15 years. It was later confirmed also by the AFOE Planet Search Team.
Orbit and mass
Tau Boötis b is rather massive, with a minimum mass over four times that of Jupiter. It orbits the star in a so-called "torch orbit", at a distance from the star less than one seventh that of Mercury's from the Sun. One orbital revolution takes only 3 days 7.5 hours to complete. Because τ Boo is hotter and larger than the Sun and the planet's orbit is so small, it is assumed to be hot. Assuming the planet is perfectly grey with no greenhouse effect or tidal effects, and a Bond albedo of 0.1, the temperature would be close to 1600 K. Although it has not been detected directly, it is certain that the planet is a gas giant.
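The quoted temperature follows from the standard grey-body equilibrium estimate; as a sketch (with T_eff, R_* and a the stellar effective temperature, stellar radius and orbital distance, none of which are restated here, and A_B the Bond albedo of 0.1):

T_\mathrm{eq} \approx T_\mathrm{eff}\,\sqrt{\frac{R_\star}{2a}}\,\left(1 - A_B\right)^{1/4}

With A_B = 0.1 the albedo factor (0.9)^{1/4} ≈ 0.97, so the roughly 1600 K figure is set almost entirely by the star's temperature and the very small orbital distance.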
As Tau Boötis b is more massive than most known "hot Jupiters", it was speculated that it was originally a brown dwarf, a failed star, which could have lost most of its atmosphere from the heat of its larger companion star. However, this seems very unlikely. Still, such a process has actually been detected on the famous transiting planet HD 209458 b.
In December 1999, a group led by A. C. Cameron had announced that they had detected reflected light from the planet. They calculated that the orbit of the planet has an inclination of 29° and thus the absolute mass of the planet would be about 8.5 times that of Jupiter. They also suggested that the planet is blue in color. Unfortunately, their observations could not be confirmed and were later proved to be spurious.
A better estimate came from the assumption of tidal lock with the star, which rotates at 40 degrees, fixing the planet's mass between 6 and 7 Jupiter masses. In 2007, magnetic field detection confirmed this estimate.
In 2012 two teams independently distinguished the radial velocity of the planet from the radial velocity of the star by observing the shifting of the spectral lines of carbon monoxide. This enabled calculation of the inclination of the planet's orbit and hence the planet's mass. One team found an inclination of 44.5±1.5 degrees and a mass of . The other team found an inclination of 47 (+7/−6) degrees and a mass of .
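Radial velocities alone constrain only the minimum mass M_p sin i, so the measured inclination converts directly into a true mass; as a sketch (using the first team's 44.5° value quoted above):

M_p = \frac{M_p \sin i}{\sin i}, \qquad \sin 44.5^\circ \approx 0.70 \;\Rightarrow\; M_p \approx 1.4 \times \left(M_p \sin i\right)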
Characteristics
The temperature of Tau Boötis b probably inflates its radius to about 1.2 times that of Jupiter. Since no reflected light has been detected, the planet's albedo must be less than 0.37; this constraint was tightened to less than 0.12 by 2021. At 1600 K it is, like Mastika, expected to be hotter than HD 209458 b (formerly predicted at 1392 K) and possibly even Smertrios (predicted at 1540 K from a higher albedo of 0.3, then actually measured at 2300 K). Tau Boötis b's predicted Sudarsky class is V, which is supposed to yield a highly reflective albedo of 0.55.
It has been a candidate for "near-infrared characterization ... with the VLTI Spectro-Imager". When its atmosphere was measured in 2011, "the new observations indicated an atmosphere with a temperature that falls higher up. This result is the exact opposite of the temperature inversion – an increase in temperature with height – found for other hot Jupiter exoplanets". In 2014, direct detection of water vapor in the atmosphere of the planet was announced. Later atmospheric characterization in 2021 yielded a measured carbon abundance similar to that of Jupiter, in the form of a 0.35% carbon monoxide volume admixture to the hydrogen–helium atmosphere. Only an upper limit on water, below 2 parts per million (0.72% of that expected for solar composition), was estimated.
In 2020, a radio emission in the 14-30 MHz band was detected from the Tau Boötis system, likely associated with cyclotron radiation from the poles of Tau Boötis b. This could be a signature of the planet's magnetic field.
See also
51 Pegasi b
70 Virginis b
Galileo
Taphao Thong
Saffar
YZ Ceti, another extrasolar planet with evidence of magnetic fields
HAT-P-11b, another extrasolar planet with evidence of magnetic fields
HD 209458 b, another extrasolar planet with evidence of magnetic fields
List of exoplanets discovered before 2000
References
External links
The Planet Around Tau Bootes
AFOE observations of tau Boötis
Boötes
Hot Jupiters
Giant planets
Exoplanets discovered in 1996
Exoplanets detected by radial velocity
Articles containing video clips | Tau Boötis b | [
"Astronomy"
] | 1,167 | [
"Boötes",
"Constellations"
] |
1,597,686 | https://en.wikipedia.org/wiki/Media%20Dispatch%20Protocol | The Media Dispatch Protocol (MDP) was developed by the Pro-MPEG Media Dispatch Group to provide an open standard for secure, automated, and tapeless delivery of audio, video and associated data files. Such files typically range from low-resolution content for the web to HDTV and high-resolution digital intermediate files for cinema production.
MDP is essentially a middleware protocol that decouples the technical details of how delivery occurs from the business logic that requires delivery. For example, a TV post-production company might have a contract to deliver a programme to a broadcaster. An MDP agent allows users to deal with company and programme names, rather than with filenames and network endpoints. It can also provide a delivery service as part of a service oriented architecture.
MDP acts as a communication layer between business logic and low-level file transfer mechanisms, providing a way to securely communicate and negotiate transfer-specific metadata about file packages, delivery routing, deadlines, and security information, and to manage and coordinate file transfers in progress, whilst hooking all this information to project, company and job identifiers.
MDP works by implementing a 'dispatch transaction' layer by which means agents negotiate and agree the details of the individual file transfers required for the delivery, and control, monitor and report on the progress of the transfers. At the heart of the protocol is the 'Manifest' - an XML document that encapsulates the information about the transaction.
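Purely as an illustration of the idea, and not of the actual SMPTE 2032 schema (all element names and values below are hypothetical), such a manifest-style XML document could be assembled with Python's standard library:

import xml.etree.ElementTree as ET

# Hypothetical element names and values, for illustration only; not the SMPTE 2032-2 schema.
manifest = ET.Element("Manifest")
ET.SubElement(manifest, "Sender").text = "example-post-production-company"
ET.SubElement(manifest, "Receiver").text = "example-broadcaster"
ET.SubElement(manifest, "Deadline").text = "2007-06-01T12:00:00Z"
package = ET.SubElement(manifest, "Package")
ET.SubElement(package, "File", name="programme-episode-01.mxf", sizeBytes="53687091200")
print(ET.tostring(manifest, encoding="unicode"))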
MDP is based on existing open technologies such as XML, HTTP and TLS. The protocol is specified in a layered way to allow the adoption of new technologies (e.g. Web Services protocols such as SOAP and WSDL) as required.
Since early 2005, multiple implementations based on draft versions of the Media Dispatch Protocol have been in use, both for technical testing, and, since April 2005, for real-world production work. The experience with these implementations, both at the engineering level, and at the practical production level, has been rolled into the 1.0rcX specification.
A newer, and more complete, open-source reference implementation is now available on SourceForge.
Media Dispatch Protocol (MDP) has been standardized by a SMPTE Working Group under the S22 Committee. This work has been published as SMPTE 2032-1-2007 (MDP specification), 2032-2-2007 (MDP/XML/HTTP mapping specification) and 2032-3-2007 (MDP Target pull profile specification). MDP is also supported by SMPTE Engineering Guideline EG 2032-4-2007 covering the use of MDP.
External links
Pro-MPEG homepage
Sourceforge project page
SMPTE standards page
Broadcast engineering
Film and video technology
Network file transfer protocols
SMPTE standards | Media Dispatch Protocol | [
"Engineering"
] | 574 | [
"Broadcast engineering",
"Electronic engineering"
] |
1,597,688 | https://en.wikipedia.org/wiki/Cytorrhysis | Cytorrhysis is the permanent and irreparable damage to the cell wall after the complete collapse of a plant cell due to the loss of internal positive pressure (hydraulic turgor pressure). Positive pressure within a plant cell is required to maintain the upright structure of the cell wall. Desiccation (relative water content of less than or equal to 10%) resulting in cellular collapse occurs when the ability of the plant cell to regulate turgor pressure is compromised by environmental stress. Water continues to diffuse out of the cell after the point of zero turgor pressure, where internal cellular pressure is equal to the external atmospheric pressure, has been reached, generating negative pressure within the cell. That negative pressure pulls the center of the cell inward until the cell wall can no longer withstand the strain. The inward pressure causes the majority of the collapse to occur in the central region of the cell, pushing the organelles within the remaining cytoplasm against the cell walls. Unlike in plasmolysis (a phenomenon that does not occur in nature), the plasma membrane maintains its connections with the cell wall both during and after cellular collapse.
Cytorrhysis of plant cells can be induced in laboratory settings if they are placed in a hypertonic solution where the size of the solutes in the solution inhibit flow through the pores in the cell wall matrix. Polyethylene glycol is an example of a solute with a high molecular weight that is used to induce cytorrhysis under experimental conditions. Environmental stressors which can lead to occurrences of cytorrhysis in a natural setting include intense drought, freezing temperatures, and pathogens such as the rice blast fungus (Magnaporthe grisea).
Mechanisms of avoidance
Desiccation tolerance refers to the ability of a cell to successfully rehydrate without irreparable damage to the cell wall following severe dehydration. Avoiding cellular damage from the metabolic, mechanical, and oxidative stresses associated with desiccation is an obstacle that must be overcome in order to maintain desiccation tolerance. Many of the mechanisms utilized for drought tolerance are also utilized for desiccation tolerance; however, the terms desiccation tolerance and drought tolerance should not be interchanged, as the possession of one does not necessarily correlate with possession of the other. High desiccation tolerance is a trait typically observed in bryophytes, which include the hornwort, liverwort and moss plant groups, but it has also been observed in angiosperms to a lesser extent. Collectively these plants are known as resurrection plants.
Resurrection plants
Many resurrection plants use constitutive and inducible mechanisms to deal with drought and then later desiccation stress. Protective proteins such as cyclophilins, dehydrins, and LEA proteins are maintained in a desiccation-resistant species at levels typically only seen during drought stress in desiccation-sensitive species, providing a greater protective buffer as inducible mechanisms are activated. Some species also continuously produce anthocyanins and other polyphenols. An increase in the hormone ABA is typically associated with activation of inducible metabolic pathways. Production of sugars (predominantly sucrose), aldehyde dehydrogenases, heat shock factors, and other LEA proteins is upregulated after activation to further stabilize cellular structures and function. Composition of the cell wall structure is altered to increase flexibility so folding can take place without irreparably damaging the structure of the cell wall. Sugars are utilized as water substitutes by maintaining hydrogen bonds within the cell membrane. Photosynthesis is shut down to limit production of reactive oxygen species, and eventually all metabolic processes are drastically reduced, the cell effectively becoming dormant until rehydration.
References
Plant physiology | Cytorrhysis | [
"Biology"
] | 764 | [
"Plant physiology",
"Plants"
] |
1,597,736 | https://en.wikipedia.org/wiki/Joint%20European%20standard%20for%20size%20labelling%20of%20clothes | The joint European standard for size labelling of clothes, formally known as the EN 13402 Size designation of clothes, is a European standard for labelling clothes sizes. The standard is based on body dimensions measured in centimetres, and as such, and its aim is to make it easier for people to find clothes in sizes that fit them.
The standard aims to replace older clothing size systems that were in popular use before the year 2007, but the degree of its adoption has varied between countries. For bras, gloves and children's clothing it is already the de facto standard in most of Europe. Few other countries are known to have followed suit.
The Spanish Ministry of Health and Consumer Affairs has commissioned a study to categorize female body types with a view to harmonising Spanish clothing sizes with EN-13402.
Background
There are three approaches towards size-based labelling of clothes:
Body dimensions The label states the range of body measurements for which the product was designed. (For example: a bike helmet label stating "head girth: 56–60 cm" or shoes labeled "foot length: 280 mm")
Product dimensions The label states characteristic dimensions of the product. (For example: a jeans label stating the inner leg length of the jeans in centimeters or inches, but not the inner leg measurement of the intended wearer)
Ad hoc sizes or vanity sizes The label states a size number or code with no obvious relationship to any measurement. (For example: Size 12, XL)
Traditionally, clothes have been labelled using many different ad hoc size systems.
This approach has led to a number of problems:
For many types of garments, size cannot be adequately described by a single number, because a good fit requires a match between two (or sometimes three) independent body dimensions. This is a common issue in sizing jeans.
Ad hoc sizes have changed with time, due to changing demographics and increasing rates of obesity. This has been characterised in media as vanity sizing.
Scalar ad hoc sizes based on 1950s anthropometric studies are no longer adequate, as changes in nutrition and lifestyle have shifted the distribution of body dimensions.
Mail order requires accurate methods for predicting the best-fitting size.
Country-specific and vendor-specific labels incur additional costs.
Therefore, in 1996, the European standards committee CEN/TC 248/WG 10 started the process of designing a new, modern system of labelling clothes sizes, resulting in the standard EN 13402 "Size designation of clothes".
It is based on:
body dimensions
the metric system (SI)
data from new anthropometric studies of the European population performed in the late 1990s
similar existing international standards (ISO 3635, etc.)
EN 13402-1: Terms, definitions and body measurement procedure
The first part of the standard defines the list of body dimensions to be used for designating clothing sizes, together with anatomical explanations and measurement guidelines. All body dimensions (excluding one's body mass) are measured in centimetres, preferably without clothes on, or with the underwear the wearer expects to be wearing underneath the garment.
The standard also defines a pictogram that can be used in language-neutral labels to indicate one or several of the following body dimensions.
Head girth Maximum horizontal girth (circumference) of the head, measured above the ears
Neck girth Girth of the neck measured with the tape measure passed 2 cm below the Adam's apple, and at the level of the 7th cervical vertebra
Chest girth (♂ men) Maximum horizontal girth measured during normal breathing, with the subject standing erect and the tape measure passed over the shoulder blades (scapulae), under the armpits (axillae), and across the chest
Bust girth (♀ women) Maximum horizontal girth measured during normal breathing with the subject standing erect and the tape measure passed horizontally under the armpits (axillae) and across the bust prominence (preferably measured with moderate tension over a brassiere that is expected to be worn underneath)
Underbust girth (♀ women) Horizontal girth of the body measured just below the breasts
Waist girth Girth of the natural waistline between the top of the hip bones (iliac crests) and the lower ribs, measured with the subject breathing normally and standing erect with the abdomen relaxed
Hip girth (♀ women) Horizontal girth measured round the buttocks at the level of maximum circumference
Height Vertical distance between the crown of the head and the soles of the feet, measured with the subject standing erect, without shoes and with the feet together (for infants not yet able to stand upright: length of the body measured in a straight line from the crown of the head to the soles of the feet)
Inside leg length Distance between the crotch and the soles of the feet, measured in a straight vertical line with the subject erect, feet slightly apart, and the weight of the body equally distributed on both legs
Arm length Distance, measured using the tape measure, from the armscye/shoulder line intersection (acromion), over the elbow, to the far end of the prominent wrist bone (ulna), with the subject's right fist clenched and placed on the hip, and with the arm bent at 90°
Hand girth Maximum girth measured over the knuckles (metacarpals) of the open right hand, fingers together and thumb excluded
Foot length Horizontal distance between perpendiculars, in contact, with the end of the most prominent toe and the most prominent part of the heel, measured with the subject standing barefoot and the weight of the body equally distributed on both feet
Body mass Measured with a suitable balance in kilograms.
EN 13402-2: Primary and secondary dimensions
The second part of the standard defines for each type of garment one "primary dimension". This is the body measure according to which the product must be labelled. Where men's garments use the chest girth, women's clothes are designed for a certain bust girth.
For some types of garment, a single measure may not be sufficient to select the right product. In these cases, one or two "secondary dimensions" can be added to the label.
The following table shows the primary (in bold) and secondary dimensions listed in the standard, leaving out the redundant words girth, length and size for better overview.
EN 13402–3: Measurements and intervals
The third part of the standard defines preferred numbers of primary and secondary body dimensions.
The product should not be labelled with the average body dimension for which the garment was designed (i.e., not "height: 176 cm."). Instead, the label should show the range of body dimensions from half the step size below to half the step size above the design size (e.g., "height: 172–180 cm.").
For heights, for example, the standard recommends generally to use the following design dimensions, with a step size of 8 cm:
For trousers, the recommended step size for height is 4 cm:
The standard defines similar tables for other dimensions and garments, only some of which are shown here.
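As a minimal sketch of the labelling rule described above (the 176 cm design height and the 8 cm and 4 cm steps are the values given in this section; the function name is invented):

def size_label_range(design_value_cm, step_cm):
    # EN 13402-3 rule: label the range from half a step below to half a step above the design value.
    half = step_cm / 2
    return f"{design_value_cm - half:.0f}–{design_value_cm + half:.0f} cm"

print("height:", size_label_range(176, 8))  # height: 172–180 cm (general step size)
print("height:", size_label_range(176, 4))  # height: 174–178 cm (trousers step size)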
Men
The standard sizes and ranges for chest and waist girth are defined in steps of 4 cm:
drop = waist girth − chest girth.
Example: While manufacturers will typically design clothes for chest girth = 100 cm such that it fits waist girth = 88 cm, they may also want to combine that chest girth with neighbouring waist girth step sizes 84 cm or 92 cm, to cover these drop types (−16 cm and −8 cm) as well.
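A minimal sketch restating that drop calculation for the chest-girth-100 example above:

def drop(waist_girth_cm, chest_girth_cm):
    # Drop type as defined above: waist girth minus chest girth.
    return waist_girth_cm - chest_girth_cm

for waist in (84, 88, 92):
    print(f"chest 100 cm / waist {waist} cm: drop {drop(waist, 100):+d} cm")
# prints drops of -16, -12 and -8 cm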
The standard also suggests that neck girth can be associated with chest girth:
The standard further suggests that arm length can be associated with height:
Women
Dress sizes
The standard sizes and ranges for bust, waist and hip girth are mostly based on a step of 4 cm, for larger sizes 5 cm (hip) or 6 cm (bust and waist):
Bra sizes
The European standard EN 13402 also defines bra sizes based on the "bust girth" and the "underbust girth". Bras are labeled with the underbust girth (rounded to the nearest multiple of 5 cm), followed by a letter code that indicates the "cup size" defined below, according to this table defined by the standard.
The standard sizes for brassieres are based on a step of 5 cm:
The secondary dimension cup size can be expressed in terms of the difference
cup size = bust girth − underbust girth
and can be labelled compactly using a letter code appended to the underbust girth:
Example 1 Bra size 70B is suitable for women with underbust girth 68–72 cm and bust girth from 82–84 cm to 86–88 cm.
Example 2 A woman with an underbust girth of 89 cm and a bust girth of 108 cm has cup size 19 cm (= 108 cm – 89 cm) or "D". Her underbust girth rounded to the nearest multiple of 5 cm is 90 cm. Therefore, her bra size according to the standard is 90D.
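A minimal sketch of the calculation used in the two examples above; the cup-letter bands are stated here only approximately, as 2 cm steps from AA (10–12 cm) to G (24–26 cm), and the standard's own table remains authoritative:

def en13402_bra_size(underbust_cm, bust_cm):
    # Band: underbust girth rounded to the nearest multiple of 5 cm.
    band = 5 * round(underbust_cm / 5)
    # Cup: letter code for the bust-minus-underbust difference (approximate 2 cm bands).
    diff = bust_cm - underbust_cm
    for upper, letter in [(12, "AA"), (14, "A"), (16, "B"), (18, "C"),
                          (20, "D"), (22, "E"), (24, "F"), (26, "G")]:
        if diff < upper:
            return f"{band}{letter}"
    return f"{band}?"  # difference outside the bands listed here

print(en13402_bra_size(89, 108))  # Example 2 above: 90D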
Letter codes
For clothes where a larger step size is sufficient, the standard also defines a letter code. This code represents the bust girth for women and the chest girth for men. The standard does not define such a code for children.
Each range combines two adjacent size steps. The ranges could be extended below XXS or above 3XL if necessary.
EN 13402-4: Coding system
The fourth part of the standard is still under review. It will define a compact coding system for clothes sizes. This was originally intended primarily for industry use in databases and as a part of stock-keeping identifiers and catalogue ordering numbers, but later users have also expressed a desire to use compact codes for customer communication. Writing out all the centimetre figures of all the primary and secondary measures from EN 13402-2 can – in some cases – require up to 12 digits. The full list of centimetre figures on the pictogram contains a lot of redundancy and the same information can be squeezed into fewer characters with lookup tables. EN 13402-4 will define such tables.
An earlier draft of this part of the standard attempted to list all in-use combinations of EN 13402-3 measures and assigned a short 2- or 3-digit code to each. Some of the industry representatives involved in the standardization process considered this approach too restrictive. Others argued that the primary dimension in centimetres should be a prominent part of the code. Therefore, this proposal, originally expected to be adopted in 2005, was rejected.
Since then, several new proposals have been presented to the CEN working group. One of these, tabled by the European Association of National Organisations of Textile Traders (AEDT), proposes a 5-character alphanumeric code, consisting of the 3-digit centimetre figure of the primary body dimension, followed by one or two letters that code a secondary dimension, somewhat like the system already defined for bra sizes. For example, an item designed for 100 cm bust girth, 104 cm hip girth and 176 cm height could bear the compact size code "100BG". This proposal was agreed upon in 2006, but later disregarded. A paper by Bogusławska-Bączek published in 2010 showed that there were still significant difficulties in identifying clothing sizes.
Related links
Clothing sizes
Shoe size, hereunder the international Mondopoint
US standard clothing size
Vanity sizing
References
External links
All change for clothes sizes – press release by the British Standards Institution (11 March 2002)
Dress size harmonization – press release by the British Standards Institution (24 October 2003)
John Scrimshaw: One size really might fit all. Fashion Business International, March 2004.
Karryn Miller: Sizing a headache for globalising apparel industry. just-style, 27 July 2010.
BodyDim: program for calculating out EN13402 values
European clothing standard EN 13402 pictogram generator
Sizes in clothing
13402
Metrication
Fashion design | Joint European standard for size labelling of clothes | [
"Physics",
"Mathematics",
"Engineering"
] | 2,471 | [
"Sizes in clothing",
"Fashion design",
"Physical quantities",
"Quantity",
"Size",
"Design"
] |
1,597,853 | https://en.wikipedia.org/wiki/Xanadu%20Houses | The Xanadu Houses were a series of experimental homes built to showcase examples of computers and automation in the home in the United States. The architectural project began in 1979, and during the early 1980s three houses were built in different parts of the United States: one each in Kissimmee, Florida; Wisconsin Dells, Wisconsin; and Gatlinburg, Tennessee. The houses included novel construction and design techniques, and became popular tourist attractions during the 1980s.
The Xanadu Houses were notable for their easy, fast, and cost-effective construction as self-supporting monolithic domes of polyurethane foam without using concrete. They were ergonomically designed, and contained some of the earliest home automation systems. The Kissimmee Xanadu, designed by Roy Mason, was the most popular, and at its peak was attracting 1000 visitors every day. The Wisconsin Dells and Gatlinburg houses were closed and demolished in the early 1990s; the Kissimmee Xanadu House was closed in 1996 and demolished in October 2005.
History
Early development
Bob Masters was an early pioneer of houses built of rigid insulation. Before conceiving the Xanadu House concept, Masters designed and created inflatable balloons to be used in the construction of houses. He was inspired by architect Stan Nord Connolly's Kesinger House in Denver, Colorado, one of the earliest homes built from insulation. Masters built his first balloon-constructed house exterior in 1969 in less than three days during a turbulent snowstorm, using the same methods later used to build the Xanadu houses.
Masters was convinced that these dome-shaped homes built of foam could work for others, so he decided to create a series of show homes in the United States. Masters's business partner Tom Gussel chose the name "Xanadu" for the homes, a reference to Xanadu, the summer capital of Yuan, which is prominently featured in Samuel Taylor Coleridge's famous poem Kubla Khan. The first Xanadu House opened in Wisconsin Dells, Wisconsin. It was designed by architect Stewart Gordon and constructed by Masters in 1979. It was in area, and featured a geodesic greenhouse. 100,000 people visited the new attraction in its first summer.
Popularity
The most popular Xanadu house was the second house, designed by architect Roy Mason. Masters met Mason in 1980 at a futures conference in Toronto. Mason had worked on a similar project prior to his involvement in the creation of the Kissimmee Xanadu House — an "experimental school" on a hill in Virginia which was also a foam structure. Both Mason and Masters were influenced by other experimental houses and building concepts which emphasized ergonomics, usability, and energy efficiency. These included apartments designed by architect Kisho Kurokawa featuring detachable building modules and more significant designs including a floating habitat made of fiberglass designed by Jacques Beufs for living on water surfaces, concepts for living underwater by architect Jacques Rougerie and the Don Metz house built in the 1970s which took advantage of the earth as insulation. Fifty years before Xanadu House, another house from the 1933 Homes of Tomorrow Exhibition at the Century of Progress Exposition in Chicago introduced air conditioning, forced air heating, circuit breakers and electric eye doors.
Mason believed Xanadu House would alter people's views of houses as little more than inanimate, passive shelters against the elements. "No one's really looked at the house as a total organic system", said Mason, who was also the architecture editor of The Futurist magazine. "The house can have intelligence and each room can have intelligence." The estimated cost of construction for one home was $300,000. Roy Mason also planned a low cost version which would cost $80,000, to show that homes using computers do not have to be expensive. The low cost Xanadu was never built. Approximately 1,000 homes were built using this type of construction.
The Walt Disney Company opened Epcot Center in Florida on October 1, 1982 (originally envisioned as the Experimental Prototype Community of Tomorrow). Masters, fellow Aspen High School teacher, Erik V Wolter, and Mason decided to open a Xanadu House several miles away in Kissimmee. It eventually opened in 1983, after several years of research into the concepts Xanadu would use. It was over in size, considerably larger than the average house because it was built as a showcase. At its peak in the 1980s, under the management of Wolter, more than 1,000 people visited the new Kissimmee attraction every day. A third Xanadu House was built in Gatlinburg, Tennessee. Shortly after the Xanadu Houses were built and opened as visitor attractions, tourism companies began to advertise them as the "home of the future" in brochures encouraging people to visit.
Demise
By the early 1990s, the Xanadu houses began to lose popularity because the technology they used was quickly becoming obsolete, and as a result the houses in Wisconsin and Tennessee were demolished, while the Xanadu House in Kissimmee continued to operate as a public visitor attraction until it was closed in 1996. It was consequently put up for sale in 1997 and was sold for office and storage use. By 2001, the Kissimmee house had suffered greatly from mold and mildew throughout the interior due to a lack of maintenance since being used as a visitor attraction, it was put up for sale again for an asking price of US$2 million. By October 2005, the last of the Xanadu houses had been demolished, following years of abandonment and use by the homeless.
The Kissimmee house was featured in the 2007 movie Urban Explorers: Into the Darkness. It showed the house in disrepair with doors wide open, mold growing everywhere and a homeless man living inside. The "explorers" walked through the house filming the decay firsthand as the homeless man slept in a chair on the main floor. At the end of the segment, the man wakes up and threatens the "explorers" telling them to leave his home.
Design
Construction
Construction of the Xanadu house in Kissimmee, Florida, began with the pouring of a concrete slab base and the erection of a tension ring in diameter to anchor the domed roof of what would become the "Great Room" of the house. A pre-shaped vinyl balloon was formed and attached to the ring, and then inflated by air pressure from large fans. Once the form was fully inflated, its surface was sprayed with quick-hardening polyurethane plastic foam. The foam, produced by the sudden mixture of two chemicals that expand on contact to 30 times their original volume, hardened almost instantly. Repeated spraying produced a five-to-six-inch-thick structurally sound shell within a few hours. Once the foam cured, the plastic balloon form was removed to be used again. Once the second dome was completed and the balloon form removed, the two rooms were joined by wire mesh which was also sprayed with foam to form a connecting gallery or hall. This process was repeated until the house was complete. Window, skylight, and door openings were cut and the frames foamed into place. Finally, the interior of the entire structure was sprayed with a coating of fireproof material that also provided a smooth, easy-to-clean finish for walls and ceilings. The exterior was given a coat of white elastomeric paint as the final touch.
Interior
A Xanadu House was ergonomically designed, with future occupants in mind. It used curved walls, painted concrete floors rather than carpets, a light color scheme featuring cool colors throughout, and an open-floor plan linking rooms together without the use of doors. It had at least two entrances, and large porthole-type windows. The interior of the house was cave-like, featuring cramped rooms and low ceilings, although it is not clear whether these accounts describe the same Xanadu House with a thirty-foot dome. The interiors used a cream color for the walls, and a pale green for the floor.
The Xanadu house in Kissimmee, Florida used an automated system controlled by Commodore microcomputers. The house had fifteen rooms; of these the kitchen, party room, health spa, and bedrooms all used computers and other electronic equipment heavily in their design. The automation concepts which Xanadu House used are based on original ideas conceived in the 1950s and earlier. The Xanadu Houses aimed to bring the original concepts into a finished and working implementation. Inside the house, there was an electronic tour guide for the benefit of visitors, and the family room featured video screens that displayed computer-graphics art. These art displays were constantly changing, being displayed on video screens as opposed to static mediums.
The home also featured fire and security systems, along with a master bath that included adjustable weather conditions and a solar-heated steam bath.
At the center of the house was the "great room", the largest in the house. It featured a large false tree supporting the roof, and also acted as part of the built-in heating system. The great room also included a fountain, small television set, and a video projector. Nearby was the dining room, featuring a glass table with a curved seat surrounding it; behind the seats was a large window covering the entire wall. The family room featured walls covered with television monitors and other electronic equipment. The entertainment center in the family room was described as an "electronic hearth" by the home's builders. It was planned as a gathering place for family members and relatives along the same lines as a traditional hearth with a fireplace.
The kitchen was automated by "autochef", an electronic dietitian which planned well-balanced meals. Meals could be cooked automatically at a set date and time. If new food was required, it could either be obtained via tele-shopping through the computer system or from Xanadu's own greenhouse. The kitchen's computer terminal could also be used for the household calendar, records, and home bookkeeping.
The Xanadu homes also suggested a way to do business at home with the office room and the use of computers for electronic mail, access to stock and commodities trading, and news services.
Computers in the master bedroom allowed for other parts of the house to be controlled. This eliminated chores such as having to go downstairs to turn off the coffee pot after one had gone to bed. The children's bedroom featured the latest in teaching microcomputers and "videotexture" windows, whose realistic computer-generated landscapes could shift in a flash from scenes of real places anywhere in the world to imaginary scenes. The beds at the right of the room retreated into the wall to save space and cut down on clutter; the study niches were just the right size for curling up all alone with a pocket computer game or a book.
In the spa, people could relax in a whirlpool, sun sauna, and environmentally-controlled habitat, and exercise with the assistance of spa monitors. One of the advantages of using computers in the home is security. In Xanadu House, a HAL-type voice spoke when someone entered to make the intruder think someone was home.
Concerns over energy consumption
An initial concern was that electricity costs would be excessive, since several computers would be operating continuously. Mason figured that a central computer could control the energy consumption of all the other computers in the house.
See also
Buckminster Fuller's Dymaxion house
House of Innovation
Sanzhi UFO houses
Notes
References
Further reading
External links
Last of the Xanadus at RoadsideAmerica.com
Architectural theory
Domes
Houses in Osceola County, Florida
Houses in Sevier County, Tennessee
Houses in Wisconsin
House types
Postmodern architecture in the United States
Modernist architecture in Florida
Modernist architecture in Tennessee
Modernist architecture in Wisconsin
Expressionist architecture
Futurist architecture
Organic architecture | Xanadu Houses | [
"Engineering"
] | 2,422 | [
"Architectural theory",
"Architecture"
] |
1,597,882 | https://en.wikipedia.org/wiki/Timeline%20of%20jet%20power | This article outlines the important developments in the history of the development of the air-breathing (duct) jet engine. Although the most common type, the gas turbine powered jet engine, was certainly a 20th-century invention, many of the needed advances in theory and technology leading to this invention were made well before this time.
The jet engine was clearly an idea whose time had come. Frank Whittle submitted his first patent in 1930. By the late 1930s there were six teams chasing development, three in Germany, two in the UK and one in Hungary. By 1942 they had been joined by another half dozen British companies, three more in the United States based on British technology, and early efforts in the Soviet Union and Japan based on British and German designs respectively. For some time after World War II, British designs dominated, but by the 1950s there were many competitors, particularly in the US with its huge arms-buying programme.
Prehistoric times
Ordovician period: the first known cephalopods appear; they swim by means of a natural built-in reciprocating hydrojet.
Ancient times
1st century AD: Aeolipile described by Hero of Alexandria – steam jet/rocket engine on a bearing
The leadup (1791–1929)
1791: John Barber receives British patent #1833 for A Method for Rising Inflammable Air for the Purposes of Producing Motion and Facilitating Metallurgical Operations. In the patent he describes a gas turbine and various applications for it including propulsion of ships, barges and boats by reaction.
1884: Charles Algernon Parsons patents the steam turbine. In the patent application he notes that the turbine could be driven "in reverse" to act as a compressor. He suggests using a compressor to feed air into a furnace, and a turbine to extract power to run the compressor. Although intended for factory use, he is clearly describing the gas turbine.
1887: Gustaf de Laval introduces nozzle designs for small steam turbines.
1900: Sanford Alexander Moss publishes a paper on turbocompressors. He builds and runs a testbed example in 1903.
1903: Ægidius Elling builds a gas turbine using a centrifugal compressor which runs under its own power. By most definitions, this is the first working gas turbine.
1906: The Armengaud-Lemale gas turbine tested in France. This was a relatively large machine which included a 25 stage centrifugal compressor designed by Auguste Rateau. The gas turbine could sustain its own air compression but was too inefficient to produce useful work.
1907: Victor Karavodine builds the first pulsejet engine based on his 1906 patent.
1908: René Lorin patents a design for the ramjet engine.
1908: Georges Marconnet patents the first valveless pulsejet and suggests its use in aircraft.
1910: Romanian inventor Henri Coandă builds the Coandă-1910, which he exhibits at the International Aeronautic Salon in Paris. It uses a ducted fan for propulsion instead of a propeller. Years later he claimed that it burned fuel in the duct and was thus a motorjet, but historians dispute this claim, as well as his claim that the aircraft flew in December 1910 before crashing and burning.
1915: Albert Fonó devised a solution for increasing the range of artillery, comprising a gun-launched projectile which was to be united with a ramjet propulsion unit. This was to make it possible to obtain a long range with low initial muzzle velocities, allowing heavy shells to be fired from relatively lightweight guns.
1916: Auguste Rateau suggests using exhaust-powered compressors to improve high-altitude performance, the first example of the turbocharger.
1917: Sanford Alexander Moss starts work on turbochargers at General Electric, which goes on to be the world leader in this technology.
1917: James Stocker Harris patents a "Motor Jet" design on behalf of his brother-in-law Robert Alexander Raveau Bolton.
1920: W.J. Stern reports to the Royal Air Force that there is no future for the turbine engine in aircraft. He bases his argument on the extremely low efficiency of existing compressor designs. Stern's paper is so convincing there is little official interest in gas turbine engines anywhere, although this does not last long.
1921: Maxime Guillaume patents the axial-flow turbine engine. It uses multiple stages in both the compressor and turbine, combined with a single very large combustion chamber. Although slightly different in form, the design is significantly similar to future jet engines in operation.
1923: Edgar Buckingham at the United States National Bureau of Standards publishes a report on jets, coming to the same conclusion as W.J. Stern, that the turbine engine is not efficient enough. In particular he notes that a jet would use five times as much fuel as a piston engine.
1926: Alan Arnold Griffith publishes his groundbreaking paper Aerodynamic Theory of Turbine Design, changing the low confidence in jet engines. In it he demonstrates that existing compressors are "flying stalled", and that major improvements can be made by redesigning the blades from a flat profile into an airfoil, going on to mathematically demonstrate that a practical engine is definitely possible and showing how to build a turboprop.
1927: Aurel Stodola publishes his "Steam and Gas Turbines" - basic reference for jet propulsion engineers in the USA.
1927: A testbed single-shaft turbocompressor based on Griffith's blade design is tested at the Royal Aircraft Establishment. Known as Anne, the tests are successful and plans are made to build a complete compressor-turbine assembly known as Betty.
1929: Frank Whittle's thesis on future aircraft design is published. In it he talks about the needs for high-speed flight and the use of turbojets as the only reasonable solution to the problem of propeller efficiency.
1929: Boris Stechkin publishes first theory of supersonic ramjet, based on compressible fluid theory.
First turbojet engines (1930–38)
1930: Whittle presents a complete jet engine design to the Air Ministry. They pass the paper to Alan Griffith at the Royal Aircraft Establishment, who says the idea is impracticable, pointing out a mathematical error, noting the low efficiency of his design, and stating that Whittle's use of a centrifugal compressor would make his proposal useless for aircraft applications.
1930: Whittle receives official notice that the Air Ministry is not interested in his concepts, and that they do not even feel that it is worthy of making secret. He is devastated, but friends in the Royal Air Force convince him to patent the idea anyway. This turns out to be a major stroke of luck, because if the Air Ministry had made the idea secret, they would have become the official owners of the rights to the concept. In his patent, Whittle cleverly hedges his bets, and describes an engine with two axial compressor stages and one centrifugal, thus anticipating both routes forward.
1930: Schmidt patents a pulsejet engine in Germany.
1931: Secondo Campini patents his motorjet engine, referring to it as a thermojet. (A motorjet is a crude form of hybrid jet engine in which the compressor is powered by a piston engine, rather than a turbine.)
1933: Hans von Ohain writes his thesis at the University of Göttingen, describing an engine similar to Frank Whittle's with the exception that it uses a centrifugal "fan" as the turbine as well as the compressor. This design is a dead-end; no "centrifugal-turbine" jet engine will ever be built.
1933: Yuri Pobedonostsev and Igor Merkulov test the hydrogen-powered GIRD-04 ramjet engine. The first supersonic flight of a jet-propelled object is achieved with artillery-launched ramjets later that year.
1934: von Ohain hires a local mechanic, Max Hahn, to build a prototype of his engine design at Hahn's garage.
1934: Secondo Campini starts work on the Campini Caproni CC.2, based on his "thermojet" engine.
1935: Whittle allows his patent to lapse after finding himself unable to pay the £5 renewal fee. Soon afterward he is approached by ex-RAF officers Rolf Dudley-Williams and James Collingwood Tinling with a proposal to set up a company to develop his design and Power Jets, Ltd is created.
1935: Virgilio Leret Ruiz is granted a patent (submitted January 1935, granted March 1935) in Madrid for a 'continuous reaction turbocompressor, for propulsion of aircraft, and in general all types of vehicles'. Work commenced at the Hispano-Suiza factory in 1936, months after Leret's execution by Francoist forces.
1936: von Ohain is introduced to Ernst Heinkel by a former professor. After being grilled by Heinkel engineers for hours, they conclude his idea is genuine. Heinkel hires von Ohain and Hahn, setting them up at their Rostock-area factory.
1936: Junkers starts work on axial-flow turboprop designs under the direction of Herbert Wagner and Adolf Müeller.
1936: Junkers Motoren (Jumo) is merged with Junkers, formerly separate companies.
1936: French engineer René Leduc, having independently re-discovered René Lorin's design, successfully demonstrates the world's first operating ramjet. The Armée de l'Air orders a prototype aircraft, the Leduc 010, a few months later.
April, 1937: Whittle's experimental centrifugal engine is tested at the British Thomson-Houston plant in Rugby
September, 1937: The Heinkel HeS 1 experimental hydrogen fuelled centrifugal engine is tested at Hirth.
September, 1937: von Ohain's Heinkel HeS 1 is converted to run on gasoline. Ernst Heinkel gives the go-ahead to develop a flight-quality engine and a testbed aircraft to put it in.
1937: Hayne Constant, Griffith's partner at the RAE, starts negotiations with Metropolitan-Vickers (Metrovick), a British heavy industry firm, to develop a Griffith-style turboprop.
1937: At Junkers, Wagner and Müller decide to re-design their work as a pure jet.
1938: Metrovick receives a contract from the Air Ministry to start work with Constant.
1938: György Jendrassik starts work on a turboprop engine of his own design.
April, 1938: Hans Mauch takes over the RLM rocket development office. He expands the charter of his office and starts a massive jet development project, under Helmut Schelp. Mauch spurns Heinkel and Junkers, concentrating only on the "big four" engine companies, Daimler-Benz, BMW, Jumo and Bramo. Mauch and Schelp visit all four over the next few months, and find them uninterested in the jet concept.
1938: A small team at BMW led by Hermann Östrich builds and flies a simple thermojet quickly prompting them to design a true jet engine.
1938: The Heinkel He 178 V1 jet testbed is completed, awaiting an engine.
1938: The Heinkel HeS 3 "flight quality" engine is tested. This is the first truly usable jet engine. The engine flies on a Heinkel He 118 later that year, eventually becoming the first aircraft to be powered by jet power alone. This engine is tested until it burns out after a few months, and a second is readied for flight.
1938: Wagner's axial-flow engine is tested at Junkers.
1938: Messerschmitt starts the preliminary design of a twin-engine jet fighter under the direction of Waldemar Voight. This work developed into the Messerschmitt Me 262.
1939
Arkhip Mikhailovich Lyulka develops an early turbofan engine at the Kharkov Aviation Institute.
BMW's team led by Hermann Östrich tests their axial-flow design.
Bramo starts work on two axial-flow designs, the P.3301 and P.3302. The P.3301 is similar to Griffith's contrarotating designs, the P.3302 using a simpler compressor/stator system.
Bramo is bought out by BMW, who abandon their own jet project under Östrich, placing him in charge of Bramo's efforts.
Summer: Jumo is awarded a contract to develop an axial-flow engine, starting work under Anselm Franz. Müller decamps with half the team to Heinkel.
Frank Whittle's patent drawing for his engine is published in the German magazine Flugsport.
August: Heinkel He 178 V1, the first jet-powered aircraft, flies for the first time, powered by the HeS 3B.
September: A team from the Air Ministry visits Power Jets once again, but this time Frank Whittle demonstrates a jet engine at full power for a continuous 20-minute run. They are extremely impressed; contracts are quickly offered to Whittle to develop a flyable design, and production contracts are offered to practically every engine company in England. These companies also set up their own design efforts, reducing the possibility of financial rewards for Power Jets.
September: The Air Ministry also contracts Gloster to build an experimental airframe for testing Whittle's engines, the Gloster E.28/39
After hearing of Whittle's successful demonstration, Hayne Constant realizes that exhaust thrust is practical. The Metrovick efforts are quickly reworked into a turbojet design, the Metrovick F.2.
November: Müller's team restarts work on their axial-flow design at Heinkel, now known as the Heinkel HeS 30.
René Anxionnaz of France's Rateau company receives a patent on an advanced jet design incorporating bypass.
Leist joins Daimler-Benz and starts work on an advanced contra-rotating turbofan design, the Daimler-Benz DB 007
A shakeup at the RLM's engine division places Helmut Schelp in control, and results in development contracts for all existing engine designs. The designs are also given consistent naming, the Heinkel HeS 8 becoming the 109-001, the HeS 30 the -006, BMW's efforts the -002 and -003, and Jumo's the -004. Porsche's project becomes the -005, although work never starts on it. DB gets -007. Numbers starting in the 20s are saved for turboprops, and 500 and up for rockets.
1940
The Campini Caproni CC.2 flies for first time. The flights were highly publicized, and for many years the Italians were credited with having the first jet-powered aircraft.
NACA (National Advisory Committee for Aeronautics) starts work on a CC.2 like motorjet for assisted takeoffs, and they later design an aircraft based on it. This work ends in 1943 when turbojets start to mature, and rockets take over the role of JATO, or jet assisted takeoff.
von Ohain's larger Heinkel HeS 8 (-001) engine is tested.
BMW's P.3302 (-003) axial-flow engine is tested
September: Glider testing of the Heinkel He 280 twin-jet fighter begins, while it waits for the HeS 8 to mature.
September: Henry Tizard visits the United States to show them many of the advanced technologies the British are working on and looking for US production (the Tizard Mission). Among many other details, Tizard first mentions their work on jet engines.
October: Rover is selected to build the flight-quality Power Jets W.1. They set up shop at a disused mill in Barnoldswick, but also set up a parallel effort at another factory in Clitheroe staffed entirely by their own engineers. Frank Whittle is incensed.
November: The Junkers Jumo 004 axial-flow engine is tested.
November: Gloster Aircraft Company's proposal for a twin-engine jet fighter is accepted, becoming the Gloster Meteor.
December: Whittle's flight-quality W.1X runs for the first time.
The Lockheed Corporation starts work on the L-1000 axial-flow engine, the United States's first jet design.
The Northrop Corporation starts work on the T-37 Turbodyne, the United States's first turboprop design.
After only two years of development, the Jendrassik Cs-1 turboprop engine is tested. Designed to produce , combustion problems limit it to only when it first runs. Similar problems plagued early Whittle designs, but there the industry quickly provided assistance; it appears that György Jendrassik had no similar talent pool to draw upon.
1941
February: The Air Ministry places an order for 12 Gloster Meteors.
February: NACA starts testing their "Propulsive duct engine", a ramjet, unaware of earlier similar efforts. Since ramjets need to be moving in order to work, NACA engineers take the simple step of mounting it at the end of a long arm and spinning it.
April: The He 280 flies under its own power for the first time, powered by two Heinkel HeS 8 (-001) engines. The HeS 8's continue to have reliability issues.
May: The Gloster E.28/39 flies for the first time. Over the next few weeks, the top speed soon passes any existing propeller aircraft.
Müller's Heinkel HeS 30 (-006) axial-flow engine runs for the first time.
General Electric is awarded a USAAF contract to develop a turboprop engine, leading to the TG-100 / TG-31 / XT-31 series, and later the J35.
Work on the Jendrassik Cs-1 ends. Intended to power a twin-engine heavy fighter, the factory is selected to produce Daimler-Benz DB 605 engines under license for the Messerschmitt Me 210 instead.
October: A Power Jets W.2B is sent to General Electric to start production in the US. Sanford Alexander Moss is lured out of retirement to help on the project.
1942
The Metrovick F.2 is given a test rating, delivering between 1,800 and 2,000 lbf (8.9 kN).
Metrovick starts work on "thrust augmentation", adding a turbine and propellers to an F.2/2, which will lead to the F.3 (a high bypass design) with an extra over the F2/2.
Work on the BMW 002 is stopped as it is proving too complex. Work continues on the 003.
Work on the HeS 8 (-001) and HeS 30 (-006) is stopped, although the latter appears to be reaching production quality. Heinkel is ordered to continue on the more advanced Heinkel HeS 011.
The Messerschmitt Me 262 flies for the first time, powered by a Junkers Jumo 210 piston engine in the nose. The BMW 003 has been selected to power the production versions, but is not yet ready for flight tests. The design, offering more internal fuel capacity than the He 280, is selected over its now 003-powered competitor for production.
A Jumo 004 flies, fitted to a Messerschmitt Me 110.
The Daimler-Benz 007 axial-flow engine is tested, similar to Griffith's "contraflow" design that uses two contra-rotating compressor stages for added efficiency.
The "production-quality" BMW 003 is first tested.
March: The Rover W2B/26 experimental engine (STX) is first run. This was the straight-through design made by Rover without Whittle's knowledge; it would be adopted by Rolls-Royce as the basis for their Derwent engine after they took over from Rover (by which time four more W2B/26 engines were under test).
The British order a single-engined jet design from de Havilland.
July 18, 1942: The Messerschmitt Me 262, the first jet-powered fighter aircraft, flies for the first time under jet power.
July: Frank Whittle visits the United States to help with General Electric's efforts to build the W.1. The engine is running soon after, known as the "General Electric Type 1", and later as the I-16, referring to the thrust. They also start work on an improved version, the I-40, with thrust. The majority of United States jet engines from this time through the mid-1950s are licensed versions of British designs.
Whittle returns to Power Jets and starts development of the improved Power Jets W.2/500 and /700 engines, so named for their thrust in kilograms-force (kgf).
Westinghouse starts work on an axial-flow engine design, the WE-19.
October: The Bell XP-59 flies, powered by a General Electric Type I-A (W.1).
The Fieseler Fi 103 V-1 pulsejet powered "flying bomb" (cruise missile) flies for the first time.
Armstrong Siddeley starts work on an axial-flow design, the ASX.
December: After a meeting held at a pub, Rover agrees to hand over its jet development to Rolls-Royce, in exchange for the Rolls-Royce Meteor tank engine factory.
1943
January 1: Rolls takes over the Rover plants, although the official date is several months later. Stanley Hooker leads a team including Fred Morley, Arthur Rubbra and Harry Pearson. Several Rover engineers decide to stay on as well, including Adrian Lombard, leader of Rover's "offshoot" design team. They focus on making the W.2B production quality as soon as possible.
Only a few months after Rolls-Royce takes over from Rover, the W.2B/23, soon to be known as the Rolls-Royce Welland, enters production.
The parallel Rover design effort, the W.2B/26, is adopted by Rolls-Royce for further development and becomes the Rolls-Royce Derwent.
The de Havilland Goblin engine is tested, similar in most ways to the Derwent.
March: A license for the Goblin is taken out in the United States by Allis-Chalmers, later becoming the J36. Lockheed is awarded a contract to develop what would become the P-80 Shooting Star, powered by this engine.
Production of Jumo 004B starts.
Production of BMW 003A starts.
April 1: The Daimler-Benz DB 670 (also designated 109-007), the first turbofan to run, is operated on its testbed.
Throughout 1943, the Jumo 004 and BMW 003 continue to destroy themselves at an alarming rate due to turbine failures. Efforts in the United Kingdom, at one point years behind because of official indifference, have now caught up, thanks to the availability of high-temperature alloys that allow considerably more reliable high-heat sections in the British designs.
Design work on the BMW 018 starts.
The US decides to rename all existing jet projects with a single numbering scheme. The L-1000 becomes the J37, GE's Type I the J31, and Westinghouse's WE-19 the J30. Newer projects are fitted into the remaining 30s. Turboprop designs become the T series, also starting at 30.
June: Metrovick F.2/1 tested, fitted to an Avro Lancaster.
September: Allis-Chalmers runs into difficulty on the J36, and the Shooting Star project is re-engined with the General Electric J33, a licensed version of the W.2B/26, or Rolls-Royce Derwent. GE later modifies the design to produce over twice the thrust, at .
Frank Whittle's W.2B/700 engine is tested, fitted to a Vickers Wellington Mk II bomber.
March: Westinghouse's X19A axial-flow engine is bench tested at .
Miles Aircraft test an all-moving tailplane as part of the Miles M.52 supersonic research aircraft design effort.
A Welland-powered prototype Gloster Meteor flies.
The Goblin-powered de Havilland Vampire flies.
Lyul'ka VDR-2 axial-flow engine tested, the first Soviet jet design.
The General Electric J31, their version of the W.2B/23, is tested.
November: The Metrovick F.2 is tested on a modified Gloster Meteor. Although more powerful, smaller and more fuel efficient than the Welland, the design is judged too complex and failure-prone; in his quest for perfection, Griffith has delivered an impractical design. Work continues on a larger version with an additional compressor stage that more than doubles the power.
The Armstrong Siddeley ASX is tested.
The Metrovick F.2/3 is run but not developed further; work moves on to the 10-stage F.2/4.
1944
BMW tests the 003R, a "mixed-power" engine combining the BMW 003A turbojet with an additional rocket engine mounted in parallel to produce even more thrust.
April: With internal design efforts underway at most engine companies, Power Jets have little possibility of profitability, and are nationalized, becoming a pure research lab as the National Gas Turbine Establishment.
June: Design work on a gas turbine engine for powering tanks begins under the direction of Müller, who left Heinkel in 1942. The first such system, the GT 101, is completed in November and fit to a Panther tank for testing.
June: A Derwent II engine is modified with an additional turbine stage powering a gearbox and five-bladed propeller. The resulting RB.50, or Rolls-Royce Trent, is not further developed, but is test flown on a modified Gloster Meteor.
The Junkers Ju 287 jet bomber is tested.
The BMW 018 engine is tested. Work ends soon after when the entire tooling and parts supply are destroyed in a bombing raid.
The Junkers Jumo 012 engine is tested; it stands as the most powerful engine in the world for some time.
The J35, a development of an earlier turboprop effort, runs for the first time.
Ford builds a copy of the V-1's engine, known as the PJ-31-1.
The Ishikawajima Ne-20 first runs in Japan. Originally intending to build a direct copy of the BMW 003, the plans never arrived and the Japanese engineers instead built an entirely new design based on a single cutaway image and several photographs.
The Doblhof WNF-4 flies, the first ramjet-powered helicopter.
April 5: The nearly complete prototype of the Leduc 010 ramjet-powered aircraft, under construction at the Montaudran airfield near Toulouse, France, unbeknownst to German occupation authorities, is heavily damaged by a Royal Air Force bombing raid.
April: The Messerschmitt Me 262 first enters combat service in Germany.
June: The Messerschmitt Me 262 enters squadron service in Germany.
July: The Gloster Meteor enters squadron service in the United Kingdom.
27 July: First combat mission flown by a Gloster Meteor.
4 August: Gloster Meteors shoot down two pulsejet-powered V-1 flying bombs.
A design competition starts in Germany to build a simple jet fighter, the Volksjäger. The contract is eventually won by the Heinkel He 162 Spatz (sparrow), to be powered by the BMW 003.
October 27: After a design and development period of only six months, the Rolls-Royce Nene runs for the first time; it sees only limited use in the United Kingdom.
December: Northrop's T-37 turboprop is tested. The design never matures and work is later stopped in the late 1940s.
1945
The Nakajima Kikka flies for the first time on August 7, 1945, powered by two Ishikawajima Ne-20 turbojets, making it the first Japanese jet aircraft to fly.
Stanley Hooker scales the Nene down to Gloster Meteor size, producing the RB.37, also referred to, confusingly, as the Derwent V. A Derwent V-powered Meteor sets the world speed record at 606 mph at the end of the year. This success relegates the development of more powerful engines to a low priority.
The Junkers 022 turboprop runs.
An afterburner equipped Jumo 004 is tested.
Lyul'ka VDR-3 axial-flow engine tested.
Lyul'ka TR-1 axial-flow engine tested.
The RB.39 Rolls-Royce Clyde turboprop runs, combining axial and centrifugal stages in the compressor. Rolls-Royce abandon development, preferring to focus on the turbojet. A carrier-based naval strike aircraft, the Westland Wyvern, having already changed from its original Rolls-Royce Eagle piston engine, uses the alternative turboprop, the Armstrong Siddeley Python.
The Avia S-92, a version of the Me 262, is built in Czechoslovakia.
1946
January: A dispirited Frank Whittle resigns from what is left of Power Jets. Gradually the company is broken up, with only a small part remaining to administer its patents.
Development of the Rolls-Royce Dart starts. The Dart would go on to become one of the most popular turboprop engines made, with over 7,000 being produced before the production lines finally shut down in 1990.
Metrovick F.2/4 Beryl delivers 4,000 lbf (17.8 kN). Metrovick's jet turbine business is sold to Armstrong Siddeley.
1949
April 21: The Leduc 010, the world's first ramjet powered aircraft, finally completes its maiden flight in Toulouse, France. The aircraft's rate of climb exceeds that of the best contemporary turbojet powered fighters.
22 June: A Vickers VC.1 Viking flies with Rolls-Royce Nene turbojets, becoming the world's first pure-jet transport aircraft.
1950s
Late 1950s: The Rolls-Royce Conway, the world's first production turbofan, enters service, significantly improving fuel efficiency and paving the way for further improvements.
1952
2 January: the world's first flight of a geared turbofan, the Turbomeca Aspin, powering the Fouga Gemeaux test-bed aircraft.
2 May: the world's first commercial jet airliner to reach production, the de Havilland Comet, enters service with BOAC.
1953
The de Havilland Gyron, Halford's last jet design, runs for the first time. Before its cancellation two years later it has evolved to deliver 25,000 lbf (110,000 N) using reheat. Other comparable turbojet engines are developed at the same time, including the Canadian Orenda Iroquois.
1956
15 September: The Tu-104 medium-range jet airliner enters service with Aeroflot, becoming the world's first jet airliner to provide a sustained and successful service. The Tu-104 was the sole jetliner operating in the world between 1956 and 1958.
1958
October: The Boeing 707 enters service with Pan American. This aeroplane is largely credited with ushering in the Jet Age, achieving huge commercial success with few operating problems, unlike its competitors. It helped establish Boeing as one of the leading makers of passenger aircraft in the world.
1959
Sud Aviation Caravelle enters service: claimed as the first short/medium range jet airliner, first flight 27 May 1955.
1968
30 June: The TF39 high-bypass turbofan of 43,300 lbf (193 kN) enters service on the C-5 Galaxy transport, ushering in the age of wide-body transports.
1975
26 December: The Tu-144S, the first supersonic jet airliner, goes into mail and freight service between Moscow and Alma-Ata in preparation for passenger services, which commenced in November 1977.
1976
21 January: Concorde, the supersonic jet airliner, enters passenger service with British Airways and Air France.
1978
1 June: Tu-144 withdrawn from scheduled passenger service after 55 passenger flights due to reliability and safety problems.
1983
4 October: The turbojet-powered car Thrust2 raises the land speed record to 1,019 km/h (633 mph).
1997
15 October: ThrustSSC, the first supersonic car, powered by two turbofans, takes the land speed record to 1,228 km/h.
2002
HyShot scramjet ignited and operated.
2003
31 January: The GE90-115B receives FAR 33 certification; it currently holds the world records for thrust and engine (fan) size for a gas turbine engine, at 127,900 lbf of thrust and 128 inches, respectively.
26 November: Concorde retires from service
2004
Hyper-X first scramjet to maintain altitude
2007
Hyper-X first airbreathing (scram)jet to attain Mach 10
See also
Timeline of rocket and missile technology
Jet engine
References
External links
Aviation timelines
History of mechanical engineering
Technology timelines
20th century in transport | Timeline of jet power | [
"Engineering"
] | 6,830 | [
"History of mechanical engineering",
"Mechanical engineering"
] |
1,597,924 | https://en.wikipedia.org/wiki/Linear%20predictive%20analysis | Linear predictive analysis is a simple form of first-order extrapolation: if a value has been changing at a given rate, then it will probably continue to change at approximately the same rate, at least in the short term. This is equivalent to fitting a tangent to the graph of the value and extending the line.
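For illustration (not part of the original text), the extrapolation can be written as a one-line predictor; the function name and sample values below are invented:

def extrapolate_next(samples):
    """Predict the next value by extending the line through the last two samples.

    First-order (tangent) extrapolation: prediction = x[n] + (x[n] - x[n-1]).
    """
    if len(samples) < 2:
        raise ValueError("need at least two samples to estimate the slope")
    return 2 * samples[-1] - samples[-2]

# Example: a ramp 1, 3, 5 is predicted to continue to 7.
print(extrapolate_next([1, 3, 5]))  # 7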
One use of this is in linear predictive coding, which can be used as a method of reducing the amount of data needed to approximately encode a series. Suppose it is desired to store or transmit a series of values representing voice. The value at each sampling point could be transmitted directly: if 256 values are possible, then 8 bits of data are required for each point; if a precision of 65,536 levels is desired, then 16 bits per sample are required. If it is known that the value rarely changes by more than +/-15 between successive samples (-15 to +15 is 31 steps, counting the zero), then the change can be encoded in 5 bits. As long as the change between successive samples stays within +/-15, the decoder will exactly reproduce the desired sequence. When the rate of change exceeds +/-15, the reconstructed values will temporarily differ from the desired values; provided fast changes that exceed the limit are rare, it may be acceptable to use the approximation in order to attain the improved coding density.
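A minimal sketch of the clamped-delta scheme just described, assuming the decoder starts from zero and using the +/-15 limit from the example (function names are invented for illustration):

def encode_deltas(samples, limit=15):
    """Encode a sequence as clamped differences from the previous reconstructed value.

    Each delta fits in 5 bits (-15..+15); when the true change exceeds the limit,
    the reconstruction temporarily lags the input.
    """
    deltas, previous = [], 0
    for value in samples:
        delta = max(-limit, min(limit, value - previous))
        deltas.append(delta)
        previous += delta          # track the same running value the decoder will see
    return deltas

def decode_deltas(deltas):
    """Rebuild the approximate sequence by accumulating the deltas."""
    values, previous = [], 0
    for delta in deltas:
        previous += delta
        values.append(previous)
    return values

signal = [0, 5, 12, 40, 42]        # the jump of 28 exceeds the +/-15 limit
codes = encode_deltas(signal)      # [0, 5, 7, 15, 15]
print(decode_deltas(codes))        # [0, 5, 12, 27, 42] -- lags at the jump, then catches up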
See also
Linear prediction
References
Interpolation
Asymptotic analysis | Linear predictive analysis | [
"Mathematics"
] | 282 | [
"Mathematical analysis",
"Asymptotic analysis",
"Mathematical analysis stubs"
] |
1,597,964 | https://en.wikipedia.org/wiki/Wastegate | A wastegate is a valve that controls the flow of exhaust gases to the turbine wheel in a turbocharged engine system.
Diversion of exhaust gases regulates the turbine speed, which in turn regulates the rotating speed of the compressor. The primary function of the wastegate is to regulate the maximum boost pressure in turbocharger systems, to protect the engine and the turbocharger. One advantage of installing a remote-mount wastegate on a free-float (or non-wastegated) turbo is that it allows a smaller area-over-radius (A/R) turbine housing, resulting in less lag time before the turbo begins to spool and create boost. One of the earliest uses of a modern wastegate was in the 1978 Saab 99 Turbo, presented in 1977.
Wastegate types
External
An external wastegate is a separate self-contained mechanism typically used with turbochargers that do not have internal wastegates. An external wastegate requires a specially constructed turbo manifold with a dedicated runner going to the wastegate. The external wastegate may be part of the exhaust housing itself. External wastegates are commonly used for regulating boost levels more precisely than internal wastegates in high power applications, where high boost levels can be achieved. External wastegates can be much larger since there is no constraint of integrating the valve or spring into the turbocharger and turbine housing. It is possible to use an external wastegate with an internally gated turbocharger. This can be achieved through a specially designed bracket that easily bolts on and restricts the movement of the actuator arm, keeping it from opening. Another route involves welding the internal wastegate shut which permanently keeps it from opening, but failure of the weld can allow it to open again.
External wastegates generally use a valve similar to the poppet valve found in the cylinder head. However they are controlled by pneumatics rather than a camshaft and open in the opposite direction. External wastegates can also use a butterfly valve, though that is far less common.
Internal
An internal wastegate is a built-in bypass valve and passage within the turbocharger housing which allows excess exhaust pressure to bypass the turbine into the downstream exhaust. Control of the internal wastegate valve by a pressure signal from the intake manifold is identical to that of an external wastegate. Advantages include simpler and more compact installation, with no external wastegate piping. Additionally, all waste exhaust gases are automatically routed back into the catalytic converter and exhaust system. Many OEM turbochargers are of this type. Disadvantages in comparison to an external wastegate include a limited ability to bleed off exhaust pressure due to the relatively small diameter of the internal bypass valve, and less efficient performance under boost conditions.
Atmospheric/divorced wastegates
A "divorced" wastegate dumps the gases directly into the atmosphere, instead of returning them with the rest of an engine's exhaust. This is done to prevent turbulence in the exhaust flow and reduce total back pressure in the exhaust system. A divorced wastegate dump pipe is commonly referred to as a screamer pipe due to the unmuffled waste exhaust gases and the associated loud noise they produce.
Control
Manual
The simplest control for a wastegate is a mechanical linkage that allows the operator to directly control the wastegate valve position. This manual control is used in some turbo-charged light aircraft.
Pneumatic
The simplest closed-loop control for a wastegate is to supply boost pressure directly from the charge air side to the wastegate actuator. A small hose can connect from the turbocharger compressor outlet, charge pipes, or intake manifold to the nipple on the wastegate actuator. The wastegate will open further as the boost pressure pushes against the force of the spring in the wastegate actuator until equilibrium is obtained. More intelligent control can be added by integrating an electronic boost controller.
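As a rough illustration of this equilibrium (not taken from any manufacturer's data; the preload, spring rate and diaphragm area below are invented example values), a simple force-balance sketch in Python:

def wastegate_lift(boost_kpa, preload_n=400.0, spring_rate_n_per_mm=25.0,
                   diaphragm_area_cm2=30.0, max_lift_mm=10.0):
    """Estimate actuator lift where boost force on the diaphragm balances the spring.

    boost pressure * diaphragm area = preload + spring rate * lift
    All parameter values are illustrative assumptions, not real hardware data.
    """
    force_n = boost_kpa * 1000.0 * diaphragm_area_cm2 * 1e-4   # Pa times m^2
    lift_mm = (force_n - preload_n) / spring_rate_n_per_mm
    return min(max(lift_mm, 0.0), max_lift_mm)

# The valve stays shut until boost overcomes the spring preload, then opens further
# as pressure rises, which is the closed-loop behaviour described above.
for boost in (50, 100, 150, 200):   # kPa of boost pressure at the actuator port
    print(boost, round(wastegate_lift(boost), 2))   # 0.0, 0.0, 2.0, 8.0 mm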
Standard wastegates have one port for attaching the boost control line from the charge air supply line or boost control solenoid. Recent advances in internal wastegate actuators bring dual port control.
A dual-port wastegate adds a second port on the opposite side of the actuator. Air pressure allowed to enter this second port aids the spring, pushing harder in the direction of closing the wastegate, exactly the opposite of the first port. This increases the ability to hold the wastegate closed as boost pressure builds. It also adds further complexity to boost control, requiring more control ports on the solenoid or possibly a complete second boost control system with its own separate solenoid. Use of the second port is not necessary. Secondary ports, unlike primary ports, cannot simply be attached to a boost control line and require electronic or manual control to be useful. CO2 can also be used to apply pressure to the second port, to control boost on a much finer level.
Electric
Some 1940s aircraft engines featured electrically operated wastegates, such as the Wright R-1820 on the B-17 Flying Fortress. General Electric was the biggest manufacturer of these systems. Predating digital computers, they were entirely analog. Pilots had a cockpit control to select different boost levels. Electric wastegates soon fell out of favor due to design philosophies that mandated the separation of the engine controls from the electrical system.
Beginning in the 2011 model year the 2.0-liter Theta II turbocharged gasoline direct-injection (GDI) engine introduced in the Hyundai Sonata includes a PCM operated electronic servo wastegate actuator. This allows a boost control strategy that reduces exhaust backpressure caused by the turbocharger by opening the wastegate when turbo boost is not needed, resulting in improved fuel economy. The wastegate is also held open during cold starting to lower emissions by speeding up initial catalyst light-off.
Starting in November 2015, Honda Earth Dreams direct-injected turbocharged engines of 1.5-litre displacement employ an ECU-driven electric wastegate. This was first introduced in the 2016 Honda Civic and followed by the CR-V in 2017. In 2018, 1.5 L and 2.0 L turbocharged direct-injected engines replaced the 2.4 L and 3.5 L six-cylinder naturally aspirated engines in the Honda Accord.
Hydraulic
Most modern turbocharged aircraft use a hydraulic wastegate control with engine oil as the fluid. Systems from Lycoming and Continental operate on the same principles and use similar parts which differ only in name. Inside the wastegate actuator, a spring acts to open the wastegate, and oil pressure acts to close the wastegate. On the oil output side of the wastegate actuator sits the density controller, an air-controlled oil valve which senses upper deck pressure and controls how fast oil can bleed from the wastegate actuator back to the engine. As the aircraft climbs and the air density drops, the density controller slowly closes the valve and traps more oil in the wastegate actuator, closing the wastegate to increase the speed of the turbocharger and maintain rated power. Some systems also use a differential pressure controller which senses the air pressures on either side of the throttle plate and adjusts the wastegate to maintain a set differential. This maintains an optimum balance between a low turbocharger workload and a quick spool-up time, and also prevents surging caused by a bootstrapping effect.
Wastegate sizing
Wastegate sizing is inversely proportional to the desired level of boost and is somewhat independent of the size or power of the engine. One vendor's guide for wastegate sizing is as follows:
big turbo/low boost = bigger wastegate
big turbo/high boost = smaller wastegate
small turbo/low boost = bigger wastegate
small turbo/high boost = smaller wastegate
However, exhaust flow is a function of engine power, so another decision chart should look like this:
big turbo/small engine/small power = small wastegate
big turbo/small engine/big power = big wastegate
small turbo/small engine/small power = small wastegate
big turbo/big engine/small power = medium wastegate
small turbo/big engine/any power level = big wastegate
Reasoning: the small turbine will easily try to overspin from excess exhaust gas volume.
See also
Automatic Performance Control (APC)
Dump valve
Exhaust gas recirculation (EGR)
References
Engine components
Turbochargers | Wastegate | [
"Technology"
] | 1,726 | [
"Engine components",
"Engines"
] |
1,597,970 | https://en.wikipedia.org/wiki/An%20Essay%20on%20the%20Inequality%20of%20the%20Human%20Races | An Essay on the Inequality of the Human Races (originally: Essai sur l'inégalité des races humaines), published between 1853 and 1855, is a racialist work by French diplomat and writer Arthur de Gobineau. It argues that there are intellectual differences between human races, that civilizations decline and fall when the races are mixed, and that the white race is superior. It is today considered one of the earliest, if not the earliest, examples of scientific racism.
Expanding upon Boulainvilliers' use of ethnography to defend the Ancien Régime against the claims of the Third Estate, Gobineau aimed for an explanatory system universal in scope: namely, that race is the primary force determining world events. Using scientific disciplines as varied as linguistics and anthropology, Gobineau divides the human species into three major groupings, white, yellow and black, claiming to demonstrate that "history springs only from contact with the white races." Among the white races, he distinguishes the Aryan race, specifically the Nordic race and Germanic peoples, as the pinnacle of human development, comprising the basis of all European aristocracies. However, inevitable miscegenation led to the "downfall of civilizations".
Background
Gobineau was a Legitimist who despaired at France's decline into republicanism and centralization. The book was written after the 1848 revolution when Gobineau began studying the works of physiologists Xavier Bichat and Johann Blumenbach.
The book was dedicated to King George V of Hanover (1851–66), the last king of Hanover. In the dedication, Gobineau writes that he presents to His Majesty the fruits of his speculations and studies into the hidden causes of the "revolutions, bloody wars, and lawlessness" ("révolutions, guerres sanglantes, renversements de lois") of the age.
In a letter to Count Anton von Prokesch-Osten in 1856 he describes the book as based upon "a hatred for democracy and its weapon, the Revolution, which I satisfied by showing, in a variety of ways, where revolution and democracy come from and where they are going."
Gobineau and the Bible
In Vol I, chapter 11, "Les différences ethniques sont permanentes" ("The ethnic differences are permanent"), Gobineau writes that "Adam is the originator of our white species" ("Adam soit l'auteur de notre espèce blanche"), and creatures not part of the white race are not part of that species.
By this Gobineau refers to his division of humans into three main races: white, black, and yellow. The biblical division into Hamites, Semites, and Japhetites is for Gobineau a division within the white race. In general, Gobineau considers the Bible to be a reliable source of actual history, and he was not a supporter of the idea of polygenesis.
Influence
Steven Kale argues that Gobineau's "influence on the development of racial theory has been exaggerated and his ideas have been routinely misconstrued".
Gobineau's ideas found an audience in the United States and in German-speaking areas more so than in France, becoming the inspiration for a host of racial theories, for example those of Houston Stewart Chamberlain. "Gobineau was the first to theorize that race was the deciding factor in history and the precursors of Nazism repeated some of his ideas, but his principle arguments were either ignored, deformed, or taken out of context in German racial thought".
German historian Joachim C. Fest, who wrote a biography of Hitler, describes Gobineau, in particular his negative views on race-mixing as expressed in his essay, as an eminent influence on Adolf Hitler and Nazism. Fest writes that the influence of Gobineau on Hitler can be easily seen and that Gobineau's ideas were used by Hitler in simplified form for demagogic purposes: "Significantly, Hitler simplified Gobineau's elaborate doctrine until it became demagogically usable and offered a set of plausible explanations for all the discontents, anxieties, and crises of the contemporary scene." However, Professor Steven Kale has cautioned that "Gobineau's influence on German racism has been repeatedly overstated".
Although cited by groups such as the Nazi Party, the text implicitly criticizes antisemitism and describes Jews in positive terms, the Jews being seen as a superbly forged race of "ancient Greek-like strength" and cohesion. Implicitly, the people of Judah merely represented a wandering, semi-southern variation of the original Aryan stock. Gobineau stated, "Jews... became a people that succeeded in everything it undertook, a free, strong, and intelligent people, and one which, before it lost, sword in hand, the name of an independent nation, had given as many learned men to the world as it had merchants." This philo-Judaic sentiment was intermixed with ethnological theories that traced the Jews to a primally Indo-Iranian/Indo-Aryan genetic matrix. In these lines of speculative anthropology, the Jews were interpreted as being of atypical Indo-European ethnicity: Judaic racial typology was supposed to have emerged from Iranid–Nordid founders, the details considered inessential so long as the Jews were possessors of compatibly "white", "Aryan" blood. The "consensus science" of the time asserted that the latter-day, "Hamiticized" Jewish people came into existence from non-Afro-Asiatic Hurrian (or Horite), Jebusite, Amorite or early Hittite, Mitanni-affiliated racial nuclei. Gobineau's blatant, almost aggressively pro-Jewish attitude, akin to Nietzsche's in its sheer admiration of the Jews as one of the "highest races", proved ideologically dizzying to Nazi propagandists: here Gobineau unmistakably contradicted perhaps the main pillar of Nazi political ideology, which has been described as a schizoid, neo-Gnostic dualism of "Jewish demonology". Incompatible as it was with Nazi ideology, the Count's fervent Judaic positivity and total lack of antisemitism could only be ignored or quietly minimized by the Nazis.
The book continued to influence the white supremacist movement in the United States in the early 21st century.
Translations
Josiah Clark Nott hired Henry Hotze to translate the work into English. Hotze's translation was published in 1856 as The Moral and Intellectual Diversity of Races, with an added essay from Hotze and appendix from Nott. However, it "omitted the laws of repulsion and attraction, which were at the heart of Gobineau's account of the role of race-mixing in the rise and fall of civilizations". Gobineau was not pleased with the version; Gobineau was "particularly concerned that Hotze had ignored his comments on 'American decay generally and upon slaveholding in particular'."
The German translation Versuch über die Ungleichheit der Menschenrassen first appeared in 1897 and was translated by Ludwig Schemann, a member of the Bayreuth Circle and "one of the most important racial theorists of imperial and Weimar Germany".
A new English-language version The Inequality of Human Races, translated by Adrian Collins, was published in Britain and the US in 1915 and remains the standard English-language version. It continues to be republished in the US.
See also
IQ and Global Inequality
References
Bibliography
Gobineau, Arthur (Count Joseph Arthur de Gobineau) The Inequality of Human Races translated by Adrian Collins
Gobineau, Arthur (Count Joseph Arthur de Gobineau) The Moral and Intellectual Diversity of Races, with particular reference to their perspective influence in the civil and political history of mankind translated by Henry Hotze
Gobineau, Arthur (Count Joseph Arthur de Gobineau) Versuch Uber Die Ungleichheit Der Menschenracen translated by Ludwig Schemann
External links
Essai sur l'Inegalite de Races Humaine in French at Google Books Vol. 1, Vol. 2, Vol. 4
Versuch über die Ungleichheit der Menschenracen trans. by Ludwig Schemann at Google Books Vol. 1, Vol. 2, Vol. 3, Vol. 4
The Moral and Intellectual Diversity of Races: With Particular Reference to Their Respective trans. by H. Hotz, with an Appendix by J. C. Nott
1855 books
1855 essays
Ethnography
Pseudoscience literature
Race and intelligence controversy
Scientific racism
Sociology books
White supremacy
Works about the theory of history | An Essay on the Inequality of the Human Races | [
"Biology"
] | 1,833 | [
"Biology theories",
"Obsolete biology theories",
"Scientific racism"
] |
1,598,022 | https://en.wikipedia.org/wiki/Gr%C3%A9vy%27s%20zebra | Grévy's zebra (Equus grevyi), also known as the imperial zebra, is the largest living wild equid and the most threatened of the three species of zebra, the other two being the plains zebra and the mountain zebra. Named after French president Jules Grévy, it is found in parts of Kenya and Ethiopia. Superficially, the Grévy's zebra's physical features can help to distinguish it from the other zebra species; its overall appearance is slightly closer to that of a mule, compared to the more "equine" (horse-like) appearance of the plains and mountain zebras. Compared to the other zebra species, the Grévy's is the tallest; it has larger, mule-like ears and the tightest stripes of all zebras, along with a distinctively erect mane and a more slender snout.
Grévy's zebra live in semi-arid savanna, where they feed on grasses, legumes, and browse, such as acacia; they can survive up to five days without water. They differ from the other zebra species in that they do not live in a harem, and they maintain few long-lasting social bonds. Stallion territoriality and mother–foal relationships form the basis of the social system of the Grévy's zebra. Despite a handful of zoos and animal parks around the world having had successful captive-breeding programs, in its native home this zebra is listed by the IUCN as endangered. Its population has declined from 15,000 to 2,000 since the 1970s. In 2016, the population was reported to be "stable"; however, as of 2020, the wild numbers are still estimated at only around 2,250 animals, in part due to anthrax outbreaks in eastern Africa.
Taxonomy and naming
The Grévy's zebra was first described by French naturalist Émile Oustalet in 1882. He named it after Jules Grévy, then president of France, who, in the 1880s, was given one by the government of Abyssinia. Traditionally, this species was classified in the subgenus Dolichohippus with plains zebra and mountain zebra in Hippotigris. Groves and Bell (2004) place all three species in the subgenus Hippotigris.
Fossils of zebra-like equids have been found throughout Africa and Asia in the Pliocene and Pleistocene deposits. Notable examples include E. sanmeniensis from China, E. cautleyi from India, E. valeriani from central Asia and E. oldowayensis from East Africa. The latter, in particular is very similar to the Grévy's zebra and may have been its ancestor.
The modern Grévy's zebra arose in the Middle Pleistocene. Zebras appear to be a monophyletic lineage, and recent (2013) phylogenies have placed Grévy's zebra in a sister taxon with the plains zebra. In areas where Grévy's zebras are sympatric with plains zebras, the two may gather in the same herds and fertile hybrids do occur.
Description
Grévy's zebra is the largest of all wild equines. It is in head-body with a tail, and stands high at the withers. These zebras weigh . Grévy's zebra differs from the other two zebras in its more primitive characteristics. It is particularly mule-like in appearance; the head is large, long, and narrow with elongated nostril openings; the ears are very large, rounded, and conical and the neck is short but thick. The muzzle is ash-grey to black in colour, and the lips are whiskered. The mane is tall and erect; juveniles have a mane that extends to the length of the back and shortens as they reach adulthood.
As with all zebra species, Grévy's zebra's pelage has a black and white striping pattern. The stripes are narrow and close-set, broader on the neck, and extending to the hooves. The belly and the area around the base of the tail lack stripes and are just white in color, which is unique to the Grévy's zebra. Foals are born with brown and white striping, with the brown stripes darkening as they grow older.
Range and ecology
Grévy's zebra largely inhabits northern Kenya, with some isolated populations in Ethiopia. It was extirpated from Somalia and Djibouti and its status in South Sudan is uncertain. It lives in Acacia-Commiphora bushlands and barren plains. Ecologically, this species is intermediate between the arid-living African wild ass and the water-dependent plains zebra. Lactating mares and non-territorial stallions use areas with green, short grass and medium, dense bush more often than non-lactating mares and territorial stallions.
Grévy's zebras rely on grasses, legumes, and browse for nutrition. They commonly browse when grasses are not plentiful. Their hindgut fermentation digestive system allows them to subsist on diets of lower nutritional quality than that necessary for ruminant herbivores. Grevy's zebras can survive up to a week without water, but will drink daily when it is plentiful. They often migrate to better watered highlands during the dry season. Mares require significantly more water when they are lactating. During droughts, the zebras will dig water holes and defend them. The Grévy's zebra's main predator is the lion, but adults can be hunted by spotted hyenas. African hunting dogs, cheetahs and leopards almost never attack adults, even in desperate times, but sometimes prey on young animals, although mares are fiercely protective of their young. In addition, they are susceptible to various gastro-intestinal parasites, notably of the genus Trichostrongylus.
Behaviour and life history
Adult stallions mostly live in territories during the wet seasons, but some may stay in them year-round if enough water remains. Stallions that are unable to establish territories are free-ranging and are known as bachelors. Mares, young and non-territorial stallions wander through large home ranges. The mares will wander from territory to territory, preferring the ones with the highest-quality food and water sources. Up to nine stallions may compete for a mare outside of a territory. Territorial stallions will tolerate other stallions who wander in their territory; however, when an oestrous mare is present, the territorial stallion keeps other stallions at bay. Non-territorial stallions may avoid territorial ones because of harassment. When mares are not around, a territorial stallion will seek the company of other stallions. The stallion shows his dominance with an arched neck and a high-stepping gait, and the least dominant stallions submit by extending their tail, lowering their heads and nuzzling their superior's chest or groin.
Zebras produce numerous sounds and vocalisations. When alarmed, they produce deep, hoarse grunts. Whistles and squeals are also made when alarmed, during fights, when scared or in pain. Snorts may be produced when scared or as a warning. A stallion will bray in defense of his territory, when driving mares, or keeping other stallions at bay. Barks may be made during copulation and distressed foals will squeal. The call of the Grévy's zebra has been described as "something like a hippo's grunt combined with a donkey's wheeze". To get rid of flies or parasites, they roll in dust, water or mud or, in the case of flies, they twitch their skin. They also rub against trees, rocks and other objects to get rid of irritations such as itchy skin, hair or parasites. Although Grévy's zebras do not perform mutual grooming, they do sometimes rub against a conspecific.
Reproduction
Grévy's zebras can mate and give birth year-round, but most mating takes place in the early rainy seasons and births mostly take place in August or September, after the long rains. An oestrous mare may visit as many as four territories a day and will mate with the stallions in them. Among territorial stallions, the most dominant ones control territories near water sources, which mostly attract mares with dependent foals, while more subordinate stallions control territories away from water with greater amounts of vegetation, which mostly attract mares without dependent foals.
The resident stallions of territories will try to subdue the entering mares with dominance rituals and then continue with courtship and copulation. Grévy's zebra stallions have large testicles and can ejaculate a large amount of semen to replace the sperm of other males. This is a useful adaptation for a species whose mares mate polyandrously. Bachelors or outside territorial stallions sometimes "sneak" copulation of mares in another stallion's territory. While mare associations with individual stallions are brief and mating is promiscuous, mares who have just given birth will reside with one stallion for long periods and mate exclusively with that stallion. Lactating females are harassed by stallions more often than non-lactating ones and thus associating with one male and his territory provides an advantage as he will guard against other males.
Gestation of the Grévy's zebra normally lasts 390 days, with a single foal being born. A newborn zebra will follow anything that moves, so new mothers prevent other mares from approaching their foals while imprinting their own striping pattern, scent and vocalisation on them. Mares with young foals may gather into small groups. Mares may leave their foals in "kindergartens" while searching for water. The foals will not hide, so they can be vulnerable to predators. However, kindergartens tend to be protected by an adult, usually a territorial stallion. A mare with a foal stays with one dominant territorial stallion who has exclusive mating rights to her. While the foal may not be his, the stallion will look after it to ensure that the mare stays in his territory. To adapt to a semi-arid environment, Grévy's zebra foals have longer nursing intervals and wait until they are three months old before they start drinking water. Although offspring become less dependent on their mothers after half a year, associations with them continue for up to three years.
Relationship with humans
The Grévy's zebra was known to the Europeans in antiquity and was used by the Romans in circuses. It was subsequently forgotten in the Western world for a thousand years. In the seventeenth century, the king of Shoa (now central Ethiopia) exported two zebras: one to the Sultan of Turkey and another to the Dutch governor of Jakarta. Later, in 1882, the government of Abyssinia sent one to French president Jules Grévy. It was at that time that the animal was recognised as its own species and named in Grévy's honour.
Status and conservation
The Grévy's zebra is considered endangered. Its population was estimated to be 15,000 in the 1970s and by the early 21st century the population was lower than 3,500, a 75% decline. In 2008, it was estimated that there are less than 2,500 Grévy's zebras still living in the wild, further declining to fewer than 2,000 mature individuals in 2016. Nonetheless, the Grévy's zebra population trend was considered stable as of 2016.
There are also an estimated 600 Grévy's zebras in captivity. Captive herds have been known to thrive, like at White Oak Conservation in Yulee, Florida, United States, where more than 70 foals have been born. There, research is underway in partnership with the Conservation Centers for Species Survival on semen collection and freezing and on artificial insemination.
The Grévy's zebra is legally protected in Ethiopia. In Kenya, it is protected by the hunting ban of 1977. In the past, Grévy's zebras were threatened mainly by hunting for their skins which fetched a high price on the world market. However, hunting has declined and the main threat to the zebra is habitat loss and competition with livestock. Cattle gather around watering holes and the Grévy's zebras are fenced from those areas. Community-based conservation efforts have shown to be the most effective in preserving Grévy's zebras and their habitat. Less than 0.5% of the range of the Grévy's zebra is in protected areas. In Ethiopia, the protected areas include Aledeghi Wildlife Reserve, Yabelo Wildlife Sanctuary, Borana National Park, and Chelbi Sanctuary. In Kenya, important protected areas include the Buffalo Springs, Samburu and Shaba National Reserves and the private and community land wildlife conservancies in Isiolo, Samburu and the Laikipia Plateau.
The mesquite plant was introduced into Ethiopia around 1997 and is endangering the zebra's food supply. An invasive species, it is replacing the two grass species, Cenchrus ciliaris and Chrysopogon plumulosus, which the zebras eat for most of their food.
References
External links
"Wildlife Grévy's Zebra" – summary from the African Wildlife Foundation
Images and footage of Grévy's zebra from ARKive.org
"To Catch a Zebra" by Brian Jackman – story of catching endangered Grévy's zebra for relocation
"Why are the Grevy's Zebras in Trouble?" – Rich Blundell reports from Kenya
Grevy's Zebra Trust – a Kenyan organization dedicated to preserving the Grévy's zebra
EDGE species
Grévy's zebra
Mammals of Kenya
Mammals of Ethiopia
Mammals of South Sudan
Fauna of East Africa
Fauna of the Horn of Africa
Grévy's zebra | Grévy's zebra | [
"Biology"
] | 2,911 | [
"EDGE species",
"Biodiversity"
] |
1,598,022 | https://en.wikipedia.org/wiki/Pickover%20stalk | Pickover stalks are certain kinds of details to be found empirically in the Mandelbrot set, in the study of fractal geometry. They are so named after the researcher Clifford Pickover, whose "epsilon cross" method was instrumental in their discovery. An "epsilon cross" is a cross-shaped orbit trap.
According to Vepstas (1997) "Pickover hit on the novel concept of looking to see how closely the orbits of interior points come to the x and y axes. In these pictures, the closer that the point approaches, the higher up the color scale, with red denoting the closest approach. The logarithm of the distance is taken to accentuate the details".
Biomorphs
Biomorphs are biological-looking Pickover Stalks. At the end of the 1980s, Pickover developed biological feedback organisms similar to Julia sets and the fractal Mandelbrot set. According to Pickover (1999) in summary, he "described an algorithm that can be used for the creation of diverse and complicated forms resembling invertebrate organisms. The shapes are complicated and difficult to predict before actually experimenting with the mappings." He hoped "these techniques will encourage [others] to explore further and discover new forms, by accident, that are on the edge of science and art".
Pickover developed an algorithm (which uses neither random perturbations nor natural laws) to create very complicated forms resembling invertebrate organisms. The iteration, or recursion, of mathematical transformations is used to generate biological morphologies. He called them "biomorphs." At the same time he coined "biomorph" for these patterns, the famous evolutionary biologist Richard Dawkins used the word to refer to his own set of biological shapes that were arrived at by a very different procedure. More rigorously, Pickover's "biomorphs" encompass the class of organismic morphologies created by small changes to traditional convergence tests in the field of "Julia set" theory.
Pickover's biomorphs show a self-similarity at different scales, a common feature of dynamical systems with feedback. Real systems, such as shorelines and mountain ranges, also show self-similarity over some scales. A 2-dimensional parametric 0L system can “look” like Pickover's biomorphs.
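What follows is a minimal, assumption-laden sketch in Python (not Pickover's exact algorithm or code) of a biomorph-style modified convergence test: the orbit of z under z**power + c is iterated, and a point is marked as part of the "organism" if the real or imaginary part of the final iterate stays small. The constants (c, power, bailout, iteration count) are arbitrary illustrative choices.

def biomorph(width=200, height=200, c=complex(0.5, 0.0),
             power=3, bailout=10.0, max_iter=30):
    """Return a 2D list of 0/1 flags marking biomorph pixels.

    Modified convergence test: after iterating z = z**power + c, a point
    belongs to the biomorph if |Re z| or |Im z| remains below the bailout value.
    """
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            z = complex(-2 + 4 * i / width, -2 + 4 * j / height)
            for _ in range(max_iter):
                z = z ** power + c
                if abs(z.real) > bailout or abs(z.imag) > bailout:
                    break
            row.append(1 if (abs(z.real) < bailout or abs(z.imag) < bailout) else 0)
        image.append(row)
    return image

Plotting the returned flags as an image and varying c, the power and the bailout produces different organism-like outlines.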
Implementation
The below example, written in pseudocode, renders a Mandelbrot set colored using a Pickover Stalk with a transformation vector and a color dividend.
The transformation vector is used to offset the (x, y) position when sampling the point's distance to the horizontal and vertical axis.
The color dividend is a float used to determine how thick the stalk is when it is rendered.
For each pixel (x, y) on the target, do:
{
zx = scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
zy = scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
float2 c = (zx, zy) //Offset in the Mandelbrot formulae
float x = zx; //Coordinates to be iterated
float y = zy;
float trapDistance = 1000000; //Keeps track of distance, set to a high value at first.
int iteration = 0;
while (x*x + y*y < 4 && iteration < maxIterations)
{
float2 z = float2(x, y);
z = cmul(z, z); // z^2, cmul is a multiplication function for complex numbers
z += c;
x = z.x;
y = z.y;
float distanceToX = abs(z.x + transformationVector.x); //Checks the distance to the vertical axis
float distanceToY = abs(z.y + transformationVector.y); //Checks the distance to the horizontal axis
float smallestDistance = min(distanceToX, distanceToY); // Use only the smaller axis distance
trapDistance = min(trapDistance, smallestDistance);
iteration++;
}
return trapDistance * color / dividend;
//Dividend is an external float, the higher it is the thicker the stalk is
}
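For readers who prefer running code, the following is a direct, unoptimized Python translation of the pseudocode above; the colouring step is left to the caller (the pseudocode multiplies by an external color), and the default transformation vector and dividend are arbitrary illustrative values.

def pickover_stalk(width=400, height=300, max_iterations=100,
                   transformation=(0.0, 0.0), dividend=4.0):
    """Return per-pixel trap distances for a Pickover-stalk colouring of the Mandelbrot set."""
    image = [[0.0] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            # Scale pixel coordinates into the Mandelbrot view window.
            cx = -2.5 + 3.5 * px / width          # x in (-2.5, 1)
            cy = -1.0 + 2.0 * py / height         # y in (-1, 1)
            x, y = cx, cy
            trap_distance = 1e6                   # large initial distance
            for _ in range(max_iterations):
                if x * x + y * y >= 4.0:
                    break
                x, y = x * x - y * y + cx, 2.0 * x * y + cy   # z = z^2 + c
                # Distance of the orbit point to the (offset) axes.
                distance_to_x = abs(x + transformation[0])
                distance_to_y = abs(y + transformation[1])
                trap_distance = min(trap_distance, distance_to_x, distance_to_y)
            image[py][px] = trap_distance / dividend
    return image

The returned values can be mapped to a colour palette, for example after taking the logarithm of the distance as noted above, to accentuate the stalks.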
References
Further reading
External links
Apeirographic Explorations: Biomorphs A random assortment of biomorphs.
Mad Teddy's Biomorphs, detailed write-up on Pickover's algorithm, including examples and source code.
Fractals | Pickover stalk | [
"Mathematics"
] | 963 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
1,598,060 | https://en.wikipedia.org/wiki/Ferdinand%20Qu%C3%A9nisset | Ferdinand Jules Quénisset (1872–1951) was a French astronomer who specialized in astrophotography.
Early life and career
Quénisset was born on 8 August 1872 in Paris, the son of Gatien Jules Quénisset, an assistant director of the Administration des Monnaies et Médailles in Paris, and Juliette Antonia Mallard, a dressmaker.
He became a member of the Société astronomique de France in 1890, after becoming interested in astronomy by reading Camille Flammarion's books.
From 1891 to 1894, Quénisset served as member of the society's council as assistant librarian in the society's headquarters, which at the time was located at 28 rue Serpente in the 6th arrondissement of Paris.
Quénisset worked as an observer at Flammarion's observatory in Juvisy-sur-Orge from 1891 to 1893, during which time he discovered a comet. He was forced to abandon astronomy for a dozen years while he performed his military service, but then returned to Juvisy in 1906 to resume his post at the observatory (he succeeded Eugène Antoniadi, who had left Juvisy in 1902).
Quénisset worked at the Juvisy observatory for the remainder of his career until 1947, when his health obliged him to quit.
He was a member of the International Union for Cooperation in Solar Research in 1913.
He was a member of the International Astronomical Union and participated in Commissions 15 (Physical Study of Comets & Minor Planets) and 16 (Physical Study of Planets & Satellites).
Quénisset died on 8 April 1951 and is buried in the new cemetery of Juvisy.
Scientific achievements
Co-discovered comet C/1893 N1 (Rordame–Quenisset) on 9 July 1893.
First in France to photograph zodiacal light in 1902.
Discovered comet C/1911 S2 (Quenisset) on 23 September 1911.
First to photograph details of the atmosphere of Venus in 1911.
Took nearly 6,000 astronomical photographs and more than 1,500 meteorological photographs [as of 1932], many of which were published in the Bulletin of the Société astronomique de France, the Comptes Rendus des séances de l’Académie des sciences, and other scientific publications. His most noteworthy meteorological photographs were published as individual plates in the book Les Nuages et les Systèmes nuageux. Quénisset also made numerous drawings of Venus, Mars, Jupiter and the Moon.
First to successfully record Mercury's albedo features photographically.
First in France to photograph Pluto, in Spring and Autumn 1930.
Delivered numerous conferences on astronomy in France (Paris, Versailles, Le Havre, Saint-Quentin, Tours, Lille, Crépy-en-Valois) and in other countries (Belgium, Switzerland).
Awards and honors
1899 - Prix des Dames from the Société Astronomique de France.
1901 - Officier d'académie by decree of the Ministre de l'instruction publique et des beaux-arts of 12 April 1901.
1911 - Donohoe Comet-Medal (Seventy-Second) from the Astronomical Society of the Pacific, for his discovery of the comet C/1911 S2 (Quenisset) on 23 September 1911.
1923 - Honorary member of the Société astronomique Flammarion de Genève, for his contribution to the establishment of that association.
1926 - Médaille Commémorative from the Société Astronomique de France.
1932 - Chevalier of the Légion d'honneur on 29 December 1932.
1933 - First Prize in the Concours photographie de nuages (Cloud Photography Competition) of the Office National Météorologique.
1934 - Valz Prize from the French Academy of Sciences for his observations of comets.
1938 - Prix Gabrielle et Camille Flammarion from the Société Astronomique de France.
1945 - Prix Dorothéa Klumpke-Isaac Roberts from the Société Astronomique de France.
1973 - Quenisset impact crater on Mars named in his honor by the International Astronomical Union (IAU).
2022 - Asteroid 423645 Quénisset named in his honor by the IAU.
Publications
Author
Les phototypes sur papier au gélatinobromure (Paris: Gauthier-Villars, 1901).
Applications de la photographie à la physique et à la météorologie (Paris: Charles Mendel, 1901).
Manuel pratique de photographie astronomique à l'usage des amateurs photographes (Paris: Charles Mendel, 1903).
Instruction pour la photographie des nuages (Paris: Office National de Météorologie, 1923).
Annuaire astronomique et météorologique Camille Flammarion (Paris: Flammarion (impr. de Jouve), 1937–1951).
Contributor
Cours de météorologie à l'usage des candidats au brevet de météorologiste militaire. 2ème Partie, Les Nuages et les Systèmes nuageux: Planches (Paris : Office national météorologique de France, 1926).
Atlas international des nuages et des types de ciels. I. Atlas général (Paris : Office National Météorologique de France, 1939).
References
External links
Astronomes de Juvisy
1872 births
1951 deaths
19th-century French astronomers
20th-century French astronomers
Astrophotographers
Scientists from Paris
Q | Ferdinand Quénisset | [
"Astronomy"
] | 1,120 | [
"People associated with astronomy",
"Astrophotographers"
] |
1,598,279 | https://en.wikipedia.org/wiki/Formox%20process | The Formox process produces formaldehyde. Formox is a registered trademark owned by Johnson Matthey. The process was originally invented jointly by Swedish chemical company Perstorp and Reichhold Chemicals.
Industrially, formaldehyde is produced by catalytic oxidation of methanol. The most commonly used catalysts are silver metal or a mixture of an iron oxide with molybdenum and/or vanadium. In the now more commonly used Formox process, based on the iron oxide–molybdenum/vanadium catalyst, methanol and oxygen react at 300–400 °C to produce formaldehyde according to the chemical equation:
CH3OH + ½ O2 → H2CO + H2O.
The silver-based catalyst is usually operated at a higher temperature, about 650 °C. On it, two chemical reactions simultaneously produce formaldehyde: the one shown above, and the dehydrogenation reaction:
CH3OH → H2CO + H2
Further oxidation of the formaldehyde during its production usually gives small amounts of formic acid, which is found in formaldehyde solution at parts-per-million levels.
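As a worked illustration of the overall equation above (approximate molar masses; the function name is invented), the theoretical formaldehyde yield from a given mass of methanol can be computed as follows:

# Approximate molar masses in g/mol.
M_CH3OH = 32.04
M_H2CO = 30.03

def theoretical_formaldehyde_g(methanol_g):
    """1 mol CH3OH gives 1 mol H2CO under the overall Formox equation above."""
    return methanol_g / M_CH3OH * M_H2CO

print(round(theoretical_formaldehyde_g(1000.0), 1))  # about 937.3 g from 1 kg of methanol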
References
Chemical processes | Formox process | [
"Chemistry"
] | 238 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
1,598,299 | https://en.wikipedia.org/wiki/Brumalia | The Brumalia were a winter solstice festival celebrated in the eastern part of the Roman Empire. In Rome there had been the minor holiday of Bruma on November 24, which developed into large-scale end-of-year festivities in Constantinople in the Christian era. The festival included night-time feasting, drinking, and merriment. During this time, prophetic indications were taken as predictions for the remainder of the winter. Despite the 6th-century emperor Justinian's official repression of paganism, the holiday was celebrated at least until the 11th century, as recorded by Christopher of Mytilene. No references exist after the 1204 sacking of the capital by the Fourth Crusade.
Etymology
The name of Brumalia comes from Latin bruma, "winter solstice" or "winter cold", a shortening of brevima, a presumed obsolete superlative form of brevis (later brevissima), meaning "smallest", "shallowest", or "briefest".
Overview
The Roman "Bruma" is known only from a few passing remarks, none of which predates Imperial times. Mentions of the Brumalia are found only after the 4th century. In the face of Church disapproval, John Malalas and John the Lydian used rhetoric that claimed the festival had been introduced by Romulus himself.
Roman life during classical antiquity centred on the military, agriculture, and hunting. The short, cold days of winter would halt most forms of work. Brumalia was a festival celebrated during this dark, interludal period. It was chthonic in character and associated with crops, of which seeds are sown in the ground before sprouting.
Farmers would sacrifice pigs to Saturn and Ceres. Vine-growers would sacrifice goats in honor of Bacchus—for the goat is an enemy of the vine; and they would skin them, fill the skin-bags with air and jump on them. Civic officials would bring offerings of firstfruits (including wine, olive oil, grain, and honey) to the priests of Ceres.
Although Brumalia was still celebrated as late as the 6th century, it was uncommon and celebrants were ostracised by the Christian church. However, some practices did persist as November and December time customs.
In later times, Romans would greet each other with words of blessing at night, "", "Live for years".
Contemporary celebration
It has been revived as a festival annually held by Connecticut College.
References
Notes
Bibliography
Graf F., Roman Festivals in the Greek East From the Early Empire to the Middle Byzantine Era, Cambridge UP 2015, ch.7 The Brumalia (p.201-18)
Webography
Wright H., The Classical Weekly, Vol. 15, No. 7 (Nov. 28, 1921), p.52-4, epitome of De Bruma et Brumalibus Festis by J. R. Crawford
Ancient Roman festivals
Roman festivals of Dionysus
Saturn (mythology)
November observances
December observances
Winter solstice | Brumalia | [
"Astronomy"
] | 608 | [
"Astronomical events",
"Winter solstice"
] |
1,598,325 | https://en.wikipedia.org/wiki/List%20of%20alchemical%20substances | Alchemical studies produced a number of substances, which were later classified as particular chemical compounds or mixtures of compounds.
Many of these terms were in common use into the 20th century.
Metals and metalloids
Antimony/ – Sb
Bismuth () – Bi
Copper/ – associated with Venus. Cu
Gold/ – associated with the Sun. Au
Iron/ – associated with Mars. Fe
Lead/ – associated with Saturn. Pb
Quicksilver/ – associated with Mercury. Hg
Silver/ – associated with the Moon. Ag
Tin/ – associated with Jupiter. Sn
Minerals, stones, and pigments
Bluestone – mineral form of copper(II) sulfate pentahydrate, also called blue vitriol.
Borax – sodium borate; was also used to refer to other related minerals.
Cadmia/tuttia/tutty – probably zinc carbonate.
Calamine – zinc carbonate.
Calomel/horn quicksilver/horn mercury – mercury(I) chloride, a very poisonous purgative formed by subliming a mixture of mercuric chloride and metallic mercury, triturated in a mortar and heated in an iron pot. The crust formed on the lid was ground to powder and boiled with water to remove the calomel.
Calx – calcium oxide; was also used to refer to other metal oxides.
Chalcanthum – the residue produced by strongly roasting blue vitriol (copper sulfate); it is composed mostly of cupric oxide.
Chalk – a rock composed of porous biogenic calcium carbonate. CaCO3
Chrome green – chromic oxide and cobalt oxide.
Chrome orange – chrome yellow and chrome red.
Chrome red – basic lead chromate – PbCrO4+PbO
Chrome yellow/Paris yellow/Leipzig yellow – lead chromate, PbCrO4
Cinnabar/vermilion – refers to several substances, among them: mercury(II) sulfide (HgS), or native vermilion (the common ore of mercury).
Copper Glance – copper(I) sulfide ore.
Cuprite – copper(I) oxide ore.
Dutch White – a pigment, formed from one part of white lead to three of barium sulfate. BaSO4
Flowers of antimony – antimony trioxide, formed by roasting stibnite at high temperature and condensing the white fumes that form. Sb2O3
Fool's gold – a mineral, iron disulfide or pyrite; can form oil of vitriol on contact with water and air.
Fulminating silver – principally, silver nitride, formed by dissolving silver(I) oxide in ammonia. Very explosive when dry.
Fulminating gold – a number of gold based explosives which "fulminate", or detonate easily.
– gold hydrazide, formed by adding ammonia to the auric hydroxide. When dry, can explode on concussion.
– an unstable gold carbonate formed by precipitation by potash from gold dissolved in aqua regia.
Galena – lead(II) sulfide. Lead ore.
Glass of antimony – impure antimony tetroxide, Sb2O4, formed by roasting stibnite. A yellow pigment for glass and porcelain.
Gypsum – a mineral; calcium sulfate. CaSO4
Horn silver/argentum cornu – a weathered form of chlorargyrite, an ore of silver chloride.
Luna cornea – silver chloride, formed by heating horn silver till it liquefies and then cooling.
King's yellow – formed by mixing orpiment with white arsenic.
Lapis solaris (Bologna stone) – barium sulfide – 1603, Vincenzo Cascariolo.
Lead fume – lead oxide, found in flues at lead smelters.
Lime/quicklime (burnt lime)/calx viva/unslaked lime – calcium oxide, formed by calcining limestone
Slaked lime – calcium hydroxide. Ca(OH)2
Marcasite – a mineral; iron disulfide. In moist air it turns into green vitriol, FeSO4.
Massicot – lead monoxide. PbO
Litharge – lead monoxide, formed by fusing and powdering massicot.
Minium/red lead – trilead tetroxide, Pb3O4; formed by roasting litharge in air.
Naples yellow/cassel yellow – oxychloride of lead, formed by heating litharge with sal ammoniac.
Mercurius praecipitatus – red mercuric oxide.
Mosaic gold – stannic sulfide, formed by heating a mixture of tin filings, sulfur, and sal-ammoniac.
Orpiment – arsenic trisulfide, an ore of arsenic.
Pearl white – bismuth nitrate, BiNO3
Philosophers' wool/nix alba (white snow)/Zinc White – zinc oxide, formed by burning zinc in air, used as a pigment
Plumbago – a mineral, graphite; not discovered in pure form until 1564
Powder of Algaroth – antimony oxychloride, formed by precipitation when a solution of butter of antimony and spirit of salt is poured into water.
Purple of Cassius – formed by precipitating a mixture of gold, stannous and stannic chlorides, with alkali. Used for glass coloring
Realgar – arsenic disulfide, an ore of arsenic.
Regulus of antimony
Resin of copper – copper(I) chloride (cuprous chloride), formed by heating copper with corrosive sublimate.
Rouge/crocus/colcothar – ferric oxide, formed by burning green vitriol in air.
Stibnite – antimony or antimony trisulfide, ore of antimony.
Turpeth mineral – hydrolysed form of mercury(II) sulfate.
Verdigris – Carbonate of Copper or (more recently) copper(II) acetate. The carbonate is formed by weathering copper. The acetate is formed by vinegar acting on copper. One version was used as a green pigment.
White arsenic – arsenious oxide, formed by sublimating arsenical soot from the roasting ovens.
White lead – carbonate of lead, a toxic pigment, produced by corroding stacks of lead plates with dilute vinegar beneath a heap of moistened wood shavings. (replaced by blanc fixe & lithopone)
Venetian white – formed from equal parts of white lead and barium sulfate.
Zaffre – impure cobalt arsenate, formed after roasting cobalt ore.
Zinc blende – zinc sulfide.
Salts
Glauber's salt – sodium sulfate. Na2SO4
Sal alembroth – salt composed of chlorides of ammonium and mercury.
Sal ammoniac – ammonium chloride.
Sal petrae (Med. Latin: "stone salt")/salt of petra/saltpetre/nitrate of potash – potassium nitrate, KNO3, typically mined from covered dungheaps.
Salt/common salt – a mineral, sodium chloride, NaCl, formed by evaporating seawater (impure form).
Salt of tartar – potassium carbonate; also called potash.
Salt of hartshorn/sal volatile – ammonium carbonate formed by distilling bones and horns.
Tin salt – hydrated stannous chloride; see also , another chloride of tin.
Vitriols
Blue vitriol – copper(II) sulfate pentahydrate.
Green vitriol – a mineral; iron(II) sulfate heptahydrate. (or ferrous sulfate)
Red vitriol – cobalt sulfate.
Sweet vitriol – diethyl ether. It could be made by mixing oil of vitriol with spirit of wine and heating it.
White vitriol – zinc sulfate, formed by lixiviating roasted zinc blende.
Waters, oils and spirits
/spirit of nitre – nitric acid, formed by 2 parts saltpetre in 1 part (pure) oil of vitriol (sulfuric acid). (Historically, this process could not have been used, as 98% oil of vitriol was not available.)
/spirit of turpentine/oil of turpentine/gum turpentine – turpentine, formed by the distillation of pine tree resin.
(Latin: "royal water") – a mixture of aqua fortis and spirit of salt.
– arsenic trioxide, As2O3 (extremely poisonous)
/aqua vita/spirit of wine, ardent spirits – ethanol, formed by distilling wine
Butter (or oil) of antimony – antimony trichloride. Formed by distilling roasted stibnite with corrosive sublimate, or dissolving stibnite in hot concentrated hydrochloric acid and distilling. SbCl3
Butter of tin – hydrated tin(IV) chloride; see also , another chloride of tin.
Oil of tartar – concentrated potassium carbonate, K2CO3 solution
Oil of tartar per deliquium – potassium carbonate dissolved in the water which it extracts from the air.
Oil of vitriol/spirit of vitriol – sulfuric acid, a weak version can be formed by heating green vitriol and blue vitriol. H2SO4
Spirit of box/pyroxylic spirit – methanol, CH3OH, distilled wood alcohol.
– stannic chloride, formed by distilling tin with corrosive sublimate.
Spirit of hartshorn – ammonia, formed by the decomposition of sal-ammoniac by unslaked lime.
Spirit of salt/ – the liquid form of hydrochloric acid (also called muriatic acid), formed by mixing common salt with oil of vitriol.
Marine acid air – gaseous form of hydrochloric acid.
Others
Alkahest – universal solvent.
Azoth – initially this referred to a supposed universal solvent but later became another name for Mercury.
Bitumen – highly viscous liquid or semi-solid form of petroleum.
Blende
Brimstone – sulfur
Flowers of sulfur – formed by distilling sulfur.
Caustic potash/caustic wood alkali – potassium hydroxide, formed by adding lime to potash.
Caustic Soda/caustic marine alkali – sodium hydroxide, NaOH, formed by adding lime to natron.
Caustic volatile alkali – ammonium hydroxide.
Corrosive sublimate – mercuric chloride, formed by subliming mercury, calcined green vitriol, common salt, and nitre.
Gum Arabic – gum from the acacia tree.
Liver of sulfur – formed by fusing potash and sulfur.
Lunar caustic/ – silver nitrate, formed by dissolving silver in aqua fortis and evaporating.
Lye – potash in a water solution, formed by leaching wood ashes.
Potash – potassium carbonate, formed by evaporating lye; also called salt of tartar. K2CO3
Pearlash – formed by baking potash in a kiln.
Milk of sulfur () – formed by adding an acid to thion hudor (lime sulfur).
Natron/soda ash/soda – sodium carbonate. Na2CO3
– ammonium nitrate.
Sugar of lead – lead(II) acetate, formed by dissolving lead oxide in vinegar.
– lime sulfur, formed by boiling flowers of sulfur with slaked lime.
See also
Alchemical symbol
List of alchemists
References
External links
Eklund, Jon (1975). The Incompleat Chymist: Being an Essay on the Eighteenth-Century Chemist in His Laboratory, with a Dictionary of Obsolete Chemical Terms of the Period (Smithsonian Studies in History and Technology, Number 33). Smithsonian Institution Press.
Giunta, Carmen. Glossary of Archaic Chemical Terms: Introduction and Part I (A-B). Classic Chemistry.
Giunta, Carmen. A Dictionary of the New Chymical Nomenclature. Classic Chemistry. Based on Guyton de Morveau, Louis Bernard; Lavoisier, Antoine; Bertholet, Claude-Louis; Fourcroy, Antoine-François de (1788) [1787]. Method of chymical nomenclature, proposed by Messrs. de Morveau, Lavoisier, Bertholet, and de Fourcroy: To which is added A new system of chymical characters adapted to the nomenclature by Mess. Hassenfratz and Adet. Translated by St. John, James. pp. 105-176.
Chemistry-related lists | List of alchemical substances | [
"Chemistry"
] | 2,632 | [
"Alchemical substances",
"nan"
] |
1,598,419 | https://en.wikipedia.org/wiki/Logic%20File%20System | The Logic File System is a research file system which replaces pathnames with expressions in propositional logic. It allows file metadata to be queried with a superset of the Boolean syntax commonly used in modern search engines.
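The following is a minimal sketch (in Python, not the actual LISFS syntax or implementation) of the underlying idea: files are selected by a boolean predicate over their metadata rather than by a single directory path. The file names and attributes are hypothetical.

```python
# Hypothetical metadata catalogue; in LISFS the properties would come from the file system itself.
files = {
    "report.pdf": {"author": "alice", "year": 2003, "format": "pdf"},
    "notes.txt":  {"author": "bob",   "year": 2005, "format": "text"},
    "slides.pdf": {"author": "alice", "year": 2005, "format": "pdf"},
}

def query(predicate):
    """Return the names of all files whose metadata satisfies the predicate."""
    return {name for name, meta in files.items() if predicate(meta)}

# Roughly "author = alice AND (year >= 2004 OR format = text)"
print(query(lambda m: m["author"] == "alice" and (m["year"] >= 2004 or m["format"] == "text")))
# {'slides.pdf'}
```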
The actual name is the Logic Information Systems File System, and is abbreviated LISFS to avoid confusion with the log-structured file system (LFS). An implementation of the Logic File System is available at the LISFS website.
It is intended to be used on Unix-like operating systems and is a bit difficult to install, as it needs several non-standard OCaml modules.
References
Notes
Ferré, Sébastien and Ridoux, Olivier (2000). "A File System Based on Concept Analysis."
Padioleau, Yoann and Ridoux, Olivier (2003). "A Logic File System."
Padioleau, Yoann and Ridoux, Olivier (2005). "A Parts of File File System."
External links
LFS new homepage
Computer file systems
Semantic file systems | Logic File System | [
"Technology"
] | 211 | [
"Operating system stubs",
"Computing stubs"
] |
1,598,759 | https://en.wikipedia.org/wiki/Alternating%20series%20test | In mathematical analysis, the alternating series test is the method used to show that an alternating series is convergent when its terms (1) decrease in absolute value, and (2) approach zero in the limit. The test was used by Gottfried Leibniz and is sometimes known as Leibniz's test, Leibniz's rule, or the Leibniz criterion. The test is only sufficient, not necessary, so some convergent alternating series may fail the first part of the test.
For a generalization, see Dirichlet's test.
Formal statement
Alternating series test
A series of the form
a0 − a1 + a2 − a3 + ... = Σn (−1)^n an,
where either all an are positive or all an are negative, is called an alternating series.
The alternating series test guarantees that an alternating series converges if the following two conditions are met:
|an| decreases monotonically, i.e., |an+1| ≤ |an|, and
an → 0 as n → ∞.
Alternating series estimation theorem
Moreover, let L denote the sum of the series; then the partial sum Sk approximates L with error bounded by the next omitted term:
|Sk − L| ≤ |ak+1|.
Proof
Suppose we are given a series of the form Σn (−1)^(n−1) an = a1 − a2 + a3 − ..., where lim an = 0 and an ≥ an+1 ≥ 0 for all natural numbers n. (The case Σn (−1)^n an follows by taking the negative.)
Proof of the alternating series test
We will prove that both the partial sums S2m+1 with an odd number of terms and S2m with an even number of terms converge to the same number L. Thus the usual partial sum Sk also converges to L.
The odd partial sums decrease monotonically:
S2(m+1)+1 = S2m+1 − a2m+2 + a2m+3 ≤ S2m+1,
while the even partial sums increase monotonically:
S2(m+1) = S2m + a2m+1 − a2m+2 ≥ S2m,
both because an decreases monotonically with n.
Moreover, since an are positive, S2m+1 − S2m = a2m+1 ≥ 0. Thus we can collect these facts to form the following suggestive inequality:
a1 − a2 = S2 ≤ S2m ≤ S2m+1 ≤ S1 = a1.
Now, note that a1 − a2 is a lower bound of the monotonically decreasing sequence S2m+1, the monotone convergence theorem then implies that this sequence converges as m approaches infinity. Similarly, the sequence of even partial sum converges too.
Finally, they must converge to the same number because S2m+1 − S2m = a2m+1 → 0 as m → ∞.
Call the limit L; then the monotone convergence theorem also tells us the extra information that
S2m ≤ L ≤ S2m+1
for any m. This means the partial sums of an alternating series also "alternate" above and below the final limit. More precisely, when there is an odd (even) number of terms, i.e. the last term is a plus (minus) term, then the partial sum is above (below) the final limit.
This understanding leads immediately to an error bound of partial sums, shown below.
Proof of the alternating series estimation theorem
We would like to show |Sk − L| ≤ ak+1 by splitting into two cases.
When k = 2m+1, i.e. odd, then
|S2m+1 − L| = S2m+1 − L ≤ S2m+1 − S2m+2 = a2m+2.
When k = 2m, i.e. even, then
|S2m − L| = L − S2m ≤ S2m+1 − S2m = a2m+1,
as desired.
Both cases rely essentially on the last inequality derived in the previous proof.
Examples
A typical example
The alternating harmonic series
1 − 1/2 + 1/3 − 1/4 + ... = Σn (−1)^(n+1)/n
meets both conditions for the alternating series test and converges (its sum is ln 2).
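A quick numerical check (a sketch, not part of the article) illustrates both the convergence of this series to ln 2 and the error bound from the alternating series estimation theorem:

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# The estimation theorem guarantees |S_k - L| <= a_(k+1), with L = ln(2) here.
L = math.log(2)
S = 0.0
for n in range(1, 21):
    S += (-1) ** (n + 1) / n
    assert abs(S - L) <= 1 / (n + 1)   # error never exceeds the first omitted term

print(f"S_20 = {S:.6f}, ln 2 = {L:.6f}, |S_20 - ln 2| = {abs(S - L):.6f} <= 1/21 = {1/21:.6f}")
```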
An example to show monotonicity is needed
All of the conditions in the test, namely convergence to zero and monotonicity, should be met in order for the conclusion to be true. For example, take the series
1/(√2 − 1) − 1/(√2 + 1) + 1/(√3 − 1) − 1/(√3 + 1) + ...
The signs are alternating and the terms tend to zero. However, monotonicity is not present and we cannot apply the test. Actually, the series is divergent. Indeed, for the partial sum S2n we have
S2n = 2/1 + 2/2 + ... + 2/n,
which is twice the partial sum of the harmonic series, which is divergent. Hence the original series is divergent.
The test is only sufficient, not necessary
The monotonicity condition in Leibniz's test is not necessary, so the test itself is only sufficient, not necessary. (The second part of the test, that the terms tend to zero, is a well-known necessary condition for the convergence of any series.)
Examples of nonmonotonic series that converge are:
In fact, for every monotonic series it is possible to obtain an infinite number of nonmonotonic series that converge to the same sum by permuting its terms with permutations satisfying the condition in Agnew's theorem.
See also
Alternating series
Dirichlet's test
Notes
References
Konrad Knopp (1956) Infinite Sequences and Series, § 3.4, Dover Publications
Konrad Knopp (1990) Theory and Application of Infinite Series, § 15, Dover Publications
James Stewart, Daniel Clegg, Saleem Watson (2016) Single Variable Calculus: Early Transcendentals (Instructor's Edition) 9E, Cengage ISBN 978-0-357-02228-9
E. T. Whittaker & G. N. Watson (1963) A Course in Modern Analysis, 4th edition, §2.3, Cambridge University Press
External links
Jeff Cruzan. "Alternating series"
Convergence tests
Gottfried Wilhelm Leibniz | Alternating series test | [
"Mathematics"
] | 951 | [
"Theorems in mathematical analysis",
"Convergence tests"
] |
1,598,792 | https://en.wikipedia.org/wiki/Scantling | Scantling is a measurement of prescribed size, dimensions, or cross sectional areas.
When used in regard to timber, the scantling is (also "the scantlings are") the thickness and breadth, the sectional dimensions; in the case of stone it refers to the dimensions of thickness, breadth and length.
The word is a variation of scantillon, a carpenter's or stonemason's measuring tool, also used of the measurements taken by it, and of a piece of timber of small size cut as a sample. Sometimes synonymous with story pole. The Old French escantillon, mod. échantillon, is usually taken to be related to Italian scandaglio, sounding-line (Latin scandere, to climb; cf. scansio, the metrical scansion). It was probably influenced by cantel, cantle, a small piece, a corner piece.
Shipbuilding
In shipbuilding, the scantling refers to the collective dimensions of the framing (apart from the keel) to which planks or plates are attached to form the hull. The word is most often used in the plural to describe how much structural strength in the form of girders, I-beams, etc., is in a given section.
Scantling length
The scantling length refers to the structural length of a ship. Its distance is slightly less than the waterline length of a ship, and generally less than the overall length of a ship.
In the American Bureau of Shipping's Rules for Building and Classing Steel Vessels, it is defined as the distance on the summer load line from the fore side of the stem to the centerline of the rudder stock. Scantling length need not be less than 96%, nor more than 97% of the length of the summer load line.
Most other classification societies use a similar definition of scantling length to define the general length of a ship. The scantling length is used by classification societies for all calculations where the waterline length, overall length, displacement length, etc. is called for. Naval architects wishing to comply with class rules would also use the scantling length.
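A small sketch of the rule described above, with hypothetical figures; the function simply clamps the stem-to-rudder-stock measurement into the 96–97% band of the summer load line length:

```python
def scantling_length(stem_to_rudder_stock_m: float, summer_load_line_m: float) -> float:
    """Clamp the measured length into the 96%-97% band of the load-line length."""
    lower = 0.96 * summer_load_line_m
    upper = 0.97 * summer_load_line_m
    return min(max(stem_to_rudder_stock_m, lower), upper)

# Hypothetical ship: 190 m on the summer load line, 181 m stem to rudder stock.
print(scantling_length(181.0, 190.0))   # ~182.4 m, raised to the 96% floor
```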
Shipping
In shipping, a "full scantling vessel" is understood to be a geared ship, that can reach all parts of its own cargo spaces with its own cranes.
References
Oxford English Dictionary
External links
Units of length
Nautical terminology
Naval architecture
Timber framing | Scantling | [
"Mathematics",
"Technology",
"Engineering"
] | 483 | [
"Naval architecture",
"Timber framing",
"Units of length",
"Quantity",
"Structural system",
"Marine engineering",
"Units of measurement"
] |
1,598,868 | https://en.wikipedia.org/wiki/Digit%20sum | In mathematics, the digit sum of a natural number in a given number base is the sum of all its digits. For example, the digit sum of the decimal number would be
Definition
Let n be a natural number. We define the digit sum for base b > 1, Fb(n), to be the following:
Fb(n) = Σi=0..k di
where k is one less than the number of digits in the number in base b, and
di = (n mod b^(i+1) − n mod b^i) / b^i
is the value of each digit of the number.
For example, in base 10, the digit sum of 84001 is 8 + 4 + 0 + 0 + 1 = 13.
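A direct implementation of this definition (a sketch; works for any base b ≥ 2):

```python
def digit_sum(n: int, base: int = 10) -> int:
    """Sum of the digits of the natural number n written in the given base."""
    total = 0
    while n > 0:
        n, digit = divmod(n, base)
        total += digit
    return total

print(digit_sum(84001))      # 13, i.e. 8 + 4 + 0 + 0 + 1
print(digit_sum(84001, 2))   # 5, the digit sum of the binary representation
```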
For any two bases and for sufficiently large natural numbers
The sum of the base 10 digits of the integers 0, 1, 2, ... is given as a sequence in the On-Line Encyclopedia of Integer Sequences. The generating function of this integer sequence (and of the analogous sequence for binary digit sums) has been used to derive several rapidly converging series with rational and transcendental sums.
Extension to negative integers
The digit sum can be extended to the negative integers by use of a signed-digit representation to represent each integer.
Properties
The number of n-digit numbers with digit sum q can be calculated using:
Applications
The concept of a decimal digit sum is closely related to, but not the same as, the digital root, which is the result of repeatedly applying the digit sum operation until the remaining value is only a single digit. The decimal digital root of any non-zero integer will be a number in the range 1 to 9, whereas the digit sum can take any value. Digit sums and digital roots can be used for quick divisibility tests: a natural number is divisible by 3 or 9 if and only if its digit sum (or digital root) is divisible by 3 or 9, respectively. For divisibility by 9, this test is called the rule of nines and is the basis of the casting out nines technique for checking calculations.
Digit sums are also a common ingredient in checksum algorithms to check the arithmetic operations of early computers. Earlier, in an era of hand calculation, suggested using sums of 50 digits taken from mathematical tables of logarithms as a form of random number generation; if one assumes that each digit is random, then by the central limit theorem, these digit sums will have a random distribution closely approximating a Gaussian distribution.
The digit sum of the binary representation of a number is known as its Hamming weight or population count; algorithms for performing this operation have been studied, and it has been included as a built-in operation in some computer architectures and some programming languages. These operations are used in computing applications including cryptography, coding theory, and computer chess.
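For example, in Python the population count is exposed directly on integers (int.bit_count() was added in Python 3.10; counting characters in the binary string is a portable fallback):

```python
n = 0b1011_0101           # 181 in decimal
print(n.bit_count())      # 5, built-in population count (Python 3.10+)
print(bin(n).count("1"))  # 5, equivalent fallback for older Python versions
```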
Harshad numbers are defined in terms of divisibility by their digit sums, and Smith numbers are defined by the equality of their digit sums with the digit sums of their prime factorizations.
See also
Arithmetic dynamics
Casting out nines
Checksum
Digital root
Hamming weight
Harshad number
Perfect digital invariant
Sideways sum
Smith number
Sum-product number
References
External links
Addition
Arithmetic dynamics
Base-dependent integer sequences
Number theory | Digit sum | [
"Mathematics"
] | 606 | [
"Discrete mathematics",
"Recreational mathematics",
"Arithmetic dynamics",
"Number theory",
"Dynamical systems"
] |
1,599,249 | https://en.wikipedia.org/wiki/Polydisc | In the theory of functions of several complex variables, a branch of mathematics, a polydisc is a Cartesian product of discs.
More specifically, if we denote by D(z, r) the open disc of center z and radius r in the complex plane, then an open polydisc is a set of the form
D(z1, r1) × ... × D(zn, rn).
It can be equivalently written as
{w = (w1, ..., wn) ∈ Cn : |wk − zk| < rk for k = 1, ..., n}.
One should not confuse the polydisc with the open ball in Cn, which is defined as
{w ∈ Cn : ‖z − w‖ < r}.
Here, the norm is the Euclidean distance in Cn.
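A quick numerical illustration (a sketch, not from the article) of how the two sets differ in C2: a point can lie in the open unit polydisc while falling outside the open unit Euclidean ball.

```python
# The point (0.9, 0.9) in C^2: each coordinate has modulus < 1,
# but |0.9|^2 + |0.9|^2 = 1.62 >= 1.
z = (0.9 + 0j, 0.9 + 0j)

in_unit_polydisc = all(abs(zk) < 1 for zk in z)
in_unit_ball = sum(abs(zk) ** 2 for zk in z) < 1

print(in_unit_polydisc, in_unit_ball)   # True False
```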
When , open balls and open polydiscs are not biholomorphically equivalent, that is, there is no biholomorphic mapping between the two. This was proven by Poincaré in 1907 by showing that their automorphism groups have different dimensions as Lie groups.
When the term bidisc is sometimes used.
A polydisc is an example of logarithmically convex Reinhardt domain.
References
Several complex variables | Polydisc | [
"Mathematics"
] | 185 | [
"Several complex variables",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
1,599,532 | https://en.wikipedia.org/wiki/Wiggler%20%28synchrotron%29 | A wiggler is an insertion device in a synchrotron. It is a series of magnets designed to periodically laterally deflect ('wiggle') a beam of charged particles (invariably electrons or positrons) inside a storage ring of a synchrotron. These deflections create a change in acceleration which in turn produces emission of broad synchrotron radiation tangent to the curve, much like that of a bending magnet, but the intensity is higher due to the contribution of many magnetic dipoles in the wiggler. Furthermore, as the wavelength (λ) is decreased this means the frequency (ƒ) has increased. This increase of frequency is directly proportional to energy, hence, the wiggler creates a wavelength of light with a larger energy.
A wiggler has a broader spectrum of radiation than an undulator.
Typically the magnets in a wiggler are arranged in a Halbach array. The design shown above is usually known as a Halbach wiggler.
History
The first suggestion of a wiggler magnet to produce synchrotron radiation was made by K. W. Robinson in an unpublished report at the Cambridge Electron Accelerator (CEA) at Harvard University in 1956. CEA built the first wiggler in 1966, not as a source of synchrotron radiation, but to provide additional damping of betatron and synchrotron oscillations to create a beam storage system. A wiggler magnet was first used as a synchrotron radiation source at the Stanford Synchrotron Radiation Lightsource (SSRL) in 1979.
References
Synchrotron instrumentation
de:Wiggler (Synchrotron) | Wiggler (synchrotron) | [
"Technology",
"Engineering"
] | 341 | [
"Synchrotron instrumentation",
"Measuring instruments"
] |
1,599,683 | https://en.wikipedia.org/wiki/Sedan%20%28nuclear%20test%29 | Storax Sedan was a shallow underground nuclear test conducted in Area 10 of Yucca Flat at the Nevada National Security Site on July 6, 1962, as part of Operation Plowshare, a program to investigate the use of nuclear weapons for mining, cratering, and other civilian purposes. The radioactive fallout from the test contaminated more US residents than any other nuclear test. The Sedan Crater is the largest human-made crater in the United States and is listed on the National Register of Historic Places.
Effects
Sedan was a thermonuclear device with a fission yield less than 30% and a fusion yield about 70%. According to Carey Sublette, the design of the Sedan device was similar to that used in the Bluestone and Swanee tests of Operation Dominic conducted days and months prior to Sedan respectively, and was therefore not unlike the W56 high yield Minuteman I missile warhead. The device had a diameter of , a length of , and a weight of .
The timing of the test put it within the Operation Storax fiscal year, but Sedan was functionally part of Operation Plowshare, and the test protocol was sponsored and conducted by Lawrence Livermore National Laboratory with minimal involvement by the United States Department of Defense. The explosive device was lowered into a shaft drilled into the desert alluvium deep. The fusion-fission blast had a yield equivalent to 104 kilotons of TNT (435 terajoules) and lifted a dome of earth above the desert floor before it vented at three seconds after detonation, exploding upward and outward displacing of soil.
Sedan Crater
The resulting crater is deep with a diameter of about . A circular area of the desert floor five miles across was obscured by fast-expanding dust clouds moving out horizontally from the base surge, akin to pyroclastic surge. The blast caused seismic waves equivalent to an earthquake of 4.75 on the Richter scale. The radiation level on the crater lip at 1 hour after the burst was 500 R per hour (130 mC/(kg·h)), but it dropped to 500 mR per hour after 27 days.
Within 7 months (~210 days) of the excavation, the bottom of the crater could be safely walked upon with no protective clothing, with radiation levels at 35 mR per hour after 167 days.
Over 10,000 people per year visit the crater through free monthly tours offered by the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office. The crater was listed on the National Register of Historic Places on March 21, 1994.
Russian thistle, also known as tumbleweed, is the primary plant species growing in the crater along with some grasses. Analysis in 1993 observed that the original perennial shrubs once living there had shown no recovery.
Statistics
Maximum depth
Maximum diameter
Volume
Weight of material lifted
Maximum lip height
Minimum lip height
Fallout
The explosion caused two plumes of radioactive cloud, rising to 3.0 km and 4.9 km (10,000 ft and 16,000 ft). The plumes headed northeast and then east in roughly parallel paths towards the Atlantic Ocean. Nuclear fallout was dropped through several counties. Detected radioactivity was especially high in eight counties in Iowa and one county each in Nebraska, South Dakota and Illinois. The most heavily affected counties were Howard, Mitchell and Worth counties in Iowa as well as Washabaugh County in South Dakota. The average estimated fallout from Sedan on Howard, Mitchell and Worth counties was 950 microCuries each, while Washabaugh County SD received an estimated average of 860 microCuries of fallout. The explosion created fallout that affected more US residents than any other nuclear test, exposing more than 13 million people to radiation.
Of all the nuclear tests conducted in the United States, Sedan ranked highest in overall activity of radionuclides in fallout. The test released 880,000 curies (33 PBq) of radioactive iodine-131, an agent of thyroid disease, into the atmosphere. Sedan ranked first in percentages of these particular radionuclides detected in fallout: 198Au, 199Au, 7Be, 99Mo, 147Nd, 203Pb, 181W, 185W and 188W. Sedan ranked second in these radionuclides in fallout: 57Co, 60Co and 54Mn. Sedan ranked third in the detected amount of 24Na in fallout. In countrywide deposition of radionuclides, Sedan was highest in the amount of 7Be, 54Mn, 106Ru and 242Cm, and second highest in the amount of deposited 127mTe. Sedan ranks highest in percentages of 198Au detected.
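The unit conversion quoted above can be cross-checked directly (1 curie = 3.7 × 10^10 becquerels); this arithmetic sketch is not part of the original article:

```python
BQ_PER_CURIE = 3.7e10
release_curies = 880_000
release_pbq = release_curies * BQ_PER_CURIE / 1e15   # petabecquerels
print(f"{release_pbq:.1f} PBq")   # ~32.6 PBq, consistent with the ~33 PBq figure
```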
Sedan's fallout contamination contributed a little under 7% to the total amount of radiation which fell on the US population during all of the nuclear tests at NTS. Sedan's effects were similar to shot "George" of Operation Tumbler–Snapper, detonated on June 1, 1952, which also contributed about 7% to the total radioactive fallout. Uncertainty regarding exact amounts of exposure prevents knowing which of the two nuclear tests caused the most; George is listed as being the highest exposure and Sedan second highest by the United States Department of Health and Human Services, Centers for Disease Control and Prevention, and the National Cancer Institute.
Had this test been conducted after 1965, when improvements in device design had been realized, it is considered that a 100-fold reduction in the radiation released would have been feasible.
Conclusions
The Plowshare project developed the Sedan test in order to determine the feasibility of using nuclear detonations to quickly and economically excavate large amounts of earth and rock. Proposed applications included the creation of harbors, canals, open pit mines, railroad and highway cuts through mountainous terrain and the construction of dams. Assessment of the full effects of the Sedan shot showed that the radioactive fallout from such uses would be extensive. Public concerns about the health effects and a lack of political support eventually led to abandonment of the concept. No such nuclear excavation has since been undertaken by the United States, though the Soviet Union continued to pursue the concept through their program Nuclear Explosions for the National Economy, particularly with their 140 kiloton Chagan (nuclear test), which created an artificial lake reservoir (see Lake Chagan).
Diplomatic issue with Sudan
On March 2, 2005, Ellen Tauscher, a Democratic member of the U.S. House of Representatives from California, used Sedan as an example of a test which produced a considerable amount of radioactive fallout while giving congressional testimony on the containment of debris from nuclear testing. However, the name "Sedan" was incorrectly transcribed as "Sudan" in the Congressional Record.
Within days of the error, the international community took notice. Sudanese officials responded by stating that "the Sudanese government takes this issue seriously and with extreme importance". The Chinese Xinhua General News Service published an article claiming that the Sudanese government blamed the U.S. for raising cancer rates among the Sudanese people. Despite the U.S. embassy in Khartoum issuing a statement clarifying that it was a typographic error, Mustafa Osman Ismail, the Sudanese Foreign Minister, stated his government would continue investigating the claims.
See also
Chagan (nuclear test)
Peaceful nuclear explosion
Greenhouse Item
References
External links
US government movie about the Sedan test
Virtual-Reality tour of Sedan Site
Sedan Crater at the Online Nevada Encyclopedia
Sedan Nuclear Test – Original Military Film – YouTube
Nevada National Security Site History – Sedan Crater (PDF)
The Nuclear Sedan Crater of Nevada
1962 in military history
1962 in Nevada
1962 in the United States
Diplomatic incidents
Explosion craters
Explosions in 1962
History of Nye County, Nevada
July 1962 events in the United States
National Register of Historic Places in Nye County, Nevada
Nevada Test Site nuclear explosive tests
Nuclear history of the United States
Peaceful nuclear explosions
Project Plowshare nuclear tests
Tourist attractions in Nye County, Nevada
Underground nuclear weapons testing | Sedan (nuclear test) | [
"Chemistry"
] | 1,581 | [
"Explosion craters",
"Explosions",
"Peaceful nuclear explosions"
] |
1,599,730 | https://en.wikipedia.org/wiki/Computer%20speakers | Computer speakers, or multimedia speakers, are speakers sold for use with computers, although usually capable of other audio uses, e.g. for an MP3 player. Most such speakers have an internal amplifier and consequently require a power source, which may be by a mains power supply often via an AC adapter, batteries, or a USB port. The signal input connector is often a 3.5 mm jack plug (usually color-coded lime green per the PC 99 standard); RCA connectors are sometimes used, and a USB port may supply both signal and power (requiring additional circuitry, and only suitable for use with a computer). Battery-powered wireless Bluetooth speakers require no connections at all. Most computers have speakers of low power and quality built in; when external speakers are connected they disable the built-in speakers. Altec Lansing claims to have created the computer speaker market in 1990.
Computer speakers range widely in quality and in price. The computer speakers sometimes packaged with computer systems are small and plastic, with mediocre sound quality. Some computer speakers have equalization features such as bass and treble controls. Bluetooth speakers can be connected to a computer using an aux jack and a compatible adapter.
More sophisticated computer speakers can have a subwoofer unit to enhance bass output. The larger subwoofer enclosure usually contains the amplifiers for the subwoofer and the left and right speakers.
Some computer displays have rather basic speakers built-in. Laptop computers have built-in integrated speakers, usually small and of restricted sound quality to conserve space.
See also
PC speaker
Loudspeaker enclosure
References
American inventions
Computer peripherals
Loudspeakers
Boombox culture
tl:Speaker | Computer speakers | [
"Technology"
] | 348 | [
"Computer peripherals",
"Components"
] |
1,599,733 | https://en.wikipedia.org/wiki/Radiation%20implosion | Radiation implosion is the compression of a target by the use of high levels of electromagnetic radiation. The major use for this technology is in fusion bombs and inertial confinement fusion research.
History
Radiation implosion was first developed by Klaus Fuchs and John von Neumann in the United States, as part of their work on the original "Classical Super" hydrogen-bomb design. Their work resulted in a secret patent filed in 1946, and later given to the USSR by Fuchs as part of his nuclear espionage. However, their scheme was not the same as used in the final hydrogen-bomb design, and neither the American nor the Soviet programs were able to make use of it directly in developing the hydrogen bomb (its value would become apparent only after the fact). A modified version of the Fuchs-von Neumann scheme was incorporated into the "George" shot of Operation Greenhouse.
In 1951, Stanislaw Ulam had the idea to use hydrodynamic shock of a fission weapon to compress more fissionable material to extremely high densities in order to make megaton-range, two-stage fission bombs. He then realized that this approach might be useful for starting a thermonuclear reaction. He presented the idea to Edward Teller, who realized that radiation compression would be both faster and more efficient than mechanical shock. This combination of ideas, along with a fission "spark plug" embedded inside the fusion fuel, became what is known as the Teller–Ulam design for the hydrogen bomb.
Fission bomb radiation source
Most of the energy released by a fission bomb is in the form of x-rays. The spectrum is approximately that of a black body at a temperature of 50,000,000 kelvins (a little more than three times the temperature of the Sun's core). The amplitude can be modeled as a trapezoidal pulse with a one microsecond rise time, one microsecond plateau, and one microsecond fall time. For a 30 kiloton fission bomb, the total x-ray output would be 100 terajoules (more than 70% of the total yield).
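Taking the trapezoidal pulse model above at face value, the implied peak x-ray power can be estimated; this is a back-of-the-envelope sketch, and the peak power figure is inferred rather than stated in the article:

```python
# Energy of a trapezoidal pulse = peak power * (plateau + (rise + fall) / 2).
total_energy_j = 100e12        # 100 TJ of x-rays for the 30 kt example in the text
rise = plateau = fall = 1e-6   # seconds (1 microsecond each)

effective_width_s = plateau + (rise + fall) / 2   # 2 microseconds
peak_power_w = total_energy_j / effective_width_s
print(f"Implied peak x-ray power: {peak_power_w:.1e} W")   # ~5e19 W
```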
Radiation transport
In a Teller-Ulam bomb, the object to be imploded is called the "secondary". It contains fusion material, such as lithium deuteride, and its outer layers are a material which is opaque to x-rays, such as lead or uranium-238.
In order to get the x-rays from the surface of the primary, the fission bomb, to the surface of the secondary, a system of "x-ray reflectors" is used.
The reflector is typically a cylinder made of a material such as uranium. The primary is located at one end of the cylinder and the secondary is located at the other end. The interior of the cylinder is commonly filled with a foam which is mostly transparent to x-rays, such as polystyrene.
The term reflector is misleading, since it gives the reader an idea that the device works like a mirror. Some of the x-rays are diffused or scattered, but the majority of the energy transport happens by a two-step process: the x-ray reflector is heated to a high temperature by the flux from the primary, and then it emits x-rays which travel to the secondary. Various classified methods are used to improve the performance of the reflection process.
Some Chinese documents show that Chinese scientists used a different method to achieve radiation implosion. According to these documents, an X-ray lens, not a reflector, was used to transfer the energy from primary to secondary during the making of the first Chinese H-bomb.
The implosion process in nuclear weapons
The term "radiation implosion" suggests that the secondary is crushed by radiation pressure, and calculations show that while this pressure is very large, the pressure of the materials vaporized by the radiation is much larger. The outer layers of the secondary become so hot that they vaporize and fly off the surface at high speeds. The recoil from this surface layer ejection produces pressures which are an order of magnitude stronger than the simple radiation pressure. The so-called radiation implosion in thermonuclear weapons is therefore thought to be a radiation-powered ablation-drive implosion.
Laser radiation implosions
There has been much interest in the use of large lasers to ignite small amounts of fusion material. This process is known as inertial confinement fusion (ICF). As part of that research, much information on radiation implosion technology has been declassified.
When using optical lasers, there is a distinction made between "direct drive" and "indirect drive" systems. In a direct drive system, the laser beam(s) are directed onto the target, and the rise time of the laser system determines what kind of compression profile will be achieved.
In an indirect drive system, the target is surrounded by a shell (called a Hohlraum) of some intermediate-Z material, such as selenium. The laser heats this shell to a temperature such that it emits x-rays, and these x-rays are then transported onto the fusion target. Indirect drive has various advantages, including better control over the spectrum of the radiation, smaller system size (the secondary radiation typically has a wavelength 100 times smaller than the driver laser), and more precise control over the compression profile.
References
External links
http://nuclearweaponarchive.org/Library/Teller.html
Radiation
Implosion | Radiation implosion | [
"Physics",
"Chemistry"
] | 1,129 | [
"Transport phenomena",
"Physical phenomena",
"Implosion",
"Waves",
"Radiation",
"Mechanics"
] |
1,599,825 | https://en.wikipedia.org/wiki/Manual%20%28music%29 | The word "manual" is used instead of the word "keyboard" when referring to any hand-operated keyboard on a keyboard instrument that has a pedalboard (a keyboard on which notes are played with the feet), such as an organ; or when referring to one of the keyboards on an instrument that has more than one hand-operated keyboard, such as a two- or three-manual harpsichord. (On instruments that have neither a pedalboard nor more than one hand-operated keyboard, the word "manual" is not a synonym for "keyboard".)
Music written to be played only on the manuals (and not using the pedals) can be designated by the word manualiter (first attested in 1511, but particularly common in the 17th and 18th centuries).
Overview
Organs and synthesizers can, and usually do, have more than one manual; most home instruments have two manuals, while most larger organs have two or three. Elaborate pipe organs and theater organs can have four or more manuals. The manuals are set into the organ console (or "keydesk").
The layout of a manual is roughly the same as a piano keyboard, with long, usually ivory or light-colored keys for the natural notes of the Western musical scale, and shorter, usually ebony or dark-colored keys for the five sharps and flats. A typical, full-size organ manual consists of five octaves, or 61 keys. Piano keyboards, by contrast, normally have 88 keys; some electric pianos and digital pianos have fewer keys, such as 61 or 73 keys. Some smaller electronic organs may have manuals of four octaves or less (25, 49, 44, or even 37 keys). Changes in registration through use of drawknobs, stop tabs, or other mechanisms to control organ stops allow such instruments to achieve an aggregate range well in excess of pianos and other keyboard instruments even with manuals of shorter pitch range and smaller size.
On smaller electronic organs and synthesizers, the manuals may span fewer octaves, and they may also be offset, with the lower one an octave to the left of the upper one. This arrangement encourages the organist to play the melody line on the upper manual while playing the harmony line, chords or bassline on the lower manual.
On pipe organs each manual plays a specific subset of the organ's stops, and electric organs (e.g., Hammond organ) can emulate this style of play. Hammond organs differ from pipe organs in that pipe organs can only pull a stop out (that is, turn on a stop) or push it in (turning off this stop); in contrast, Hammond organs typically have drawbars, so that the player can control how much of each "pipe rank" (e.g., 16 ft, 8 ft, 4 ft, 2 ft, etc.) they wish to use. Synthesizers can program separate manuals to emulate sounds of various orchestral sections or instruments, using imitative digital sounds or sampling of real instruments, or using entirely synthesized sounds. On digital synthesizer instruments a performer can produce the sounds of an entire orchestra through the use of all available manuals in conjunction with the pedalboard and the various registration controls.
Organ manuals vs. piano keyboards
Despite the superficial resemblance to piano keyboards, organ manuals require a very different style of playing. Organ keys often require less force to depress than piano keys; however, the keys on mechanical instruments can be very heavy (St Sulpice Paris, St Ouen Rouen, St Etienne Caen, etc.). When depressed, an organ key continues to sound its note at the same volume until the organist releases the key, unlike a piano key, whose note gradually fades away as the string vibrations fade away. On the other hand, while the pianist may allow the piano notes to continue to sound for a few moments after lifting their hands from the keys by depressing the sustain pedal, most organs have no corresponding control; the note invariably ceases when the organist releases the key. The exception is some modern electronic instruments and relatively contemporary upgrades to theatre pipe organ consoles, which may have a knee lever which sustains the previous chords or notes. The knee lever enables an organist to hold a chord or note during a fermata or cadence, thus freeing their hands to turn a page in the sheet music, change stops, conduct a choir or orchestra, or shift hands to another manual.
Another difference is that of dynamic control. Unlike the case of piano keys, the force with which the organist depresses the key has no relation to the note's resonance; instead, the organist controls the volume through use of the expression pedals. While the piano note, then, can only decay, the organ note may increase in volume or undergo other dynamic changes. Some modern electronic instruments allow for volume to vary with the force applied to the key and permit the organist to sustain the note and alter both its attack and decay in a variety of ways. For example, Hammond organs often have an expression pedal, which enables the performer to increase or decrease the volume of a note, chord, or passage. All of these variables mean that both the technique of organ playing and the resulting music are quite different from those of the piano. Nevertheless, the trained pianist may play a basic organ repertoire with little difficulty, although more advanced organ music will require specialized training and practice, as the musician has to learn to play on multiple manuals, set stops and other controls while performing, and play the pedal keyboard with the feet.
Electromechanical organs
One of the key types of electromechanical organs, the Hammond B-3, has two manuals. Each manual has drawbars which are used to control the registration for each manuals.
Types of manuals and related controls
Different manuals on pipe organs are usually used to play stops from a variety of divisions, which group together a series of different tones. Divisions are usually standardised within pipe organs belonging to certain regions; in the English school of organ building, common divisions include the Great, Swell, Choir, Solo and Echo, while French organs commonly include Grand Orgue, Positif, Récit and Echo. German organ divisions include the Hauptwerk, Rückpositiv, Brustwerk and Oberwerk, while in Dutch, common divisions are Hoofdwerk, Rugwerk, Borstwerk and Bovenwerk. Finally, theatre organs are usually composed of Great, Accompaniment, Solo, Bombarde and Orchestral divisions. Organ builders choose different divisions to accommodate the type of music played, the space in which the organ is installed, as well as the desired character and tone of the instrument.
Various other controls, such as stops, pistons, and registration presets are usually located adjacent to the manuals to allow the organist ready access to them while playing. This further increases the instrument's versatility, as a piston or other preset function can cause multiple stops to be pulled out or pushed in automatically. This is of particular benefit in pieces where a number of stops have to be pulled out or pushed in between sections. Devices known as couplers are sometimes available to link the manuals, so that the stops (and pipes) normally played on one can be played from another.
Gallery
References
Keyboard instruments
Musical instrument parts and accessories | Manual (music) | [
"Technology"
] | 1,489 | [
"Components",
"Musical instrument parts and accessories"
] |
1,599,873 | https://en.wikipedia.org/wiki/Atrophic%20gastritis | Atrophic gastritis is a process of chronic inflammation of the gastric mucosa of the stomach, leading to a loss of gastric glandular cells and their eventual replacement by intestinal and fibrous tissues. As a result, the stomach's secretion of essential substances such as hydrochloric acid, pepsin, and intrinsic factor is impaired, leading to digestive problems. The most common are pernicious anemia possibly leading to vitamin B12 deficiency; and malabsorption of iron, leading to iron deficiency anaemia. It can be caused by persistent infection with Helicobacter pylori, or can be autoimmune in origin. Those with autoimmune atrophic gastritis (Type A gastritis) are statistically more likely to develop gastric carcinoma (a form of stomach cancer), Hashimoto's thyroiditis, and achlorhydria.
Type A gastritis primarily affects the fundus (body) of the stomach and is more common with pernicious anemia. Type B gastritis primarily affects the antrum, and is more common with H. pylori infection.
Signs and symptoms
Some people with atrophic gastritis may be asymptomatic. Symptomatic patients are mostly female, and the signs of atrophic gastritis are those associated with iron deficiency: fatigue, restless legs syndrome, brittle nails, hair loss, impaired immune function, and impaired wound healing. Other symptoms, such as delayed gastric emptying (80%), reflux symptoms (25%), peripheral neuropathy (25% of cases), autonomic abnormalities, and memory loss, are less common and occur in 1%–2% of cases. Psychiatric disorders are also reported, such as mania, depression, obsessive-compulsive disorder, psychosis, and cognitive impairment.
Although autoimmune atrophic gastritis impairs iron and vitamin B12 absorption, iron deficiency is detected at a younger age than pernicious anemia.
Associated conditions
People with atrophic gastritis are also at increased risk for the development of gastric adenocarcinoma.
Causes
Recent research has shown that autoimmune metaplastic atrophic gastritis (AMAG) is a result of the immune system attacking the parietal cells.
Environmental metaplastic atrophic gastritis (EMAG) is due to environmental factors, such as diet and H. pylori infection. EMAG is typically confined to the body of the stomach. Patients with EMAG are also at increased risk of gastric carcinoma.
Pathophysiology
Autoimmune metaplastic atrophic gastritis (AMAG) is an inherited form of atrophic gastritis characterized by an immune response directed toward parietal cells and intrinsic factor.
Achlorhydria induces G cell (gastrin-producing) hyperplasia, which leads to hypergastrinemia. Gastrin exerts a trophic effect on enterochromaffin-like cells (ECL cells are responsible for histamine secretion) and is hypothesized to be one mechanism to explain the malignant transformation of ECL cells into carcinoid tumors in AMAG.
Diagnosis
Detection of APCA (anti-parietal cell antibodies), anti-intrinsic factor antibodies (AIFA), and Helicobacter pylori (HP) antibodies in conjunction with serum gastrin are effective for diagnostic purposes.
Classification
The notion that atrophic gastritis could be classified depending on the level of progress as "closed type" or "open type" was suggested in early studies, but no universally accepted classification exists as of 2017.
Treatment
Supplementation of folic acid in deficient patients can improve the histopathological findings of chronic atrophic gastritis and reduce the incidence of gastric cancer.
See also
Chronic gastritis
References
External links
Aging-associated diseases
Autoimmune diseases
Stomach disorders | Atrophic gastritis | [
"Biology"
] | 852 | [
"Senescence",
"Aging-associated diseases"
] |
1,599,902 | https://en.wikipedia.org/wiki/Andrew%20B.%20Whinston | Andrew B. Whinston (born June 3, 1936), is an American economist and computer scientist. He serves as the Hugh Roy Cullen Centennial Chair in Business Administration and works as a Professor of Information Systems, Computer Science, and Economics, and Director of the Center for Research in Electronic Commerce (CREC) in the McCombs School of Business at the University of Texas at Austin.
In the late 1950s, he was Sanxsay Fellow at Princeton University. Whinston received his PhD from the Carnegie Institute of Technology in 1962, when he also received the Alexander Henderson Award for Excellence in Economic Theory. He then started working at the economics department of Yale University, where he was a member of the Cowles Foundation. In 1964, he became Associate Professor of Economics at University of Virginia. By 1966, he was a Full Professor at Purdue University, where he became the university's inaugural Weiler Distinguished Professor of management, economics, and computer science.
In 1962, Whinston published a research paper in the Journal of Political Economy on how non-cooperative game theory could be applied to issues in microeconomics. In a second paper entitled "A Model of Multi-Period Investment Under Uncertainty", which appeared in Management Science, he used nonlinear optimization methods to determine optimal portfolios over time.
Publications
Whinston has papers in economics journals such as American Economic Review, Econometrica, Review of Economic Studies, Journal of Economic Theory, Journal of Financial Economics, Journal of Mathematical Economics, in multidisciplinary journals such as Management Science, Decision Sciences, and Organization Science, in operations journals such as Operations Research, European Journal of Operational Research, Production and Operations Management, Journal of Production Research, and Naval Research Logistics, in mathematics journals such as Journal of Combinatorics, SIAM Journal on Applied Mathematics, and Discrete Mathematics, in accounting journals such as the Accounting Review and Auditing: A Journal of Practice and Theory, in marketing journals such as Marketing Science, Journal of Marketing, Journal of Marketing Research, and Journal of Retailing, in the premier journals devoted to information systems – Management Science, Decision Support Systems, MIS Quarterly, Journal of Management Information Systems, and Information Systems Research - and in computer science journals such as Communications of the ACM, ACM Transactions on Database Systems, ACM Transactions, IEEE Computing on Internet Technology, and ACM Journal on Mobile Networking and Applications.
His publication record consists of more than 25 books, and 400 refereed publications.
Awards
In 1995, Whinston was honored by the Data Processing Management Association with its Information Systems Educator of the Year Award. In 2005, Whinston received the LEO Award for Lifetime Exceptional Achievement in Information Systems. This award, created by the Association for Information Systems Council and the International Conference on Information Systems Executive Committee, recognizes the work of outstanding scholars on the field.
In 2009, Whinston was honored with the Career Award for Outstanding Research Contributions at the University of Texas at Austin which recognizes significant research contributions made by a tenured or tenure-track faculty member. In 2009, the INFORMS Information System Society (ISS) honored Whinston by recognizing him as the inaugural INFORMS ISS Fellow for outstanding contributions to information systems research.
Bibliography
See also
Chance-constrained portfolio selection
References
External links
Andrew Whinston's personal website
Presentation of Andrew B. Whinston at UT Austin
The Center for Research in Electronic Commerce
Andrew Whinston's CV
American economists
American computer scientists
Princeton University fellows
Carnegie Mellon University alumni
Yale University faculty
Purdue University faculty
McCombs School of Business faculty
1936 births
Living people
Information systems researchers | Andrew B. Whinston | [
"Technology"
] | 723 | [
"Information systems",
"Information systems researchers"
] |
1,599,913 | https://en.wikipedia.org/wiki/Astronomer%20Royal%20for%20Scotland | Astronomer Royal for Scotland was the title of the director of the Royal Observatory, Edinburgh until 1995. It has since been an honorary title.
Astronomers Royal for Scotland
See also
Edinburgh Astronomical Institution
City Observatory
Royal Observatory, Edinburgh
Astronomer Royal
Royal Astronomer of Ireland
References
Lists of office-holders in Scotland
Scottish royalty
Positions within the British Royal Household
Ceremonial officers in the United Kingdom | Astronomer Royal for Scotland | [
"Astronomy"
] | 72 | [
"Astronomy stubs"
] |
1,600,053 | https://en.wikipedia.org/wiki/Self-replicating%20machine | A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann, Konrad Zuse and in more recent times by K. Eric Drexler in his book on nanotechnology, Engines of Creation (coining the term clanking replicator for such machines) and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, Von Neumann's Self-Reproducing Automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell.
A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. The concept, first proposed by Von Neumann no later than the 1940s, has attracted a range of different approaches involving various types of technology. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific and also refers to a completely unrelated computer architecture that von Neumann proposed and so its use is discouraged where accuracy is important. Von Neumann used the term universal constructor to describe such self-replicating machines.
Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then assemble the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". Such discussions refer to collections of machine tools, and such collections have an ability to reproduce their own parts which is finite and low for one machine, and ascends to nearly 100% with collections of only about a dozen similarly made, but uniquely functioning machines, establishing what authors Freitas and Merkle refer to as matter or material closure. Energy closure is the next most difficult dimension to close, and control closure the most difficult; there are no other dimensions to the problem. In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here, and would have closure in each of the three dimensions.
History
The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring." Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so, and added that "machines which reproduce machinery do not reproduce machines after their own kind". In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction".
In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines, suggesting that the question of who originally made a watch was rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself. Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929 and by mathematicians such as Stephen Kleene who began developing recursion theory in the 1930s. Much of this latter work was motivated by interest in information processing and algorithms rather than physical implementation of such a system, however. In the course of the 1950s, suggestions of several increasingly simple mechanical systems capable of self-reproduction were made — notably by Lionel Penrose.
Von Neumann's kinematic model
A detailed conceptual proposal for a self-replicating machine was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic model of self-reproducing automata as a thought experiment. Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that instructed it to retrieve parts from this "sea" using a manipulator, assemble them into a copy of itself, and then transfer the contents of its memory tape into the new duplicate. The machine was envisioned as consisting of as few as eight different types of components: four logic elements for sending and receiving stimuli and four mechanical elements for providing structural support and mobility. Although the model was qualitatively sound, von Neumann was evidently dissatisfied with it because of the difficulty of analyzing it with mathematical precision. He went on instead to develop an even more abstract self-replicator model based on cellular automata. His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American.
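The copy-then-transfer logic described above can be illustrated with a toy sketch; the Python below is only a loose analogy of the scheme, and the part names and data structures are invented for illustration, not von Neumann's formalism.

def build_from_tape(tape):
    # Assemble a "machine" (here just a dict of parts) from a description tape.
    return {"parts": list(tape), "tape": None}

def replicate(parent):
    offspring = build_from_tape(parent["tape"])  # construct offspring from the description
    offspring["tape"] = list(parent["tape"])     # then copy the tape verbatim into it
    return offspring

seed = {"parts": ["manipulator", "logic", "support"],
        "tape": ["manipulator", "logic", "support"]}
child = replicate(seed)
print(replicate(child)["parts"])  # ['manipulator', 'logic', 'support']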
Von Neumann's goal for his self-reproducing automata theory, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically akin to biological organisms under natural selection. He asked what is the threshold of complexity that must be crossed for machines to be able to evolve. His answer was to design an abstract machine which, when run, would replicate itself. Notably, his design implies that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell.
Moore's artificial living plants
In 1956, mathematician Edward F. Moore made the first known proposal for a practical real-world self-replicating machine, also published in Scientific American. Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw their energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other non-replicating machines.
Dyson's replicating systems
The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture. He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use and the "Astrochicken."
Advanced Automation for Space Missions
In 1980, inspired by a 1979 "New Directions Workshop" held at Woods Hole, NASA conducted a joint summer study with ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories to develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982. The proposed system would have been capable of exponentially increasing productive capacity, and the design could be modified to build self-replicating probes to explore the galaxy.
The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by its parent.
Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy.
A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included.
A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins."
A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further. Some experts are beginning to consider self-replicating machines for asteroid mining.
Much of the design study was concerned with a simple, flexible chemical system for processing the ores, and the differences between the ratio of elements needed by the replicator, and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, needed to process regolith for aluminium. Chlorine is very rare in lunar regolith.
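The effect of a scarce element on the growth rate can be sketched with a Liebig-style limiting-reagent calculation; the mass fractions below are placeholder assumptions, not figures from the study.

# Hypothetical mass fractions: what a replicator needs per kg of new machine,
# versus what one kg of processed regolith supplies.
required  = {"Si": 0.20, "Al": 0.15, "Fe": 0.10, "Cl": 0.01}
available = {"Si": 0.21, "Al": 0.13, "Fe": 0.12, "Cl": 0.0002}

yield_per_element = {e: available[e] / required[e] for e in required}
limiting = min(yield_per_element, key=yield_per_element.get)
print(limiting, round(yield_per_element[limiting], 3))
# -> Cl 0.02: the scarcest element relative to need caps how fast the factory can grow.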
Lackner-Wendt Auxon replicators
In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system. They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal didn't include a complete analysis of the system's material requirements, but described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O2 and H2) using a high-temperature carbothermic process. This proposal was popularized in Discover magazine, featuring solar-powered desalination equipment used to irrigate the desert in which the system was based. They named their machines "Auxons", from the Greek word auxein which means "to grow".
Recent work
NIAC studies on self-replicating systems
In the spirit of the 1980 "Advanced Automation for Space Missions" study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003. Four phase I grants were awarded:
Hod Lipson (Cornell University), "Autonomous Self-Extending Machines for Accelerating Space Exploration"
Gregory Chirikjian (Johns Hopkins University), "Architecture for Unmanned Self-Replicating Lunar Factories"
Paul Todd (Space Hardware Optimization Technology Inc.), "Robotic Lunar Ecopoiesis"
Tihamer Toth-Fejel (General Dynamics), "Modeling Kinematic Cellular Automata: An Approach to Self-Replication"
The General Dynamics study concluded that the complexity of the development was equal to that of a Pentium 4 and promoted a design based on cellular automata.
Bootstrapping self-replicating factories in space
In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to start self-replicating factories in space. They developed this concept on the basis of In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient then grow to large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar system civilization through self-replicating space industry. Kalil requested the public submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief technologist Mason Peck has dubbed "Massless Exploration", the ability to make everything in space so that you do not need to launch it from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape." In 2016, Metzger argued that fully self-replicating industry can be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.
New York University artificial DNA tile motifs
In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information.
Self-replication of magnetic polymers
In 2001, Jarle Breivik at University of Oslo created a system of magnetic building blocks, which in response to temperature fluctuations, spontaneously form self-replicating polymers.
Self-replication of neural circuits
In 1968, Zellig Harris wrote that "the metalanguage is in the language," suggesting that self-replication is part of language. In 1977 Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar. Adding to it probabilities, Bertrand du Castel published in 2015 a self-replicating stochastic grammar and presented a mapping of that grammar to neural networks, thereby presenting a model for a self-replicating neural circuit.
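The sense in which a finite description can direct the production of an exact copy of itself is captured by a quine, a program that prints its own source; a minimal Python example (illustrative only, not taken from the cited papers) is:

# The two lines below print an exact copy of themselves when run.
s = 's = %r\nprint(s %% s)'
print(s % s)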
Harvard Wyss Institute
On November 29, 2021, a team at the Harvard Wyss Institute reported building the first living robots that can reproduce.
Self-replicating spacecraft
The idea of an automated spacecraft capable of constructing copies of itself was first proposed in scientific literature in 1974 by Michael A. Arbib, but the concept had appeared earlier in science fiction, such as the 1967 novel Berserker by Fred Saberhagen or the 1950 novelette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes with a single seed factory on board each.
Prospects for implementation
As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of self-sufficiency that is suggestive of self-replicating machines. However, such factories are unlikely to achieve "full closure" until the cost and flexibility of automated machinery comes close to that of human labour and the manufacture of spare parts and other components locally becomes more economical than transporting them from elsewhere. As Samuel Butler has pointed out in Erewhon, replication of partially closed universal machine tool factories is already possible. Since safety is a primary goal of all legislative consideration of regulation of such development, future development efforts may be limited to systems which lack either control, matter, or energy closure. Fully capable machine replicators are most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space).
An artificial replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time. However, with robust error correction, and the possibility of external intervention, the common science fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future.
In fiction
Authors who have used self-replicating machines in works of fiction include Philip K. Dick, Arthur C. Clarke, Karel Čapek (R.U.R.: Rossum’s Universal Robots, 1920), John Sladek (The Reproductive System), Samuel Butler (Erewhon), Dennis E. Taylor, and E. M. Forster (The Machine Stops, 1909).
Other sources
A number of patents have been granted for self-replicating machine concepts. "Self reproducing fundamental fabricating machines (F-Units)" Inventor: Collins; Charles M. (Burke, Va.) (August 1997), " Self reproducing fundamental fabricating machine system" Inventor: Collins; Charles M. (Burke, Va.)(June 1998); and Collins' PCT patent WO 96/20453: "Method and system for self-replicating manufacturing stations" Inventors: Merkle; Ralph C. (Sunnyvale, Calif.), Parker; Eric G. (Wylie, Tex.), Skidmore; George D. (Plano, Tex.) (January 2003).
Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.
In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors), in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. This book included a new molecular assembler design, a primer on the mathematics of replication, and the first comprehensive analysis of the entire replicator design space.
See also
Autopoiesis
Grey goo scenario
Self-reconfiguring modular robot
AI takeover
3D printing
Computer virus
Computer worm
Ecophagy
Existential risk from advanced artificial intelligence
Astrochicken
Lights out (manufacturing)
Nanorobotics
Spiegelman's Monster
Self-replicating spacecraft
RepRap project
Self-reconfiguring and self-reproducing molecube robots
Quine
References
Further reading
M. Sipper, Fifty years of research on self-replication: An overview, Artificial Life, vol. 4, no. 3, pp. 237–257, Summer 1998.
Freeman Dyson expanded upon Neumann's automata theories, and advanced a biotechnology-inspired theory. See Astrochicken.
The first technical design study of a self-replicating interstellar probe was published in a 1980 paper by Robert Freitas.
Clanking replicators are also mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.
Article about a proposed clanking replicator system to be used for developing Earthly deserts in the October 1995 Discover Magazine, featuring forests of solar panels that powered desalination equipment to irrigate the land.
In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego(tm) robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
In 1998, Chris Phoenix suggested a general idea for a macroscale replicator on the sci.nanotech newsgroup, operating in a pool of ultraviolet-cured liquid plastic, selectively solidifying the plastic to form solid parts. Computation could be done by fluidic logic. Power for the process could be supplied by a pressurized source of the liquid.
In 2001, Peter Ward mentioned an escaped clanking replicator destroying the human race in his book Future Evolution.
In 2004, General Dynamics completed a study for NASA's Institute for Advanced Concepts. It concluded that complexity of the development was equal to that of a Pentium 4, and promoted a design based on cellular automata.
In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication, in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references.
In 2005, Adrian Bowyer of the University of Bath started the RepRap project to develop a rapid prototyping machine that would be able to replicate itself, making such machines cheap enough for people to buy and use in their homes. The project is releasing material under the GNU GPL.
In 2015, advances in graphene and silicene suggested that they could form the basis for a neural network with densities comparable to the human brain if integrated with silicon carbide-based nanoscale CPUs containing memristors.
The power source might be solar, or possibly radioisotope-based, given that new liquid-based compounds can generate substantial power from radioactive decay.
Artificial life
Robotics concepts
Self-organization
Reproduction
Machines
Thought experiments | Self-replicating machine | [
"Physics",
"Mathematics",
"Technology",
"Engineering",
"Biology"
] | 4,930 | [
"Self-organization",
"Machines",
"Behavior",
"Self-replicating machines",
"Reproduction",
"Biological interactions",
"Self-replication",
"Physical systems",
"Mechanical engineering",
"Dynamical systems"
] |
1,600,294 | https://en.wikipedia.org/wiki/Discrete%20modelling | Discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, formulae are fit to discrete data—data that could potentially take on only a countable set of values, such as the integers, and which are not infinitely divisible. A common method in this form of modelling is to use recurrence relations.
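As an illustration, a recurrence relation such as a discrete logistic growth model can be iterated directly to produce a discrete series that can then be compared with data; the parameter values here are assumptions chosen only for the example.

# Discrete logistic growth: x[n+1] = x[n] + r * x[n] * (1 - x[n] / K)
r, K = 0.3, 1000.0   # assumed growth rate and carrying capacity
x = 10.0             # assumed initial population
series = [x]
for n in range(20):  # advance the recurrence over 20 discrete time steps
    x = x + r * x * (1 - x / K)
    series.append(x)
print(round(series[-1]))  # population after 20 steps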
Applied mathematics | Discrete modelling | [
"Mathematics"
] | 70 | [
"Applied mathematics",
"Applied mathematics stubs"
] |
1,600,321 | https://en.wikipedia.org/wiki/Chicago%20Sanitary%20and%20Ship%20Canal | The Chicago Sanitary and Ship Canal, historically known as the Chicago Drainage Canal, is a canal system that connects the Chicago River to the Des Plaines River. It reverses the direction of the Main Stem and the South Branch of the Chicago River, which now flows out of Lake Michigan rather than into it. The related Calumet-Saganashkee Channel does the same for the Calumet River a short distance to the south, joining the Chicago canal about halfway along its route to the Des Plaines. The two provide the only navigation for ships between the Great Lakes Waterway and the Mississippi River system.
The canal was in part built as a sewage treatment scheme. Prior to its opening in 1900, sewage from the city of Chicago was dumped into the Chicago River and flowed into Lake Michigan. The city's drinking water supply was (and remains) located offshore, and there were fears that the sewage could reach the intake and cause serious disease outbreaks. Since the sewer systems were already flowing into the river, the decision was made to reverse the flow of the river, thereby sending all the sewage inland where it could be diluted before emptying it into the Des Plaines.
Another goal of the construction was to replace the shallow and narrow Illinois and Michigan Canal (I&M), which had originally connected Lake Michigan with the Mississippi starting in 1848. As part of the construction of the new canal, the entire route was built to allow much larger ships to navigate it. The new canal is far wider and deeper, over three times the size of the I&M. The I&M became a secondary route with the new canal's opening and was shut down entirely with the creation of the Illinois Waterway network in 1933.
The building of the Chicago canal served as intensive and practical training for engineers who later built the Panama Canal. The canal is operated by the Metropolitan Water Reclamation District of Greater Chicago. In 1999, the system was named a Civil Engineering Monument of the Millennium by the American Society of Civil Engineers (ASCE). The Canal was listed on the National Register of Historic Places on December 20, 2011.
Reasons for construction
Early Chicago sewage systems discharged directly into Lake Michigan or into the Chicago River, which itself flowed into the lake. The city's water supply also comes from the lake, through water intake cribs located offshore. There were fears that sewage could infiltrate the water supply, leading to typhoid fever, cholera, and dysentery. During a tremendous storm in 1885, the rainfall washed refuse from the river far out into the lake (although reports of an 1885 cholera epidemic are untrue), spurring a panic that a future similar storm would cause a huge epidemic in Chicago. The only reason for the storm not causing such a catastrophic event was that the weather was cooler than normal. The Sanitary District of Chicago (now The Metropolitan Water Reclamation District) was created by the Illinois legislature in 1889 in response to this close call.
In addition, the canal was built to supplement and ultimately replace the older and smaller Illinois and Michigan Canal (built 1848) as a conduit to the Mississippi River system. In 1871, the old canal had been deepened in an attempt to reverse the river and improve shipping but the reversal of the river only lasted one season. The I&M canal was also badly polluted as a result of unrestricted dumping from city sewers and industries, such as the Union Stock Yards.
Planning and construction, 1887–1922
By 1887, it was decided to reverse the flow of the Chicago River through civil engineering. Engineer Isham Randolph noted that a ridge some distance inland from the lakeshore divided the Mississippi River drainage system from the Great Lakes drainage system. This low divide had been known since pre-Columbian times by the Native Americans, who used it as the Chicago Portage to cross from the Chicago River drainage to the Des Plaines River basin drainage. The Illinois and Michigan Canal was cut across that divide in the 1840s. In an attempt to better drain sewage and pollution, the flow of the Chicago River had already been reversed in 1871, when the Illinois and Michigan Canal was deepened enough to reverse the river's flow, though only for one season. A plan soon emerged to again cut through the ridge and reverse the flow permanently, carrying wastewater away from the lake, through the Des Plaines and Illinois rivers, to the Mississippi River and the Gulf of Mexico. In 1889, the Illinois General Assembly created the Sanitary District of Chicago (SDC) to carry out the plan. After four years of turmoil during construction, Isham Randolph was appointed Chief Engineer for the newly formed Sanitary District of Chicago and resolved many of the issues surrounding the project. While the canal was being built, permanent reversal of the Chicago River was attained in 1892, when the Army Corps of Engineers further deepened the Illinois and Michigan Canal.
One of the issues for Randolph to resolve was a strike of about 2000 union workers, centered in Lemont and Joliet. On June 1, 1893, quarrymen went out to protest a wage cut, an action that also drew in 1200 canal workers. Reports describe 400 quarrymen marching along the length of the canal project on June 2, between Lemont and Romeo, conducting a "reign of terror" at worksites, "armed with clubs and revolvers", "almost crazed with liquor". On the 9th strikers clashed with replacement workers and local law enforcement, and Governor Altgeld called out the First and Second Regiments of the Illinois National Guard. Dozens were wounded and at least five killed: strikers Gregor Kilka, Jacob (or Ignatz) Ast, Thomas Moorski, Mike Berger, and 17-year-old bystander John Kluga. The strike was settled by the 15th.
The new Chicago Sanitary and Ship Canal, linking the south branch of the Chicago River to the Des Plaines River at Lockport, was opened on January 2, 1900, in advance of an application by the Missouri Attorney General for an injunction against the opening. However, it was not until January 17 that the complete flow of the water was released. Further construction from 1903 to 1907 extended the canal to Joliet, as the SDC wanted to replace the previously built Illinois and Michigan Canal with the Chicago Sanitary and Ship Canal. The rate of flow is controlled by the Lockport Powerhouse, sluice gates at Chicago Harbor and at the O'Brien Lock in the Calumet River, and also by pumps at Wilmette Harbor. Two more canals were later built to add to the system: the North Shore Channel in 1910, and the Calumet-Saganashkee Channel in 1922.
Construction of the Ship and Sanitary Canal was the largest earth-moving operation that had been undertaken in North America up to that time. It was also notable for training a generation of engineers, many of whom later worked on the Panama Canal. In 1989, the Sanitary District of Chicago was renamed the Metropolitan Water Reclamation District of Greater Chicago.
Diversion of water from the Great Lakes
The Chicago Sanitary and Ship Canal is designed to work by taking water from Lake Michigan and discharging it into the Mississippi River watershed. At the time of construction, a specific amount of water diversion was authorized by the United States Army Corps of Engineers (USACE) and approved by the Secretary of War, under provisions of various Rivers and Harbors Acts; over the years however, this limit was not honored or well regulated. While the increased flow more rapidly flushed the untreated sewage, it also was seen as a hazard to navigation, a concern to USACE in relation to the level of the Great Lakes and the St. Lawrence River, from which the water was diverted. Litigation ensued from 1907, which eventually saw states downstream of the canal siding with the sanitary district and those states upstream of Lake Michigan with Canada siding against the district. The litigation was eventually decided by the Supreme Court in Sanitary District of Chicago v. United States in 1925, and again in Wisconsin v. Illinois in 1929. In 1930, management of the canal was turned over to the United States Army Corps of Engineers. The Corps of Engineers reduced the flow of water from Lake Michigan into the canal, but kept it open for navigation purposes. These decisions prompted the sanitary district to accelerate their treatment of raw sewage. Today, diversions from the Great Lakes system are regulated by an international treaty with Canada, through the International Joint Commission, and by governors of the Great Lakes states.
Pollution of the canals
Most local sewers in the Chicago area were built over 100 years ago before wastewater treatment existed. They were designed to drain sanitary flow and a limited amount of stormwater directly into the river. If intercepting sewers and the Metropolitan Water Reclamation District of Greater Chicago (MWRD) water reclamation plants reach capacity during heavy rain, the local sewer continues to drain, or “overflow,” to a waterway, thus causing concern for pollution. However, the MWRD’s Tunnel and Reservoir Plan (TARP) has worked to decrease the combined sewage overflow (CSOs) and nearly eliminated them in the Calumet Area River System. Since the tunnels became operational in 2006, CSOs have been reduced from an average of 100 days per year to 50. Since Thornton Reservoir came online in 2015, CSOs have been nearly eliminated. TARP captures and stores combined stormwater and sewage that would otherwise overflow from sewers into waterways in rainy weather. This stored water is pumped from TARP to water reclamation plants to be cleaned before being released to waterways.
Asian carp and the canal
On November 20, 2009, the Corps of Engineers announced a single sample of DNA from Asian carp had been found above the electric barrier constructed in the canal in an attempt to prevent carp from migrating into the Great Lakes. The silver carp, also known as the flying carp, displace native species of fish by filter feeding and removing the bottom of the food chain. It migrated through the Mississippi River system, and could make its way into the Great Lakes, through the man-made canal. Carp were introduced to the U.S. with the blessing of the Environmental Protection Agency (EPA) in the 1970s to help remove algae from catfish farms in Arkansas. They escaped the farms.
On December 2, 2009, the Chicago Sanitary and Ship Canal closed, as the EPA and the Illinois Department of Natural Resources (IDNR) began applying a fish poison, rotenone, in an effort to kill Asian carp north of Lockport. Although no Asian carp were found in the two months of commercial and electrofishing, the massive fish kill did yield a single carp.
On December 21, 2009, Michigan Attorney General Mike Cox filed a lawsuit with the Supreme Court seeking the immediate closure of the Chicago Sanitary and Ship Canal to keep Asian carp out of Lake Michigan. The state of Illinois and the Corps of Engineers, which constructed the Canal, are co-defendants in the lawsuit.
In response to the Michigan lawsuit, on January 5, 2010, Illinois State Attorney General Lisa Madigan filed a counter-suit with the Supreme Court requesting that it reject Michigan's claims. Siding with the State of Illinois, both the Illinois Chamber of Commerce and the American Waterways Operators have filed affidavits, arguing that closing the Chicago Sanitary and Ship Canal would upset the movement of millions of tons of vital shipments of iron ore, coal, grain and other cargo, totaling more than $1.5 billion a year, and contribute to the loss of hundreds, perhaps thousands of jobs. However, Michigan along with several other Great Lakes states argue that the sport and commercial fishery and tourism associated with the fishery of the entire Great Lakes region is estimated at $7 billion a year, and impacts the economies of all Great Lakes states and Canada.
On January 19, 2010, the U.S. Supreme Court rejected the request for a preliminary injunction closing the canal. In August 2011, the United States Court of Appeals also rejected the preliminary injunction.
See also
Chicago 1885 cholera epidemic myth
Chicago flood
Tunnel and Reservoir Plan (TARP)
Isham Randolph
References
External links
A History from the Chicago Public Library. (However, this credits Rudolph Hering, not Isham Randolph, with the project.)
An album of photographs of the dig, including a 26 stanza poem written by Isham Randolph to Admiral Dewey on the opening of the canal
History and Heritage of Civil Engineering – Reversal of the Chicago River
Graph of Lakes Michigan and Huron water levels since 1860
Evaluation of the Potential for Hysteresis in Index-Velocity Ratings for the Chicago Sanitary and Ship Canal Near Lemont, Illinois United States Geological Survey
Canals in Illinois
Ship canals
Water supply and sanitation in the United States
Canals opened in 1900
Canals on the National Register of Historic Places in Illinois
Buildings and structures on the National Register of Historic Places in Chicago
Buildings and structures on the National Register of Historic Places in Cook County, Illinois
Historic districts in Chicago
Illinois waterways
Interbasin transfer
Transportation buildings and structures in Chicago
Transportation buildings and structures in DuPage County, Illinois
Transportation buildings and structures in Will County, Illinois
Lockport, Illinois
United States Army Corps of Engineers
Historic American Engineering Record in Illinois
Historic Civil Engineering Landmarks
Metropolitan Water Reclamation District of Greater Chicago | Chicago Sanitary and Ship Canal | [
"Engineering",
"Environmental_science"
] | 2,643 | [
"Hydrology",
"United States Army Corps of Engineers",
"Historic Civil Engineering Landmarks",
"Engineering units and formations",
"Interbasin transfer",
"Civil engineering"
] |
1,600,474 | https://en.wikipedia.org/wiki/Momentum%20exchange%20tether | A momentum exchange tether is a kind of space tether that could theoretically be used as a launch system, or to change spacecraft orbits. Momentum exchange tethers create a controlled force on the end-masses of the system due to the pseudo-force known as centrifugal force. While the tether system rotates, the objects on either end of the tether will experience continuous acceleration; the magnitude of the acceleration depends on the length of the tether and the rotation rate. Momentum exchange occurs when an end body is released during the rotation. The transfer of momentum to the released object will cause the rotating tether to lose energy, and thus lose velocity and altitude. However, using electrodynamic tether thrusting or ion propulsion, the system can then re-boost itself with little or no expenditure of consumable reaction mass.
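For a tether arm of length r spinning at angular rate ω, the tip speed is v = ωr and the tip acceleration is a = ω²r = v²/r; the short sketch below uses assumed example numbers only to illustrate the scaling.

import math

arm    = 50_000.0   # metres from the system's centre of mass to the tip (assumed)
period = 1_800.0    # seconds per rotation (assumed)
omega  = 2 * math.pi / period
tip_speed = omega * arm        # v = omega * r
tip_accel = omega ** 2 * arm   # a = omega^2 * r
print(round(tip_speed), round(tip_accel / 9.81, 2))  # about 175 m/s and 0.06 g here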
A non-rotating tether is, strictly speaking, a tether that rotates exactly once per orbit, so that it always keeps a vertical orientation relative to the parent body. A spacecraft arriving at the lower end of this tether, or departing from the upper end, will take momentum from the tether, while a spacecraft departing from the lower end of the tether, or arriving at the upper end, will add momentum to the tether.
In some cases momentum exchange systems are intended to run as balanced transportation schemes where an arriving spacecraft or payload is exchanged with one leaving with the same speed and mass, and then no net change in momentum or angular momentum occurs.
Tether systems
Tidal stabilization
Gravity-gradient stabilization, also called "gravity stabilization" and "tidal stabilization", is a simple and reliable method for controlling the attitude of a satellite that requires no electronic control systems, rocket motors or propellant.
This type of attitude control tether has a small mass on one end, and a satellite on the other. Tidal forces stretch the tether between the two masses. One way of explaining this is that the upper end mass of the system is moving faster than orbital velocity for its altitude, so centrifugal force tends to pull it further away from the planet it is orbiting, while the lower end mass is moving at less than orbital speed for its altitude, so it tends to fall closer to the planet. The end result is that the tether is under constant tension and hangs in a vertical orientation. Simple satellites have often been stabilized this way, either with tethers or with how the mass is distributed within the satellite.
As with any freely hanging object, it can be disturbed and start to swing. Since there is no atmospheric drag in space to slow the swing, a small bottle of fluid with baffles may be mounted in the spacecraft to damp the pendulum vibrations via the viscous friction of the fluid.
Electrodynamic tethers
In a strong planetary magnetic field such as around the Earth, a conducting tether can be configured as an electrodynamic tether. This can either be used as a dynamo to generate power for the satellite at the cost of slowing its orbital velocity, or it can be used to increase the orbital velocity of the satellite by putting power into the tether from the satellite's power system. Thus the tether can be used to either accelerate or to slow an orbiting spacecraft without using any rocket propellant.
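The available thrust follows from the Lorentz force on a current-carrying conductor, F = B·I·L when the field is perpendicular to the tether; the numbers below are illustrative assumptions, not a design.

B = 3.0e-5    # tesla, rough geomagnetic field strength in low Earth orbit (assumed)
I = 2.0       # amperes of current driven through the tether (assumed)
L = 5_000.0   # metres of conducting tether (assumed)
print(round(B * I * L, 3))  # 0.3 newtons of thrust or drag, depending on current direction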
When using this technique with a rotating tether, the current through the tether must alternate in phase with the rotation rate of the tether in order to produce either a consistent slowing force or a consistent accelerating force.
Whether slowing or accelerating the satellite, the electrodynamic tether pushes against the planet's magnetic field, and thus the momentum gained or lost ultimately comes from the planet.
Sky-hooks
A sky-hook is a theoretical class of orbiting tether propulsion intended to lift payloads to high altitudes and speeds. Simple sky-hooks are essentially partial elevators, extending some distance below a base-station orbit and allowing orbital insertion by lifting the cargo. Most proposals spin the tether so that its angular momentum also provides energy to the cargo, speeding it up to orbital velocity or beyond while slowing the tether. Some form of propulsion is then applied to the tether to regain the angular momentum.
Bolo
A Bolo, or rotating tether, is a tether that rotates more than once per orbit and whose endpoints have a significant tip speed. The maximum speed of the endpoints is limited by the strength of the cable material, the taper, and the safety factor it is designed for.
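One way to see why material strength caps the tip speed is through the taper a constant-stress spinning tether needs: neglecting the payload's contribution, the hub-to-tip cross-section ratio grows roughly as exp((v/v_c)²), where v_c = sqrt(2σ/ρ) is a characteristic velocity set by the working stress σ and density ρ. The material figures in this sketch are assumptions for illustration.

import math

rho   = 1500.0   # kg/m^3, assumed density of a high-strength polymer fibre
sigma = 3.0e9    # Pa, assumed working stress after applying a safety factor
v_c   = math.sqrt(2 * sigma / rho)   # characteristic velocity, about 2000 m/s here

def taper_ratio(tip_speed):
    # Approximate hub-to-tip area ratio for constant stress, payload term neglected.
    return math.exp((tip_speed / v_c) ** 2)

for v in (1000.0, 2000.0, 4000.0, 7800.0):
    print(int(v), round(taper_ratio(v), 1))
# The ratio explodes as the tip speed approaches orbital velocity, which is why
# rotovators that fully cancel Earth orbital speed are impractical with current materials.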
The purpose of the Bolo is to either speed up, or slow down, a spacecraft that docks with it without using any of the spacecraft's on-board propellant and to change the spacecraft's orbital flight path. Effectively, the Bolo acts as a reusable upper stage for any spacecraft that docks with it.
The momentum imparted to the spacecraft by the Bolo is not free. In the same way that the Bolo changes the spacecraft's momentum and direction of travel, the Bolo's orbital momentum and rotational momentum is also changed, and this costs energy that must be replaced. The idea is that the replacement energy would come from a more efficient and lower cost source than a chemical rocket motor. Two possible lower cost sources for this replacement energy are an ion propulsion system, or an electrodynamic tether propulsion system that would be part of the Bolo. An essentially free source of replacement energy is momentum gathered from payloads to be accelerated in the other direction, suggesting that the need for adding energy from propulsion systems will be quite minimal with balanced, two-way, space commerce.
Rotovator
Rotovators are rotating tethers with a rotational direction such that the lower endpoint of the tether is moving slower than the orbital velocity of the tether and the upper endpoint is moving faster. The word is a portmanteau derived from the words rotor and elevator.
If the tether is long enough and the rotation rate high enough, it is possible for the lower endpoint to completely cancel the orbital speed of the tether such that the lower endpoint is stationary with respect to the planetary surface that the tether is orbiting. As described by Moravec, this is "a satellite that rotates like a wheel". The tip of the tether moves in approximately a cycloid, in which it is momentarily stationary with respect to the ground. In this case, a payload that is "grabbed" by a capture mechanism on the rotating tether during the moment when it is stationary would be picked up and lifted into orbit; and potentially could be released at the top of the rotation, at which point it is moving with a speed significantly greater than the escape velocity and thus could be released onto an interplanetary trajectory. (As with the bolo, discussed above, the momentum and energy given to the payload must be made up, either with a high-efficiency rocket engine, or with momentum gathered from payload moving the other direction.)
On bodies with an atmosphere, such as the Earth, the tether tip must stay above the dense atmosphere. On bodies with reasonably low orbital speed (such as the Moon and possibly Mars), a rotovator in low orbit can potentially touch the ground, thereby providing cheap surface transport as well as launching materials into cislunar space. In January 2000, The Boeing Company completed a study of tether launch systems including two-stage tethers that had been commissioned by the NASA Institute for Advanced Concepts.
Earth launch assist bolo
Unfortunately an Earth-to-orbit rotovator cannot be built from currently available materials since the thickness and tether mass to handle the loads on the rotovator would be uneconomically large. A "watered down" rotovator with two-thirds the rotational speed, however, would halve the centripetal acceleration stresses.
Therefore, another trick to achieve lower stresses is that rather than picking up a cargo from the ground at zero velocity, a rotovator could pick up a moving vehicle and sling it into orbit. For example, a rotovator could pick up a Mach 12 aircraft from the upper atmosphere of the Earth and move it into orbit without using rockets, and could likewise catch such a vehicle and lower it into atmospheric flight. It is easier for a rocket to achieve the lower tip speed, so "single stage to tether" has been proposed. One such is called the Hyper-sonic Airplane Space Tether Orbital Launch (HASTOL). Either air breathing or rocket to tether could save a great deal of fuel per flight, and would permit for both a simpler vehicle and more cargo.
The company Tethers Unlimited, Inc. (founded by Robert Forward and Robert P. Hoyt) has called this approach "Tether Launch Assist". It has also been referred to as a space bolas. As of 2020, however, the company's focus had shifted to deorbit-assist modules and marine tethers.
Investigation of "Tether Launch Assist" concepts in 2013 indicated that the concept may become marginally economical in the near future, once rotovators with a high enough (~10 W/kg) power-to-mass ratio are developed.
Space elevator
A space elevator is a space tether that is attached to a planetary body. For example, on Earth, a space elevator would go from the equator to well above geosynchronous orbit.
A space elevator does not need to be powered as a rotovator does, because it gets any required angular momentum from the planetary body. The disadvantage is that it is much longer, and for many planets a space elevator cannot be constructed from known materials. A space elevator on Earth would require material strengths outside current technological limits (2014). Martian and lunar space elevators could be built with modern-day materials however. A space elevator on Phobos has also been proposed.
Space elevators also have larger amounts of potential energy than a rotovator, and if heavy parts (like a "dropped wrench") should fall they would reenter at a steep angle and impact the surface at near orbital speeds. On most anticipated designs, if the cable component itself fell, it would burn up before hitting the ground.
Cislunar transportation system
Although it might be thought that a cislunar tether transportation system would require constant energy input, it can in fact be shown to be energetically favorable to lift cargo off the surface of the Moon and drop it into a lower Earth orbit, and thus it can be achieved without any significant use of propellant, since the Moon's surface is in a comparatively higher potential energy state. Also, this system could be built with a total mass of less than 28 times the mass of the payloads.
Rotovators can thus be charged by momentum exchange. Momentum charging uses the rotovator to move mass from a place that is "higher" in a gravity field to a place that is "lower". The technique exploits the Oberth effect: releasing the payload when the tether tip is moving with a higher linear speed, lower in the gravitational potential, gives the payload more specific energy (and ultimately more speed) than is lost in picking it up at a higher gravitational potential, even if the rotation rate is the same. For example, it is possible to use a system of two or three rotovators to implement trade between the Moon and Earth. The rotovators are charged by lunar mass (dirt, if exports are not available) dumped on or near the Earth, and can use the momentum so gained to boost Earth goods to the Moon. The momentum and energy exchange can be balanced with equal flows in either direction, or can increase over time.
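The energy argument behind releasing at the low, fast point can be made explicit with one line of algebra: a speed change \Delta v applied at speed v changes the specific kinetic energy by

\Delta E = \tfrac{1}{2}(v + \Delta v)^2 - \tfrac{1}{2}v^2 = v\,\Delta v + \tfrac{1}{2}(\Delta v)^2,

so the same \Delta v delivers more energy the faster the release point is already moving, i.e. the lower it sits in the gravitational potential; this is the Oberth effect relied on above.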
Similar systems of rotovators could theoretically open up inexpensive transportation throughout the Solar System.
Tether cable catapult system
A tether cable catapult system is a system in which two or more long conducting tethers are held rigidly in a straight line, attached to a heavy mass. Power is applied to the tethers and is picked up by a vehicle that has linear motors on it, which it uses to push itself along the length of the cable. Near the end of the cable the vehicle releases a payload and slows and stops itself, and the payload carries on at very high velocity. The calculated maximum speed for this system is extremely high, more than 30 times the speed of sound in the cable.
See also
Yo-yo de-spin
References
Space elevator
Spaceflight concepts | Momentum exchange tether | [
"Astronomy",
"Technology"
] | 2,552 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Space elevator"
] |
16,266,307 | https://en.wikipedia.org/wiki/Timeline%20of%20ancient%20Greek%20mathematicians | This is a timeline of mathematicians in ancient Greece.
Timeline
Historians traditionally place the beginning of Greek mathematics proper in the age of Thales of Miletus (ca. 624–548 BC), which is indicated by the line at 600 BC on the timeline. The line at 300 BC indicates the approximate year in which Euclid's Elements was first published. The line at 300 AD passes through Pappus of Alexandria (), who was one of the last great Greek mathematicians of late antiquity. Note that the solid thick line is at year zero, which is a year that does not exist in the Anno Domini (AD) calendar year system.
The mathematician Heliodorus of Larissa is not listed due to the uncertainty of when he lived, which was possibly during the 3rd century AD, after Ptolemy.
Overview of the most important mathematicians and discoveries
Of these mathematicians, those whose work stands out include:
Thales of Miletus () is the first known individual to use deductive reasoning applied to geometry, by deriving four corollaries to Thales' theorem.
Pythagoras () was credited with many mathematical and scientific discoveries, including the Pythagorean theorem, Pythagorean tuning, the five regular solids, the Theory of Proportions, the sphericity of the Earth, and the identity of the morning and evening stars as the planet Venus.
Theaetetus () proved that there are exactly five regular convex polyhedra (in particular, that no regular convex polyhedra exist other than these five). This fact led these five solids, now called the Platonic solids, to play a prominent role in the philosophy of Plato (and consequently also influenced later Western philosophy), who associated each of the four classical elements with a regular solid: earth with the cube, air with the octahedron, water with the icosahedron, and fire with the tetrahedron (of the fifth Platonic solid, the dodecahedron, Plato obscurely remarked, "...the god used [it] for arranging the constellations on the whole heaven"). The last book (Book XIII) of Euclid's Elements, which is probably derived from the work of Theaetetus, is devoted to constructing the Platonic solids and describing their properties; Andreas Speiser has advocated the view that the construction of the five regular solids is the chief goal of the deductive system canonized in the Elements. The astronomer Johannes Kepler proposed a model of the Solar System in which the five solids were set inside one another and separated by a series of inscribed and circumscribed spheres.
Eudoxus of Cnidus () is considered by some to be the greatest of classical Greek mathematicians, and in all antiquity second only to Archimedes. Book V of Euclid's Elements is thought to be largely due to Eudoxus.
Aristarchus of Samos () presented the first known heliocentric model that placed the Sun at the center of the known universe with the Earth revolving around it. Aristarchus identified the "central fire" with the Sun, and he put the other planets in their correct order of distance around the Sun. In On the Sizes and Distances, he calculates the sizes of the Sun and Moon, as well as their distances from the Earth in terms of Earth's radius. However, Eratosthenes () was the first person to calculate the circumference of the Earth. Posidonius () also measured the diameters and distances of the Sun and the Moon as well as the Earth's diameter; his measurement of the diameter of the Sun was more accurate than Aristarchus', differing from the modern value by about half.
Euclid (fl. 300 BC) is often referred to as the "founder of geometry" or the "father of geometry" because of his incredibly influential treatise called the Elements, which was the first, or at least one of the first, axiomatized deductive systems.
Archimedes () is considered to be the greatest mathematician of ancient history, and one of the greatest of all time. Archimedes anticipated modern calculus and analysis by applying concepts of infinitesimals and the method of exhaustion to derive and rigorously prove a range of geometrical theorems, including: the area of a circle; the surface area and volume of a sphere; area of an ellipse; the area under a parabola; the volume of a segment of a paraboloid of revolution; the volume of a segment of a hyperboloid of revolution; and the area of a spiral. He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, including an explanation of the principle of the lever. In a lost work, he discovered and enumerated the 13 Archimedean solids, which were later rediscovered by Johannes Kepler around 1620 A.D.
Apollonius of Perga () is known for his work on conic sections and his study of geometry in 3-dimensional space. He is considered one of the greatest ancient Greek mathematicians.
Hipparchus () is considered the founder of trigonometry and also solved several problems of spherical trigonometry. He was the first whose quantitative and accurate models for the motion of the Sun and Moon survive. In his work On Sizes and Distances, he measured the apparent diameters of the Sun and Moon and their distances from Earth. He is also reputed to have measured the Earth's precession.
Diophantus wrote Arithmetica, which dealt with solving algebraic equations and introduced syncopated algebra, a precursor to modern symbolic algebra. Because of this, Diophantus is sometimes known as "the father of algebra", a title he shares with Muhammad ibn Musa al-Khwarizmi. In contrast to Diophantus, al-Khwarizmi was not primarily interested in integers, and he gave an exhaustive and systematic description of solving quadratic equations and some higher-order algebraic equations. However, al-Khwarizmi did not use symbolic or syncopated algebra but rather "rhetorical algebra" or ancient Greek "geometric algebra" (the ancient Greeks had expressed and solved particular instances of algebraic equations in terms of geometric properties such as length and area, but they did not solve such problems in general). An example of "geometric algebra" is: given a triangle (or rectangle, etc.) with a certain area, and given the lengths of some of its sides (or some other properties), find the length of the remaining side, and justify or prove the answer with geometry. Solving such a problem is often equivalent to finding the roots of a polynomial, as in the illustration below.
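A concrete illustration (a modern, hypothetical example, not one taken from the ancient texts): suppose a rectangle has area 30 and its longer side exceeds its shorter side by 7. Writing x for the shorter side gives

x(x + 7) = 30, that is, x^2 + 7x - 30 = 0, which factors as (x - 3)(x + 10) = 0,

so the shorter side is x = 3 and the longer side is 10 (the negative root has no geometric meaning). An ancient geometric solution would construct this length and justify it by an argument about areas rather than by manipulating symbols.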
Hellenic mathematicians
The conquests of Alexander the Great in the late 4th century BC led to Greek culture being spread around much of the Mediterranean region, especially in Alexandria, Egypt. This is why the Hellenistic period of Greek mathematics is typically considered as beginning in the 4th century BC. During the Hellenistic period, many people living in the parts of the Mediterranean region subject to Greek influence ended up adopting the Greek language and sometimes also Greek culture. Consequently, some of the Greek mathematicians of this period may not have been "ethnically Greek" with respect to the modern Western notion of ethnicity, which is much more rigid than most other notions of ethnicity that existed in the Mediterranean region at the time. Ptolemy, for example, was said to have originated from Upper Egypt, which is far south of Alexandria. Regardless, their contemporaries considered them Greek.
Straightedge and compass constructions
For the most part, straightedge and compass constructions dominated ancient Greek mathematics and most theorems and results were stated and proved in terms of geometry. These proofs involved a straightedge (such as that formed by a taut rope), which was used to construct lines, and a compass, which was used to construct circles. The straightedge is an idealized ruler that can draw arbitrarily long lines but (unlike modern rulers) it has no markings on it. A compass can draw a circle starting from two given points: the center and a point on the circle.
A taut rope can be used to physically construct both lines (since it forms a straightedge) and circles (by rotating the taut rope around a point).
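In modern coordinate form (an anachronistic but convenient illustration; the ancients argued purely geometrically), the classic construction of the perpendicular bisector of a segment shows how these two tools translate into equations. Place the segment's endpoints at A = (0, 0) and B = (d, 0) and draw equal circles of radius r > d/2 about each endpoint:

x^2 + y^2 = r^2 and (x - d)^2 + y^2 = r^2.

Subtracting the two equations gives 2dx = d^2, so x = d/2: both intersection points of the circles lie on the vertical line x = d/2, and the straightedge line drawn through them is the perpendicular bisector of AB.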
Geometric constructions using lines and circles were also used outside of the Mediterranean region.
The Shulba Sutras from the Vedic period of Indian mathematics, for instance, contain geometric instructions on how to physically construct a high-quality fire-altar by using a taut rope as a straightedge. These altars could have various shapes, but for theological reasons they were all required to have the same area. This required high precision in construction, along with (written) instructions on how to geometrically construct such altars with the tools that were most widely available throughout the Indian subcontinent (and elsewhere) at the time.
Ancient Greek mathematicians went one step further by axiomatizing plane geometry in such a way that straightedge and compass constructions became mathematical proofs. Euclid's Elements was the culmination of this effort and for over two thousand years, even as late as the 19th century, it remained the "standard text" on mathematics throughout the Mediterranean region (including Europe and the Middle East), and later also in North and South America after European colonization.
Algebra
Ancient Greek mathematicians are known to have solved specific instances of polynomial equations with the use of straightedge and compass constructions, which simultaneously gave a geometric proof of the solution's correctness. Once a construction was completed, the answer could be found by measuring the length of a certain line segment (or possibly some other quantity). A quantity multiplied by itself, such as x · x for example, would often be constructed as a literal square with sides of length x, which is why the second power "x^2" is referred to as "x squared" in ordinary spoken language. Thus problems that would today be considered "algebra problems" were also solved by ancient Greek mathematicians, although not in full generality. A guide to systematically solving low-order polynomial equations for an unknown quantity (instead of just specific instances of such problems) would not appear until The Compendious Book on Calculation by Completion and Balancing by Muhammad ibn Musa al-Khwarizmi, who used Greek geometry to "prove the correctness" of the solutions given in the treatise. However, this treatise was entirely rhetorical (meaning that everything, including numbers, was written out in words structured as ordinary sentences) and did not use any of the "algebraic symbols" that are today associated with algebra problems – not even the syncopated algebra that appeared in Arithmetica.
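As an illustration of the method described there (rendered here in modern notation, which al-Khwarizmi himself did not use), the equation commonly cited as an example from his treatise is

x^2 + 10x = 39.

"Completing the square" adds 25 to both sides to obtain (x + 5)^2 = 64, so x + 5 = 8 and x = 3. The geometric justification treats x^2 as a literal square of side x, attaches two 5-by-x rectangles to it, and fills in the missing 5-by-5 corner to form a larger square of side x + 5 and area 39 + 25 = 64; the negative root was ignored because a length cannot be negative.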
See also
References
Ancient Greek mathematicians
Greek mathematics
History of geometry
History of mathematics