1,532,957
https://en.wikipedia.org/wiki/Cognitive%20load
In cognitive psychology, cognitive load refers to the amount of working memory resources used. However, it is essential to distinguish it from the actual construct of Cognitive Load (CL) or Mental Workload (MWL), which is studied widely in many disciplines. According to work conducted in the field of instructional design and pedagogy, broadly, there are three types of cognitive load: intrinsic cognitive load is the effort associated with a specific topic; extraneous cognitive load refers to the way information or tasks are presented to a learner; and germane cognitive load refers to the work put into creating a permanent store of knowledge (a schema). However, over the years, the additivity of these types of cognitive load has been investigated and questioned. Now it is believed that they circularly influence each other. Cognitive load theory was developed in the late 1980s out of a study of problem solving by John Sweller. Sweller argued that instructional design can be used to reduce cognitive load in learners. Much later, other researchers developed a way to measure perceived mental effort, which is indicative of cognitive load. Task-invoked pupillary response is a reliable and sensitive measurement of cognitive load that is directly related to working memory. Information may only be stored in long-term memory after first being attended to, and processed by, working memory. Working memory, however, is extremely limited in both capacity and duration. These limitations will, under some conditions, impede learning. Heavy cognitive load can have negative effects on task completion, and the experience of cognitive load is not the same in everyone. The elderly, students, and children experience different, and more often higher, amounts of cognitive load. The fundamental tenet of cognitive load theory is that the quality of instructional design will be raised if greater consideration is given to the role and limitations of working memory. With increased distractions, particularly from cell phone use, students are more prone to experiencing high cognitive load, which can reduce academic success. Theory In the late 1980s, John Sweller developed cognitive load theory out of a study of problem solving, in order "to provide guidelines intended to assist in the presentation of information in a manner that encourages learner activities that optimize intellectual performance". Sweller's theory employs aspects of information processing theory to emphasize the inherent limitations of concurrent working memory load on learning during instruction. It makes use of the schema as the primary unit of analysis for the design of instructional materials. History The history of cognitive load theory can be traced to the beginning of cognitive science in the 1950s and the work of G.A. Miller. In his classic paper, Miller was perhaps the first to suggest that our working memory capacity has inherent limits. His experimental results suggested that humans are generally able to hold only seven plus or minus two units of information in short-term memory. In 1973 Simon and Chase were the first to use the term "chunk" to describe how people might organize information in short-term memory. This chunking of memory components has also been described as schema construction. In the late 1980s John Sweller developed cognitive load theory (CLT) while studying problem solving. Studying learners as they solved problems, he and his associates found that learners often use a problem solving strategy called means-ends analysis.
He suggested that problem solving by means-ends analysis requires a relatively large amount of cognitive processing capacity, which may then not be available for schema construction. Sweller suggested that instructional designers should prevent this unnecessary cognitive load by designing instructional materials which do not involve problem solving. Examples of alternative instructional materials include what are known as worked examples and goal-free problems. In the 1990s, cognitive load theory was applied in several contexts. The empirical results from these studies led to the demonstration of several learning effects: the completion-problem effect; modality effect; split-attention effect; worked-example effect; and expertise reversal effect. Categories Cognitive load theory provides a general framework and has broad implications for instructional design, by allowing instructional designers to control the conditions of learning within an environment or, more generally, within most instructional materials. Specifically, it provides empirically based guidelines that help instructional designers decrease extraneous cognitive load during learning and thus refocus the learner's attention toward germane materials, thereby increasing germane (schema related) cognitive load. This theory differentiates between three types of cognitive load: intrinsic cognitive load, germane cognitive load, and extraneous cognitive load. Intrinsic Intrinsic cognitive load is the inherent level of difficulty associated with a specific instructional topic. The term was first used in the early 1990s by Chandler and Sweller. According to them, all instructions have an inherent difficulty associated with them (e.g., the calculation of 2 + 2, versus solving a differential equation). This inherent difficulty may not be altered by an instructor. However, many schemas may be broken into individual "subschemas" and taught in isolation, to be later brought back together and described as a combined whole. Germane load Germane load refers to the working memory resources that the learner dedicates to managing the intrinsic cognitive load associated with the essential information for learning. Unlike intrinsic load, which is directly related to the complexity of the material, germane load does not stem from the presented information but from the learner's characteristics. It does not represent an independent source of working memory load; rather, it is influenced by the relationship between intrinsic and extraneous load. If the intrinsic load is high and the extraneous load is low, the germane load will be high, as the learner can devote more resources to processing the essential material. However, if the extraneous load increases, the germane load decreases, and learning is affected because the learner must use working memory resources to deal with external elements instead of the essential content. This assumes a constant level of motivation, where all available working memory resources are focused on managing both intrinsic and extraneous cognitive load. (Figure: element interactivity and intrinsic, extraneous, and germane cognitive load.) Extraneous Extraneous cognitive load is generated by the manner in which information is presented to learners and is under the control of instructional designers. This load can be attributed to the design of the instructional materials.
Because there is a single, limited pool of cognitive resources, using resources to process the extraneous load reduces the resources available to process the intrinsic load and germane load (i.e., learning). Thus, especially when intrinsic and/or germane load is high (i.e., when a problem is difficult), materials should be designed so as to reduce the extraneous load. An example of extraneous cognitive load occurs when there are two possible ways to describe a square to a student. A square is a figure and should be described using a figural medium. Certainly an instructor can describe a square in a verbal medium, but it takes just a second and far less effort to see what the instructor is talking about when a learner is shown a square, rather than having one described verbally. In this instance, the efficiency of the visual medium is preferred. This is because it does not unduly load the learner with unnecessary information. This unnecessary cognitive load is described as extraneous. Chandler and Sweller introduced the concept of extraneous cognitive load. This article was written to report the results of six experiments that they conducted to investigate this working memory load. Many of these experiments involved materials demonstrating the split attention effect. They found that the format of instructional materials either promoted or limited learning. They proposed that differences in performance were due to higher levels of the cognitive load imposed by the format of instruction. "Extraneous cognitive load" is a term for this unnecessary (artificially induced) cognitive load. Extraneous cognitive load may have different components, such as the clarity of texts or interactive demands of educational software. Measurement By 1993, Paas and Van Merriënboer had developed a construct known as relative condition efficiency, which helps researchers measure perceived mental effort, an index of cognitive load. This construct provides a relatively simple means of comparing instructional conditions, taking into account both mental effort ratings and performance scores. Relative condition efficiency is calculated by subtracting standardized mental effort from standardized performance and dividing by the square root of two (a computational sketch appears at the end of this entry). Paas and Van Merriënboer used relative condition efficiency to compare three instructional conditions (worked examples, completion problems, and discovery practice). They found learners who studied worked examples were the most efficient, followed by those who used the problem completion strategy. Since this early study many other researchers have used this and other constructs to measure cognitive load as it relates to learning and instruction. The ergonomic approach seeks a quantitative neurophysiological expression of cognitive load which can be measured using common instruments, for example using the heart rate–blood pressure product (RPP) as a measure of both cognitive and physical occupational workload. Proponents believe that it may be possible to use RPP measures to set limits on workloads and to establish work allowances. There is active research interest in using physiological responses to indirectly estimate cognitive load, particularly by monitoring pupil diameter, eye gaze, respiratory rate, heart rate, or other factors. While some studies have found correlations between physiological factors and cognitive load, the findings have not held outside controlled laboratory environments.
Task-invoked pupillary response is one such physiological response of cognitive load on working memory, with studies finding that pupil dilation occurs with high cognitive load. Some researchers have compared different measures of cognitive load. For example, Deleeuw and Mayer (2008) compared three commonly used measures of cognitive load and found that they responded in different ways to extraneous, intrinsic, and germane load. A 2020 study showed that there may be various demand components that together form extraneous cognitive load, but that may need to be measured using different questionnaires. Effects of heavy cognitive load A heavy cognitive load typically creates error or some kind of interference in the task at hand. A heavy cognitive load can also increase stereotyping. This is because a heavy cognitive load pushes excess information into subconscious processing, which involves the use of schemas, the patterns of thought and behavior that help us to organize information into categories and identify the relationships between them. Stereotypical associations may be automatically activated by the use of pattern recognition and schemas, producing an implicit stereotype effect. Stereotyping is an extension of the fundamental attribution error, which also increases in frequency with heavier cognitive load. The notions of cognitive load and arousal contribute to the "overload hypothesis" explanation of social facilitation: in the presence of an audience, subjects tend to perform worse on subjectively complex tasks (whereas they tend to excel on subjectively easy tasks). Sub-population studies Individual differences As of 1984 it was established, for example, that there were individual differences in processing capacities between novices and experts. Experts have more knowledge or experience with regard to a specific task, which reduces the cognitive load associated with the task. Novices do not have this experience or knowledge and thus have heavier cognitive load. Elderly The danger of heavy cognitive load is seen in the elderly population. Aging can cause declines in the efficiency of working memory, which can contribute to higher cognitive load. Heavy cognitive load can disturb balance in elderly people. Heavy cognitive load and control of the center of mass are strongly correlated in the elderly population: as cognitive load increases, the sway in the center of mass in elderly individuals increases. A 2007 study examined the relationship between body sway and cognitive function during multitasking and found that disturbances in balance led to a decrease in performance on the cognitive task. Conversely, an increasing demand for balance can increase cognitive load. College students As of 2014, the increasing cognitive load for students using a laptop in school has become a concern. With the use of Facebook and other social forms of communication, adding multiple tasks jeopardizes students' performance in the classroom. When many cognitive resources are available, the probability of switching from one task to another is high and does not lead to optimal switching behavior. In a study from 2013, both students who were heavy Facebook users and students who sat near heavy Facebook users performed poorly, resulting in lower GPAs. Children In 2004, the British psychologists Alan Baddeley and Graham Hitch proposed that the components of working memory are in place at 6 years of age. They found a clear difference between adult and child knowledge.
These differences were due to developmental increases in processing efficiency. Children lack general knowledge, and this is what creates increased cognitive load in children. Children in impoverished families often experience even higher cognitive load in learning environments than those in middle-class families. These children are less likely to hear, talk, or learn about schooling concepts at home, because their parents often do not have formal education. When it comes to learning, their lack of experience with numbers, words, and concepts increases their cognitive load. As children grow older they develop superior basic processes and capacities. They also develop metacognition, which helps them to understand their own cognitive activities. Lastly, they gain greater content knowledge through their experiences. These elements help reduce cognitive load in children as they develop. Gesturing is a technique children use to reduce cognitive load while speaking. By gesturing, they can free up working memory for other tasks. Pointing allows a child to use the object they are pointing at as the best representation of it, which means they do not have to hold this representation in their working memory, thereby reducing their cognitive load. Additionally, gesturing about an object that is absent reduces the difficulty of having to picture it in their mind. Poverty As of 2013 it has been theorized that an impoverished environment can contribute to cognitive load. Regardless of the task at hand, or the processes used in solving the task, people who experience poverty also experience higher cognitive load. A number of factors contribute to the cognitive load in people with lower socioeconomic status that are not present in middle- and upper-class people. Embodiment and interactivity Bodily activity can be both advantageous and detrimental to learning, depending on how this activity is implemented. Cognitive load theorists have asked for updates that make CLT more compatible with insights from embodied cognition research. As a result, Embodied Cognitive Load Theory has been suggested as a means to predict the usefulness of interactive features in learning environments. In this framework, the benefits of an interactive feature (such as easier cognitive processing) need to exceed its cognitive costs (such as motor coordination) in order for an embodied mode of interaction to increase learning outcomes. Application in driving and piloting With the increase in secondary tasks inside the cockpit, cognitive load estimation has become an important problem for both automotive drivers and pilots. The research problem is investigated under various names, such as drowsiness detection and distraction detection. For automotive drivers, researchers have explored various physiological parameters such as heart rate, facial expression, and ocular parameters. In aviation there are numerous simulation studies on analysing pilots' distraction and attention using various physiological parameters. For military fast jet pilots, researchers have explored air-to-ground dive attacks and recorded cardiac, EEG and ocular parameters. See also Energy (psychological) (in scuba diving) References Further reading Journal special issues For those wishing to learn more about cognitive load theory, please consider reading these journals and special issues of those journals: Educational Psychologist, vol. 43 (4) Applied Cognitive Psychology vol. 20(3) (2006) Applied Cognitive Psychology vol. 21(6) (2007) ETR&D vol. 53 (2005) Instructional Science vol.
32(1) (2004) Educational Psychologist vol. 38(1) (2003) Learning and Instruction vol. 12 (2002) Computers in Human Behavior vol. 25 (2) (2009) For ergonomics standards see: ISO 10075-1:1991 Ergonomic Principles Related to Mental Workload – Part 1: General Terms and Definitions ISO 10075-2:1996 Ergonomic Principles Related To Mental Workload – Part 2: Design Principles ISO 10075-3:2004 Ergonomic Principles Related To Mental Workload – Part 3: Principles And Requirements Concerning Methods For Measuring And Assessing Mental Workload ISO 9241 Ergonomics of Human System Interaction Cognition Cognitive psychology Educational psychology Educational technology Learning Pedagogy Psychological methodology
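To make the relative condition efficiency construct from the Measurement section concrete, here is a minimal Python sketch. The z-scoring across the sample and the example ratings are illustrative assumptions of the sketch, not values from Paas and Van Merriënboer's data.

```python
import statistics

def relative_condition_efficiency(effort_ratings, performance_scores):
    """Relative condition efficiency: E = (z_performance - z_effort) / sqrt(2).
    Inputs are raw per-learner values; both are z-scored across the
    sample before combining (an assumption of this sketch)."""
    def zscores(xs):
        mu, sd = statistics.mean(xs), statistics.stdev(xs)
        return [(x - mu) / sd for x in xs]
    z_e = zscores(effort_ratings)
    z_p = zscores(performance_scores)
    return [(p - e) / (2 ** 0.5) for e, p in zip(z_e, z_p)]

# Hypothetical data: higher performance at lower effort yields higher efficiency.
effort = [3, 5, 7, 4, 6]            # e.g. mental effort ratings on a 9-point scale
performance = [80, 70, 55, 85, 60]  # e.g. test scores
print(relative_condition_efficiency(effort, performance))
```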
Cognitive load
[ "Biology" ]
3,291
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
1,532,991
https://en.wikipedia.org/wiki/Slayer%27s%20Slab
The Slayer's Slab is a title given to a medieval gravestone formerly in the graveyard of Lyminster church in West Sussex, England. It has now been moved inside the church to protect it from weathering. According to legend it is the gravestone of the dragonslayer who killed the Knucker who lived in the nearby knuckerhole. The stone has a cross on it overlaying a herringbone pattern, but no inscription to identify the tomb's occupant. References Headstones Monuments and memorials in West Sussex Stones
Slayer's Slab
[ "Physics" ]
114
[ "Stones", "Physical objects", "Matter" ]
1,533,051
https://en.wikipedia.org/wiki/Eruption%20column
An eruption column or eruption plume is a cloud of super-heated ash and tephra suspended in gases emitted during an explosive volcanic eruption. The volcanic materials form a vertical column or plume that may rise many kilometers into the air above the vent of the volcano. In the most explosive eruptions, the eruption column may rise over 40 km, penetrating the stratosphere. Stratospheric injection of aerosols by volcanoes is a major cause of short-term climate change. A common occurrence in explosive eruptions is column collapse, when the eruption column is or becomes too dense to be lifted high into the sky by air convection, and instead falls down the slopes of the volcano to form pyroclastic flows or surges (although the latter is less dense). On some occasions, if the material is not dense enough to fall, it may create pyrocumulonimbus clouds. Formation Eruption columns form in explosive volcanic activity, when the high concentration of volatile materials in the rising magma causes it to be disrupted into fine volcanic ash and coarser tephra. The ash and tephra are ejected at speeds of several hundred metres per second, and can rise rapidly to heights of several kilometres, lifted by enormous convection currents. Eruption columns may be transient, if formed by a discrete explosion, or sustained, if produced by a continuous eruption or closely spaced discrete explosions. Structure The solid and liquid materials in an eruption column are lifted by processes that vary as the material ascends: At the base of the column, material is violently forced upward out of the crater by the pressure of rapidly expanding gases, mainly steam. The gases expand because the pressure of the overlying rock rapidly decreases as the material approaches the surface. This region is called the gas thrust region and typically reaches only one or two kilometres above the vent. The convective thrust region covers most of the height of the column. The gas thrust region is very turbulent and surrounding air becomes mixed into it and heated. The air expands, reducing its density and rising. The rising air carries all the solid and liquid material from the eruption entrained in it upwards. As the column rises into less dense surrounding air, it will eventually reach an altitude where the hot, rising air is of the same density as the surrounding cold air. In this neutral buoyancy region, the erupted material will then no longer rise through convection, but solely through any upward momentum which it has. This is called the umbrella region, and is usually marked by the column spreading out sideways. The eruptive material and the surrounding cold air have the same density at the base of the umbrella region, and the top is marked by the maximum height to which momentum carries the material upward. Because the speeds are very low or negligible in this region, it is often distorted by stratospheric winds. Column heights The column will stop rising once it attains an altitude where it is more dense than the surrounding air. Several factors control the height that an eruption column can reach. Intrinsic factors include the diameter of the erupting vent, the gas content of the magma, and the velocity at which it is ejected. Extrinsic factors can be important, with winds sometimes limiting the height of the column, and the local atmospheric temperature gradient also playing a role. The atmospheric temperature in the troposphere normally decreases by about 6–7 K/km, but small changes in this gradient can have a large effect on the final column height.
Theoretically, the maximum achievable column height is thought to be about 45 km. In practice, column heights ranging from about 2 to 45 km are seen. Eruption columns with heights of over about 17 km break through the tropopause and inject particulates into the stratosphere. Ashes and aerosols in the troposphere are quickly removed by precipitation, but material injected into the stratosphere is dispersed much more slowly, in the absence of weather systems. Substantial amounts of stratospheric injection can have global effects: after Mount Pinatubo erupted in 1991, global temperatures dropped by about 0.5 °C. The largest eruptions are thought to cause temperature drops of several degrees, and are potentially the cause of some of the known mass extinctions. Eruption column heights are a useful way of measuring eruption intensity since, for a given atmospheric temperature, the column height is proportional to the fourth root of the mass eruption rate. Consequently, given similar conditions, doubling the column height requires an eruption ejecting 16 times as much material per second. The column height of eruptions which have not been observed can be estimated by mapping the maximum distance that pyroclasts of different sizes are carried from the vent: the higher the column, the further ejected material of a particular mass (and therefore size) can be carried. The approximate maximum height of an eruption column is given by the equation H = k(MΔT)^(1/4), where: k is a constant that depends on various properties, such as atmospheric conditions; M is the mass eruption rate; and ΔT is the difference in temperature between the erupting magma and the surrounding atmosphere. Hazards Column collapse Eruption columns may become so laden with dense material that they are too heavy to be supported by convection currents. This can happen suddenly if, for example, the rate at which magma is erupted increases to a point where insufficient air is entrained to support it, or if the magma density suddenly increases as denser magma from lower regions in a stratified magma chamber is tapped. If it does happen, then material reaching the bottom of the convective thrust region can no longer be adequately supported by convection and will fall under gravity, forming a pyroclastic flow or surge which can travel down the slopes of a volcano at speeds of well over 100 km/h. Column collapse is one of the most common and dangerous volcanic hazards in column-creating eruptions. Aircraft Several eruptions have seriously endangered aircraft which have encountered or passed by the eruption column. In two separate incidents in 1982, airliners flew into the upper reaches of an eruption column blasted off by Mount Galunggung, and the ash severely damaged both aircraft. Particular hazards were the ingestion of ash stopping the engines, the sandblasting of the cockpit windows rendering them largely opaque, and the contamination of fuel through the ingestion of ash through pressurisation ducts. The damage to engines is a particular problem since temperatures inside a gas turbine are sufficiently high that volcanic ash is melted in the combustion chamber, and forms a glass coating on components farther downstream of it, for example on turbine blades. In the case of British Airways Flight 9, the aircraft lost power on all four engines, and in the other incident, nineteen days later, three of the four engines failed on a Singapore Airlines 747. In both cases, engines were successfully restarted, but the aircraft were forced to make emergency landings in Jakarta.
Similar damage to aircraft occurred due to an eruption column over Redoubt volcano in Alaska in 1989. Following the eruption of Mount Pinatubo in 1991, aircraft were diverted to avoid the eruption column, but nonetheless fine ash dispersing over a wide area in Southeast Asia caused damage to 16 aircraft, some as far as 1,000 km from the volcano. Eruption columns are not usually visible on weather radar and may be obscured by ordinary clouds or darkness. Because of the risks posed to aviation by eruption columns, there is a network of nine Volcanic Ash Advisory Centers around the world which continuously monitor for eruption columns using data from satellites, ground reports, pilot reports and meteorological models. See also Cryovolcano Enceladus – a volcanically active moon of Saturn Mount Pelée Pele (volcano) Peléan eruption Plinian eruption References Further reading External links USGS information Description of Galunggung eruption column Volcanoes Volcanic eruptions Explosive eruptions Volcanic degassing Tephra
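As a worked illustration of the fourth-root scaling law from the Column heights section above, the following Python sketch encodes H = k(MΔT)^(1/4). The constant k and the input values are arbitrary placeholders, since the text does not supply calibrated numbers; the point is the scaling, which the sketch reproduces exactly.

```python
def column_height(mass_rate, delta_T, k=0.24):
    """Approximate eruption column height H = k * (M * dT)**0.25.
    k is a constant depending on atmospheric conditions; the value here
    is an arbitrary placeholder, not a calibrated constant.
    mass_rate: mass eruption rate M (kg/s); delta_T: temperature
    difference between magma and atmosphere (K)."""
    return k * (mass_rate * delta_T) ** 0.25

# Doubling the column height requires 16x the mass eruption rate:
h1 = column_height(1e6, 900)
h2 = column_height(16e6, 900)
print(h2 / h1)  # -> 2.0
```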
Eruption column
[ "Chemistry" ]
1,573
[ "Explosive eruptions", "Explosions" ]
1,533,070
https://en.wikipedia.org/wiki/Volatility%20smile
A volatility smile is an implied volatility pattern that arises in pricing financial options: the implied volatility parameter needs to be modified for the Black–Scholes formula to fit market prices. In particular, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices (and thus implied volatilities) than what is suggested by standard option pricing models. These options are said to be either deep in-the-money or out-of-the-money. Graphing implied volatilities against strike prices for a given expiry produces a skewed "smile" instead of the expected flat surface. The pattern differs across various markets. Equity options traded in American markets did not show a volatility smile before the Crash of 1987 but began showing one afterwards. It is believed that investor reassessments of the probabilities of fat-tail events have led to higher prices for out-of-the-money options. This anomaly implies deficiencies in the standard Black–Scholes option pricing model, which assumes constant volatility and log-normal distributions of underlying asset returns. Empirical asset return distributions, however, tend to exhibit fat tails (kurtosis) and skew. Modelling the volatility smile is an active area of research in quantitative finance, and better pricing models such as the stochastic volatility model partially address this issue. A related concept is that of term structure of volatility, which describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is a 3-D plot that plots volatility smile and term structure of volatility in a consolidated three-dimensional surface for all options on a given underlying asset. Implied volatility In the Black–Scholes model, the theoretical value of a vanilla option is a monotonically increasing function of the volatility of the underlying asset. This means it is usually possible to compute a unique implied volatility from a given market price for an option. This implied volatility is best regarded as a rescaling of option prices which makes comparisons between different strikes, expirations, and underlyings easier and more intuitive. When implied volatility is plotted against strike price, the resulting graph is typically downward sloping for equity markets, or valley-shaped for currency markets. For markets where the graph is downward sloping, such as for equity options, the term "volatility skew" is often used. For other markets, such as FX options or equity index options, where the typical graph turns up at either end, the more familiar term "volatility smile" is used. For example, the implied volatility for upside (i.e. high strike) equity options is typically lower than for at-the-money equity options. However, the implied volatilities of options on foreign exchange contracts tend to rise in both the downside and upside directions. In equity markets, a small tilted smile is often observed near the money as a kink in the general downward sloping implied volatility graph. Sometimes the term "smirk" is used to describe a skewed smile. Market practitioners use the term implied volatility to indicate the volatility parameter for the ATM (at-the-money) option. Adjustments to this value are undertaken by incorporating the values of Risk Reversal and Flys (Skews) to determine the actual volatility measure that may be used for options with a delta which is not 50.
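Because the Black–Scholes value of a vanilla option increases monotonically with volatility, as noted above, the implied volatility can be backed out from a market price by simple root-finding. The following is a minimal Python sketch; the quoted price, strike, rate, and maturity are hypothetical inputs, not market data.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection works because the call price is monotonically
    increasing in sigma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Invert a hypothetical market quote:
print(implied_vol(price=10.45, S=100, K=100, T=1.0, r=0.05))  # -> about 0.20
```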
Formula The smile is commonly quoted in terms of risk reversals and butterflies relative to the ATM volatility: RR(x) = σcall(x) − σput(x) and BF(x) = (σcall(x) + σput(x))/2 − σATM, where: σcall(x) is the implied volatility at which the x%-delta call is trading in the market; σput(x) is the implied volatility of the x%-delta put; σATM is the At-The-Money Forward volatility at which ATM calls and puts are trading in the market. Risk reversals are generally quoted as x% delta risk reversal and essentially are long an x% delta call and short an x% delta put. Butterfly, on the other hand, is a strategy consisting of: a −y% delta fly, which means long a y% delta call, long a y% delta put, short one ATM call and short one ATM put (small hat shape). Implied volatility and historical volatility It is helpful to note that implied volatility is related to historical volatility, but the two are distinct. Historical volatility is a direct measure of the movement of the underlying's price (realized volatility) over recent history (e.g. a trailing 21-day period). Implied volatility, in contrast, is determined by the market price of the derivative contract itself, and not the underlying. Therefore, different derivative contracts on the same underlying have different implied volatilities as a function of their own supply and demand dynamics. For instance, the IBM call option, struck at $100 and expiring in 6 months, may have an implied volatility of 18%, while the put option struck at $105 and expiring in 1 month may have an implied volatility of 21%. At the same time, the historical volatility for IBM for the previous 21 day period might be 17% (all volatilities are expressed in annualized percentage moves). Term structure of volatility For options of different maturities, we also see characteristic differences in implied volatility. However, in this case, the dominant effect is related to the market's implied impact of upcoming events. For instance, it is well-observed that realized volatility for stock prices rises significantly on the day that a company reports its earnings. Correspondingly, we see that implied volatility for options will rise during the period prior to the earnings announcement, and then fall again as soon as the stock price absorbs the new information. Options that mature earlier exhibit a larger swing in implied volatility (sometimes called "vol of vol") than options with longer maturities. Other option markets show other behavior. For instance, options on commodity futures typically show increased implied volatility just prior to the announcement of harvest forecasts. Options on US Treasury Bill futures show increased implied volatility just prior to meetings of the Federal Reserve Board (when changes in short-term interest rates are announced). The market incorporates many other types of events into the term structure of volatility. For instance, the impact of upcoming results of a drug trial can cause implied volatility swings for pharmaceutical stocks. The anticipated resolution date of patent litigation can impact technology stocks, etc. Volatility term structures list the relationship between implied volatilities and time to expiration. The term structures provide another method for traders to gauge cheap or expensive options. Implied volatility surface It is often useful to plot implied volatility as a function of both strike price and time to maturity. The result is a two-dimensional curved surface plotted in three dimensions whereby the current market implied volatility (z-axis) for all options on the underlying is plotted against the price (y-axis) and time to maturity (x-axis "DTM").
This defines the absolute implied volatility surface; changing coordinates so that the price is replaced by delta yields the relative implied volatility surface. The implied volatility surface simultaneously shows both the volatility smile and the term structure of volatility. Option traders use an implied volatility plot to quickly determine the shape of the implied volatility surface, and to identify any areas where the slope of the plot (and therefore relative implied volatilities) seems out of line. The graph shows an implied volatility surface for all the put options on a particular underlying stock price. The z-axis represents implied volatility in percent, and the x and y axes represent the option delta and the days to maturity. Note that to maintain put–call parity, a 20 delta put must have the same implied volatility as an 80 delta call. For this surface, we can see that the underlying symbol has both volatility skew (a tilt along the delta axis), as well as a volatility term structure indicating an anticipated event in the near future. Evolution: Sticky An implied volatility surface is static: it describes the implied volatilities at a given moment in time. How the surface changes as the spot changes is called the evolution of the implied volatility surface. Common heuristics include: "sticky strike" (or "sticky-by-strike", or "stick-to-strike"): if spot changes, the implied volatility of an option with a given absolute strike does not change. "sticky moneyness" (aka, "sticky delta"; see moneyness for why these are equivalent terms): if spot changes, the implied volatility of an option with a given moneyness (delta) does not change. (Delta here refers to the volatility adjustment relative to the ATM strike, which is set at 100% moneyness as closest to the current underlying asset price and carries zero adjustment, rather than to the Greek delta.) So if spot moves from $100 to $120, sticky strike would predict that the implied volatility of a $120 strike option would be whatever it was before the move (though it has moved from being OTM to ATM), while sticky delta would predict that the implied volatility of the $120 strike option would be whatever the $100 strike option's implied volatility was before the move (as these are both ATM at the time). Modeling volatility Methods of modelling the volatility smile include stochastic volatility models and local volatility models. For a discussion of the various alternative approaches developed, see the references and external links below. See also Volatility (finance) Stochastic volatility SABR volatility model Vanna Volga method Heston model Implied binomial tree Implied trinomial tree Edgeworth binomial tree Volatility risk References External links Emanuel Derman, The Volatility Smile and Its Implied Tree (RISK, 7-2 February 1994, pp. 139–145, pp. 32–39) (PDF) Mark Rubinstein, Implied Binomial Trees (PDF) Damiano Brigo, Fabio Mercurio, Francesco Rapisarda and Giulio Sartorelli, Volatility Smile Modeling with Mixture Stochastic Differential Equations (PDF) Visualization of the volatility smile C. Grunspan, "Asymptotics Expansions for the Implied Lognormal Volatility: a Model Free Approach" Y. Li, "A mean bound financial model and options pricing" examples of commodity volatility smiles/skews Mathematical finance Options (finance)
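As a small illustration of the quoting conventions in the Formula section, the x%-delta call and put volatilities can be recovered from ATM, risk-reversal, and butterfly quotes by solving the two linear relations. A minimal Python sketch with hypothetical quote values:

```python
def smile_from_quotes(atm, rr, bf):
    """Recover x%-delta call/put vols from market quotes, using
    RR = sigma_call - sigma_put and BF = (sigma_call + sigma_put)/2 - ATM.
    Returns (sigma_call, sigma_put)."""
    sigma_call = atm + bf + 0.5 * rr
    sigma_put = atm + bf - 0.5 * rr
    return sigma_call, sigma_put

# Hypothetical FX-style 25-delta quotes (in volatility points):
atm, rr25, bf25 = 0.10, -0.015, 0.004   # negative RR: puts trade over calls
print(smile_from_quotes(atm, rr25, bf25))  # -> (0.0965, 0.1115)
```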
Volatility smile
[ "Mathematics" ]
2,226
[ "Applied mathematics", "Mathematical finance" ]
1,533,113
https://en.wikipedia.org/wiki/Sp%C3%B6rer%20Minimum
The Spörer Minimum is a hypothesized 90-year span of low solar activity, from about 1460 until 1550, which was identified and named by John A. Eddy in a landmark 1976 paper published in Science titled "The Maunder Minimum". It occurred before sunspots had been directly observed and was discovered instead by analysis of the proportion of carbon-14 in tree rings, which is strongly correlated with solar activity. It is named for the German astronomer Gustav Spörer. History of solar activity Solar variation can be quantified using sunspot counts, but this measure is only reliable for periods after records of sunspot observations were routinely made by western astronomers. For periods before sunspot records, solar activity can be found from proxy methods, most notably the production of radioisotopes in the Earth's atmosphere from interaction with cosmic rays, which are modulated by the solar activity. The carbon-14 method used to identify the minimum makes use of the fact that high solar activity is correlated with low production of carbon-14 in the atmosphere. Wilfried Schröder published a table of observed aurora borealis during the Spörer Minimum which showed that the solar cycle was active. Miyahara et al. likewise found the 11-year solar cycle was still prominently detected in the carbon-14 record even during the minimum. The amplitude of the 11-year cycle seems to have been modulated only from 1455 to 1510. Jiang and Xu look at sunspot records and aurora sightings from China during the period and suggest that a minimum from 1450 to 1560 is specious. They suggest dates for the sunspot minimum of 1400 to 1510. Possible correlation with climate Like the subsequent Maunder Minimum, the Spörer Minimum coincided with a time when Earth's climate was colder than average. This correlation has generated hypotheses that low solar activity produces cooler-than-average global temperatures, although Jiang and Xu point out that while the period 1430–1520 (starting slightly before the Spörer minimum) was indeed colder than average in China, the period 1520–1620 (the second half of the minimum) was warmer than average. A specific mechanism by which solar activity results in climate change has not been established. One theory is modification of the Arctic Oscillation/North Atlantic Oscillation due to a change in solar output. References 15th century 16th century History of climate variability and change Solar phenomena
Spörer Minimum
[ "Physics" ]
503
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
1,533,133
https://en.wikipedia.org/wiki/Triplet%20state
In quantum mechanics, a triplet state, or spin triplet, is the quantum state of an object such as an electron, atom, or molecule, having a quantum spin S = 1. It has three allowed values of the spin's projection along a given axis, mS = −1, 0, or +1, giving the name "triplet". Spin, in the context of quantum mechanics, is not a mechanical rotation but a more abstract concept that characterizes a particle's intrinsic angular momentum. It is particularly important for systems at atomic length scales, such as individual atoms, protons, or electrons. A triplet state occurs in cases where the spins of two unpaired electrons, each having spin s = 1/2, align to give S = 1, in contrast to the more common case of two electrons aligning oppositely to give S = 0, a spin singlet. Most molecules encountered in daily life exist in a singlet state because all of their electrons are paired, but molecular oxygen is an exception. At room temperature, O2 exists in a triplet state, which can only undergo a chemical reaction by making the forbidden transition into a singlet state. This makes it kinetically nonreactive despite being thermodynamically one of the strongest oxidants. Photochemical or thermal activation can bring it into the singlet state, which makes it kinetically as well as thermodynamically a very strong oxidant. Two spin-1/2 particles In a system with two spin-1/2 particles (for example, the proton and electron in the ground state of hydrogen), measured on a given axis, each particle can be either spin up or spin down, so the system has four basis states in all: |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩, using the single-particle spins to label the basis states, where the first arrow and second arrow in each combination indicate the spin direction of the first particle and second particle respectively. More rigorously, the basis states are |s1, m1⟩|s2, m2⟩, where s1 and s2 are the spins of the two particles, and m1 and m2 are their projections onto the z axis. Since for spin-1/2 particles the |s, m⟩ basis states span a 2-dimensional space, the |s1, m1⟩|s2, m2⟩ basis states span a 4-dimensional space. Now the total spin and its projection onto the previously defined axis can be computed using the rules for adding angular momentum in quantum mechanics, using the Clebsch–Gordan coefficients. In general, the total-spin states |S, M⟩ are linear combinations of the product states with coefficients given by the Clebsch–Gordan coefficients; substituting in the four basis states returns the possible values for total spin along with their representation in the arrow basis. There are three states with total spin angular momentum 1: |1, 1⟩ = |↑↑⟩, |1, 0⟩ = (|↑↓⟩ + |↓↑⟩)/√2, and |1, −1⟩ = |↓↓⟩, which are symmetric, and a fourth state with total spin angular momentum 0: |0, 0⟩ = (|↑↓⟩ − |↓↑⟩)/√2, which is antisymmetric. The result is that a combination of two spin-1/2 particles can carry a total spin of 1 or 0, depending on whether they occupy a triplet or singlet state. A mathematical viewpoint In terms of representation theory, what has happened is that the two conjugate 2-dimensional spin representations of the spin group SU(2) = Spin(3) (as it sits inside the 3-dimensional Clifford algebra) have tensored to produce a 4-dimensional representation. The 4-dimensional representation descends to the usual orthogonal group SO(3) and so its objects are tensors, corresponding to the integrality of their spin. The 4-dimensional representation decomposes into the sum of a one-dimensional trivial representation (singlet, a scalar, spin zero) and a three-dimensional representation (triplet, spin 1) that is nothing more than the standard representation of SO(3) on R³. Thus the "three" in triplet can be identified with the three rotation axes of physical space.
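The triplet/singlet decomposition above can be checked numerically by diagonalizing the total-spin operator S² on the four-dimensional two-particle space. A minimal Python sketch using NumPy, with ħ = 1:

```python
import numpy as np

# Pauli matrices; single-particle spin operators are S_i = sigma_i / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I = np.eye(2)

# Total spin components on the 4-dimensional two-particle space.
Sx = np.kron(sx, I) + np.kron(I, sx)
Sy = np.kron(sy, I) + np.kron(I, sy)
Sz = np.kron(sz, I) + np.kron(I, sz)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz

# Eigenvalues of S^2 are S(S+1): 2 (triplet, threefold) and 0 (singlet).
vals = np.linalg.eigvalsh(S2)
print(np.round(vals, 10))  # -> [0. 2. 2. 2.]
```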
See also Singlet state Doublet state Diradical Angular momentum Pauli matrices Spin multiplicity Spin quantum number Spin-1/2 Spin tensor Spinor References Quantum states Rotational symmetry Spectroscopy
Triplet state
[ "Physics", "Chemistry" ]
780
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Quantum mechanics", "Quantum states", "Spectroscopy", "Symmetry", "Rotational symmetry" ]
1,533,140
https://en.wikipedia.org/wiki/Dalton%20Minimum
The Dalton Minimum was a period of low sunspot count, representing low solar activity, named after the English meteorologist John Dalton, lasting from about 1790 to 1830 or 1796 to 1820, corresponding to the period from solar cycle 4 to solar cycle 7. While the Dalton Minimum is often compared with the Maunder Minimum, its sunspot number was slightly higher, and reported sunspots were distributed in both solar hemispheres, unlike during the Maunder Minimum. Coronal streamers are visually confirmed in Ezra Ames's and José Joaquin de Ferrer's eclipse drawings from 1806, indicating that the minimum's magnetic field was similar not to that of the Maunder Minimum but to that of the modern solar cycles. Temperature Like the Maunder Minimum and Spörer Minimum, the Dalton Minimum coincided with a period of lower-than-average global temperatures. During that period, there was a variation of temperature of about 1 °C in Germany. The cause of the lower-than-average temperatures and their possible relation to the low sunspot count are not well understood. Recent papers have suggested that a rise in volcanism was largely responsible for the cooling trend. While the Year Without a Summer, in 1816, occurred during the Dalton Minimum, the prime reason for that year's cool temperatures was the highly explosive eruption the previous year of Mount Tambora in Indonesia, which was one of the two largest eruptions in the past 2000 years. One must also consider that the rise in volcanism may have been triggered by lower levels of solar output, as there is a weak but statistically significant link between decreased solar output and an increase in volcanism. See also Solar cycle Notes References Hayakawa, Hisashi et al. (2020a) "Thaddäus Derfflinger's Sunspot Observations during 1802–1824: A Primary Reference to Understand the Dalton Minimum", in The Astrophysical Journal, 890, 98. Hayakawa, Hisashi et al. (2020b) "The Solar Corona during the Total Eclipse on 1806 June 16: Graphical Evidence of the Coronal Structure during the Dalton Minimum", in The Astrophysical Journal, 900, 114. Komitov, Boris and Vladimir Kaftan (2004) "The Sunspot Activity in the Last Two Millennia on the Basis of Indirect and Instrumental Indexes: Time Series Models and Their Extrapolations for the 21st Century", in Proceedings of the International Astronomical Union, 2004, pp. 113–114. Wagner, Sebastian and Eduardo Zorita (2005) "The influence of volcanic, solar and CO2 forcing on the temperatures in the Dalton Minimum (1790–1830): a model study", Climate Dynamics v. 25, pp. 205–218, doi 10.1007/s00382-005-0029-0. Wilson, Robert M. (nd) "Volcanism, Cold Temperature, and Paucity of Sunspot Observing Days (1818–1858): A Connection?", The Smithsonian/NASA Astrophysics Data System, accessed February 2009. 1790s 19th century History of climate variability and change Solar phenomena
Dalton Minimum
[ "Physics" ]
624
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
1,533,184
https://en.wikipedia.org/wiki/Chemical%20decomposition
Chemical decomposition, or chemical breakdown, is the process or effect of simplifying a single chemical entity (normal molecule, reaction intermediate, etc.) into two or more fragments. Chemical decomposition is usually regarded and defined as the exact opposite of chemical synthesis. In short, the chemical reaction in which two or more products are formed from a single reactant is called a decomposition reaction. The details of a decomposition process are not always well defined. Nevertheless, some activation energy is generally needed to break the involved bonds, and as such, higher temperatures generally accelerate decomposition. The net reaction can be an endothermic process, or in the case of spontaneous decompositions, an exothermic process. The stability of a chemical compound is eventually limited when exposed to extreme environmental conditions such as heat, radiation, humidity, or the acidity of a solvent. Because of this, chemical decomposition is often an undesired chemical reaction. However, chemical decomposition can be desired, such as in various waste treatment processes. For example, decomposition is employed in several analytical techniques, notably mass spectrometry, traditional gravimetric analysis, and thermogravimetric analysis. Additionally, decomposition reactions are used today for a number of other reasons in the production of a wide variety of products. One of these is the explosive breakdown reaction of sodium azide (NaN3) into nitrogen gas (N2) and sodium (Na). It is this process which powers the life-saving airbags present in virtually all of today's automobiles. Decomposition reactions can be generally classed into three categories: thermal, electrolytic, and photolytic decomposition reactions. Reaction formula In the breakdown of a compound into its constituent parts, the generalized reaction for chemical decomposition is: AB → A + B (AB represents the reactant that begins the reaction, and A and B represent the products of the reaction) An example is the electrolysis of water to the gases hydrogen and oxygen: 2 H2O(l) → 2 H2(g) + O2(g) Additional examples An example of a spontaneous (without addition of an external energy source) decomposition is that of hydrogen peroxide, which slowly decomposes into water and oxygen: 2 H2O2 → 2 H2O + O2 This reaction is one of the exceptions to the endothermic nature of decomposition reactions. Other reactions involving decomposition do require the input of external energy. This energy can be in the form of heat, radiation, electricity, or light. The latter is the reason some chemical compounds, such as many prescription medicines, are kept and stored in dark bottles, which reduce or eliminate the possibility of light reaching them and initiating decomposition. When heated, carbonates will decompose. A notable exception is carbonic acid (H2CO3). Commonly seen as the "fizz" in carbonated beverages, carbonic acid will spontaneously decompose over time into carbon dioxide and water. The reaction is written as: H2CO3 → H2O + CO2 Other carbonates will decompose when heated to produce their corresponding metal oxide and carbon dioxide. The following equation is an example, where M represents the given metal: MCO3 → MO + CO2 Metal chlorates also decompose when heated. In this type of decomposition reaction, a metal chloride and oxygen gas are the products.
Here, again, M represents the metal: 2 MClO3 → 2 MCl + 3 O2 A common chlorate decomposition is the reaction of potassium chlorate, where oxygen is a product. This can be written as: 2 KClO3 → 2 KCl + 3 O2 See also Analytical chemistry Thermal decomposition References External links https://quizlet.com/42968634/types-of-decomposition-reactions-flash-cards/ Biodegradation database (PDF) Inorganic chemistry Organic chemistry Chemical reactions
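As a small numeric illustration of the carbonate decomposition stoichiometry above, the following Python sketch computes the mass of CO2 released when calcium carbonate fully decomposes. The molar masses used are standard approximate values; the 1 kg input is an illustrative choice.

```python
# Mass balance for CaCO3 -> CaO + CO2, using approximate molar masses (g/mol).
M = {"CaCO3": 100.09, "CaO": 56.08, "CO2": 44.01}

def co2_released(mass_caco3_g):
    """Grams of CO2 released by fully decomposing a given mass of CaCO3
    (1:1 molar ratio between the reactant and each product)."""
    moles = mass_caco3_g / M["CaCO3"]
    return moles * M["CO2"]

print(co2_released(1000.0))  # ~439.7 g of CO2 per kg of calcium carbonate
```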
Chemical decomposition
[ "Chemistry" ]
817
[ "nan" ]
1,533,196
https://en.wikipedia.org/wiki/Photoelasticity
In materials science, photoelasticity describes changes in the optical properties of a material under mechanical deformation. It is a property of all dielectric media and is often used to experimentally determine the stress distribution in a material. History The photoelastic phenomenon was first discovered by the Scottish physicist David Brewster, who immediately recognized it as stress-induced birefringence. That diagnosis was confirmed in a direct refraction experiment by Augustin-Jean Fresnel. Experimental frameworks were developed at the beginning of the twentieth century with the works of E.G. Coker and L.N.G. Filon of University of London. Their book Treatise on Photoelasticity, published in 1930 by Cambridge Press, became a standard text on the subject. Between 1930 and 1940, many other books appeared on the subject, including books in Russian, German and French. Max M. Frocht published Photoelasticity, the classic two-volume work in the field. At the same time, much development occurred in the field: great improvements were achieved in technique, and the equipment was simplified. With refinements in the technology, photoelastic experiments were extended to determining three-dimensional states of stress. In parallel to developments in experimental technique, the first phenomenological description of photoelasticity was given in 1890 by Friedrich Pockels; however, this was proved inadequate almost a century later by Nelson and Lax, as the description by Pockels considered only the effect of mechanical strain on the optical properties of the material. With the advent of the digital polariscope, made possible by light-emitting diodes, continuous monitoring of structures under load became possible. This led to the development of dynamic photoelasticity, which has contributed greatly to the study of complex phenomena such as fracture of materials. Applications Photoelasticity has been used for a variety of stress analyses and even for routine use in design, particularly before the advent of numerical methods such as finite elements or boundary elements. Digitization of polariscopy enables fast image acquisition and data processing, which allows its industrial applications to control the quality of manufacturing processes for materials such as glass and polymer. Dentistry utilizes photoelasticity to analyze strain in denture materials. Photoelasticity can successfully be used to investigate the highly localized stress state within masonry or in proximity of a rigid line inclusion (stiffener) embedded in an elastic medium. In the former case, the problem is nonlinear due to the contacts between bricks, while in the latter case the elastic solution is singular, so that numerical methods may fail to provide correct results. These can be obtained through photoelastic techniques. Dynamic photoelasticity integrated with high-speed photography is utilized to investigate fracture behavior in materials. Another important application of photoelasticity experiments is to study the stress field around bi-material notches. Bi-material notches exist in many engineering applications like welded or adhesively bonded structures. For example, some elements of Gothic cathedrals previously thought decorative were first proved essential for structural support by photoelastic methods.
Formal definition For a linear dielectric material, the change in the inverse permittivity tensor with respect to the deformation (the gradient of the displacement u) is described by Δ(ε⁻¹)_ij = P_ijkl ∂u_k/∂x_l, where P_ijkl is the fourth-rank photoelasticity tensor, u is the linear displacement from equilibrium, and ∂/∂x_l denotes differentiation with respect to the Cartesian coordinate x_l. For isotropic materials, this definition simplifies to Δ(ε⁻¹)_ij = p_ijkl s_kl, where p_ijkl is the symmetric part of the photoelastic tensor (the photoelastic strain tensor) and s_kl is the linear strain. The antisymmetric part of P_ijkl is known as the roto-optic tensor. From either definition, it is clear that deformations to the body may induce optical anisotropy, which can cause an otherwise optically isotropic material to exhibit birefringence. Although the symmetric photoelastic tensor is most commonly defined with respect to mechanical strain, it is also possible to express photoelasticity in terms of the mechanical stress. Experimental principles The experimental procedure relies on the property of birefringence, as exhibited by certain transparent materials. Birefringence is a phenomenon in which a ray of light passing through a given material experiences two refractive indices. The property of birefringence (or double refraction) is observed in many optical crystals. Upon the application of stresses, photoelastic materials exhibit the property of birefringence, and the magnitude of the refractive indices at each point in the material is directly related to the state of stresses at that point. Information such as maximum shear stress and its orientation is available by analyzing the birefringence with an instrument called a polariscope. When a ray of light passes through a photoelastic material, its electromagnetic wave components are resolved along the two principal stress directions and each component experiences a different refractive index due to the birefringence. The difference in the refractive indices leads to a relative phase retardation between the two components. Assuming a thin specimen made of isotropic materials, where two-dimensional photoelasticity is applicable, the magnitude of the relative retardation is given by the stress-optic law: Δ = (2πt/λ) C (σ1 − σ2), where Δ is the induced retardation, C is the stress-optic coefficient, t is the specimen thickness, λ is the vacuum wavelength, and σ1 and σ2 are the first and second principal stresses, respectively. The retardation changes the polarization of transmitted light. The polariscope combines the different polarization states of light waves before and after passing the specimen. Due to optical interference of the two waves, a fringe pattern is revealed. The fringe order N = Δ/(2π) depends on the relative retardation. By studying the fringe pattern one can determine the state of stress at various points in the material. For materials that do not show photoelastic behavior, it is still possible to study the stress distribution. The first step is to build a model, using photoelastic materials, which has geometry similar to the real structure under investigation. The loading is then applied in the same way to ensure that the stress distribution in the model is similar to the stress in the real structure. Isoclinics and isochromatics Isoclinics are the loci of the points in the specimen along which the principal stresses are in the same direction. Isochromatics are the loci of the points along which the difference in the first and second principal stress remains the same.
Isoclinics and isochromatics Isoclinics are the loci of the points in the specimen along which the principal stresses are in the same direction. Isochromatics are the loci of the points along which the difference in the first and second principal stress remains the same. Thus they are the lines which join the points with equal maximum shear stress magnitude. Two-dimensional photoelasticity Photoelasticity can describe both three-dimensional and two-dimensional states of stress. However, examining photoelasticity in three-dimensional systems is more involved than in two-dimensional or plane-stress systems, so the present section deals with photoelasticity in a plane stress system. This condition is achieved when the thickness of the prototype is much smaller than its dimensions in the plane. Thus one is only concerned with stresses acting parallel to the plane of the model, as the other stress components are zero. The experimental setup varies from experiment to experiment. The two basic kinds of setup used are the plane polariscope and the circular polariscope. The working principle of a two-dimensional experiment allows the measurement of retardation, which can be converted to the difference between the first and second principal stresses and their orientation. To obtain the values of each individual stress component, a technique called stress separation is required. Several theoretical and experimental methods are utilized to provide additional information to solve for individual stress components. Plane polariscope setup The setup consists of two linear polarizers and a light source. The light source can either emit monochromatic light or white light depending upon the experiment. First the light is passed through the first polarizer, which converts the light into plane polarized light. The apparatus is set up in such a way that this plane polarized light then passes through the stressed specimen. This light then follows, at each point of the specimen, the direction of principal stress at that point. The light is then made to pass through the analyzer and we finally get the fringe pattern. The fringe pattern in a plane polariscope setup consists of both the isochromatics and the isoclinics. The isoclinics change with the orientation of the polariscope while there is no change in the isochromatics. Circular polariscope setup In a circular polariscope setup two quarter-wave plates are added to the experimental setup of the plane polariscope. The first quarter-wave plate is placed between the polarizer and the specimen and the second quarter-wave plate is placed between the specimen and the analyzer. The effect of adding the quarter-wave plate after the source-side polarizer is that we get circularly polarized light passing through the sample. The analyzer-side quarter-wave plate converts the circular polarization state back to linear before the light passes through the analyzer. The basic advantage of a circular polariscope over a plane polariscope is that in a circular polariscope setup we only get the isochromatics and not the isoclinics. This eliminates the problem of differentiating between the isoclinics and the isochromatics. See also Acousto-optic modulator Electrostriction Mechanochromism Photoelastic modulator Polarimetry References External links University of Cambridge Page on Photoelasticity. Laboratory for Physical Modeling of Structures and Photoelasticity (University of Trento, Italy) Build your own polariscope Materials science Mechanical engineering Mechanics Optics
Photoelasticity
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,967
[ "Applied and interdisciplinary physics", "Optics", "Materials science", "Mechanics", " molecular", "Mechanical engineering", "nan", "Atomic", " and optical physics" ]
1,533,197
https://en.wikipedia.org/wiki/RISKS%20Digest
The RISKS Digest, or Forum On Risks to the Public in Computers and Related Systems, is an online periodical published since 1985 by the Committee on Computers and Public Policy of the Association for Computing Machinery. The editor is Peter G. Neumann. It is a moderated forum concerned with the security and safety of computers, software, and technological systems. Security and risk are here taken broadly; RISKS is concerned not merely with so-called security holes in software, but with unintended consequences and hazards stemming from the design (or lack thereof) of automated systems. Other recurring subjects include cryptography and the effects of technically ill-considered public policies. RISKS also publishes announcements and calls for papers from various technical conferences, as well as technical book reviews (usually by Rob Slade, though occasionally by others). Although RISKS is a forum of a computer science association, most contributions are readable and informative to anyone with an interest in the subject. It is heavily read by system administrators and computer security managers, as well as by computer scientists and engineers. The RISKS Digest is published on a frequent but irregular schedule through the moderated Usenet newsgroup comp.risks, which exists solely to carry the Digest. Summaries of the forum appear as columns edited by Neumann in the ACM SIGSOFT Software Engineering Notes (SEN) and the Communications of the ACM (CACM). References External links RISKS Digest web archive RISKS Digest (Usenet newsgroup comp.risks) Google groups interface to comp.risks Risk Safety engineering Computer security procedures Magazines established in 1985 Association for Computing Machinery magazines Professional and trade magazines SRI International Engineering magazines Irregularly published magazines published in the United States 1985 establishments in the United States
RISKS Digest
[ "Engineering" ]
342
[ "Safety engineering", "Cybersecurity engineering", "Systems engineering", "Computer security procedures" ]
1,533,268
https://en.wikipedia.org/wiki/Waste%20hierarchy
The waste (management) hierarchy is a tool used to evaluate processes that protect the environment and conserve resources and energy, ranking actions from most favourable to least favourable. The hierarchy establishes preferred program priorities based on sustainability. To be sustainable, waste management cannot be solved only with technical end-of-pipe solutions; an integrated approach is necessary. The waste management hierarchy indicates an order of preference for action to reduce and manage waste, and is usually presented diagrammatically in the form of a pyramid. The hierarchy captures the progression of a material or product through successive stages of waste management, and represents the latter part of the life-cycle of each product. The aim of the waste hierarchy is to extract the maximum practical benefit from products and to generate the minimum amount of waste. The proper application of the waste hierarchy can have several benefits. It can help prevent emissions of greenhouse gases, reduce pollutants, save energy, conserve resources, create jobs and stimulate the development of green technologies. Life-cycle thinking All products and services have environmental impacts, from the extraction of raw materials for production to manufacture, distribution, use and disposal. Following the waste hierarchy will generally lead to the most resource-efficient and environmentally sound choice, but in some cases refining decisions within the hierarchy, or departing from it, can lead to better environmental outcomes. Life-cycle thinking and assessment can be used to support decision-making in the area of waste management and to identify the best environmental options. They can help policy makers understand the benefits and trade-offs they face when making decisions on waste management strategies. Life-cycle assessment provides an approach to ensure that the best outcome for the environment can be identified and put in place. It involves looking at all stages of a product's life to find where improvements can be made to reduce environmental impacts and improve the use or reuse of resources. A key goal is to avoid actions that shift negative impacts from one stage to another. Life-cycle thinking can be applied to the five stages of the waste management hierarchy. For example, life-cycle analysis has shown that it is often better for the environment to replace an old washing machine, despite the waste generated, than to continue to use an older machine which is less energy-efficient. This is because a washing machine's greatest environmental impact occurs during its use phase. Buying an energy-efficient machine and using low-temperature detergent reduce environmental impacts. The European Union Waste Framework Directive has introduced the concept of life-cycle thinking into waste policies. This dual approach gives a broader view of all environmental aspects and ensures any action has an overall benefit compared to other options. The actions taken to deal with waste along the hierarchy should be compatible with other environmental initiatives. European Union The European waste hierarchy refers to the five steps included in Article 4 of the Waste Framework Directive: Prevention: preventing and reducing waste generation. Reuse and preparation for reuse: giving products a second life before they become waste. This might involve reusing the product as-is, or reuse with modification, such as repurposing, refurbishing and remanufacture.
Recycling: any recovery operation by which waste materials are reprocessed into products, materials or substances, whether for the original or other purposes. It includes composting but does not include incineration. Recovery: some waste incineration, counted as recovery only for the more energy-efficient incinerators. Disposal: processes to dispose of waste, be it landfilling, incineration, pyrolysis, gasification or other final treatment options. According to the Waste Framework Directive, the European waste hierarchy is legally binding, except in cases where specific waste streams may be required to depart from the hierarchy. Any such departure should be justified on the basis of life-cycle thinking. History The waste hierarchy was a concept in environmental literature and in some EU member states' environmental legislation, but before the Waste Framework Directive of 2008 it was not part of European legislation. The waste framework directive of 1975 contained no reference to a waste hierarchy as such; however, that directive (1975/442/EEC) introduced for the first time elements of the waste hierarchy concept into European waste policy, emphasizing as priorities the importance of waste minimization and the protection of the environment and human health. Following the 1975 Directive, European Union policy and legislation adapted to the principles of the waste hierarchy. An early formulation of the waste hierarchy concept is known as 'Lansink's Ladder', named after the member of the Dutch Parliament who proposed it in 1979; it was incorporated into Dutch policy in 1993. In 1989, it was formalized into a hierarchy of management options in the European Commission's Community Strategy for Waste Management, and this waste strategy was further endorsed in the Commission's review of 1996. In the first legislative proposals of 2006 the European Commission suggested a 3-step hierarchy composed of 1) prevention and reuse, 2) recycling and recovery (with incineration) and 3) disposal. This was heavily criticised because it put recycling at the same level as incineration, consistent with the European Commission's traditional pro-incineration position. Pressure from NGOs and member states managed to turn the initial non-binding 3-step hierarchy into a quasi-binding 5-step hierarchy. In 2008, the European Union introduced the new five-step waste hierarchy into its waste legislation, Directive 2008/98/EC, which member states must introduce into national waste management laws. Article 4 of the directive lays down a five-step hierarchy of waste management options which must be applied by member states in this priority order. Waste prevention, as the preferred option, is followed by reuse, recycling, recovery (including energy recovery) and, as a last option, safe disposal. Among engineers, a similar hierarchy of waste management has been known as the ARRE strategy: avoid, reduce, recycle, eliminate. Challenges for local and regional authorities The task of implementing the waste hierarchy in waste management practices within a country may be delegated to different levels of government (national, regional, local) and to other actors, including industry, private companies and households. Local and regional authorities can be particularly challenged by the following issues when applying the waste hierarchy approach. A coherent waste management strategy must be set up. Separate collection and sorting systems for many different waste streams need to be established.
Adequate treatment and disposal facilities must be established. Effective horizontal co-operation between local authorities and municipalities, and vertical co-operation between the different levels of government (local to regional and, when beneficial, also national), need to be established. Financing must be found for establishing or upgrading expensive sustainable waste management infrastructure to address the needs of managing waste. A lack of data on waste management strategies must be overcome, and monitoring requirements must be met, to implement the waste programs. The enforcement and control of business plans and practices must be established and applied to maximize benefits to the environment and human health. A lack of administrative capacity at the regional and local level, and a lack of finances, information, and technical expertise, must be overcome for effective implementation and success of waste management policies. Source reduction Source reduction involves efforts to reduce hazardous waste and other materials by modifying industrial production. Source reduction methods involve changes in manufacturing technology, raw material inputs, and product formulation. At times, the term "pollution prevention" may refer to source reduction. Another method of source reduction is to increase incentives for recycling. Many communities in the United States are implementing variable-rate pricing for waste disposal (also known as Pay As You Throw – PAYT), which has been effective in reducing the size of the municipal waste stream. Source reduction is typically measured by efficiencies and cutbacks in waste. Toxics use reduction is a more controversial approach to source reduction that targets and measures reductions in the upstream use of toxic materials. Toxics use reduction emphasizes the more preventive aspects of source reduction but, due to its emphasis on toxic chemical inputs, has been opposed more vigorously by chemical manufacturers. Toxics use reduction programs have been set up by legislation in some states, e.g., Massachusetts, New Jersey, and Oregon. The 3 R's represent the waste hierarchy, which lists the best ways of managing waste from the most to the least desirable. Many of the things currently thrown away could be reused with just a little thought and imagination. See also Notes and references External links Waste unit in DG Environment Directive 2008/98/EC on waste (Waste Framework Directive) Directorate General for the Environment Letsrecycle, Letsrecycle.com article on European Debate on Waste Hierarchy Getting to Zero Waste Waste management Waste management concepts Waste legislation in the European Union Hierarchy Industrial ecology Reuse Recycling Waste minimisation
Waste hierarchy
[ "Chemistry", "Engineering" ]
1,755
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
1,533,475
https://en.wikipedia.org/wiki/South%20Pacific%20convergence%20zone
The South Pacific Convergence Zone (SPCZ), a reverse-oriented monsoon trough, is a band of low-level convergence, cloudiness and precipitation extending from the Western Pacific Warm Pool at the maritime continent south-eastwards towards French Polynesia and as far as the Cook Islands (160°W, 20°S). The SPCZ is a portion of the Intertropical Convergence Zone (ITCZ), which lies in a band extending east–west near the Equator, but it can be more extratropical in nature, especially east of the International Date Line. It is considered the largest and most important piece of the ITCZ, and it depends less upon heating from a nearby landmass during the summer than any other portion of the monsoon trough. The SPCZ can affect the precipitation on Polynesian islands in the southwest Pacific Ocean, so it is important to understand how the SPCZ behaves with respect to large-scale, global climate phenomena such as the ITCZ, the El Niño–Southern Oscillation, and the Interdecadal Pacific Oscillation (IPO), a portion of the Pacific decadal oscillation. Position The SPCZ occurs where the southeast trades from transitory anticyclones to the south meet with the semipermanent easterly flow from the eastern South Pacific anticyclone. The SPCZ exists in summer and winter but can change its orientation and location. It is often distinct from the ITCZ over Australia, but at times they become one continuous zone of convergence. The location of the SPCZ is affected by ENSO and Interdecadal Pacific Oscillation conditions. It generally stretches from the Solomon Islands through Vanuatu, Fiji, Samoa, and Tonga. Low-level convergence along this band forms cloudiness as well as showers and thunderstorms. Thunderstorm activity, or convection, within the band depends upon the season: the more equatorward portion is most active in the Southern Hemisphere summer, and the more poleward portion is most active during the transition seasons of fall and spring. The convergence zone shifts east or west depending on the existence of El Niño, or the phase of ENSO. Measuring SPCZ position The climatological position can be estimated by computing the SPCZ's mean position over 30 or more years. There are several metrics for measuring the position of the SPCZ. The location of maximum rainfall, the maximum of low-level convergence, the maximum of 500 hPa vertical motion, and the minimum in outgoing longwave radiation (OLR) are four indicators of the SPCZ axis, and they agree qualitatively with one another.
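As a small illustration of these indicators, the following Python sketch estimates the SPCZ axis as the latitude of minimum outgoing longwave radiation at each longitude, one of the four metrics listed above. The gridded OLR field here is random stand-in data; a real analysis would use a multi-year satellite-derived OLR climatology:

    import numpy as np

    # Hypothetical South Pacific grid: 30S to the Equator, 140E east to 140W.
    lats = np.arange(-30.0, 0.1, 2.5)
    lons = np.arange(140.0, 220.1, 2.5)

    # Stand-in time-averaged OLR field (W/m^2); low OLR marks deep convection.
    rng = np.random.default_rng(0)
    olr = 240.0 + 30.0 * rng.random((lats.size, lons.size))

    def spcz_axis_latitude(olr, lats):
        """Latitude of the OLR minimum at each longitude (the SPCZ axis)."""
        return lats[np.argmin(olr, axis=0)]

    axis = spcz_axis_latitude(olr, lats)  # one latitude per longitude

A 30-year climatological position would average such an axis over monthly fields, and interannual shifts could then be composited by SOI or IPO phase.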
Changes in SPCZ position The position of the SPCZ can change on seasonal, interannual, and possibly longer timescales. Observations Research into SPCZ movements of the 20th century has linked them to changes in the IPO and ENSO. Folland et al. (2002) defined an index to describe the Interdecadal Pacific Oscillation (IPO) using sea surface temperature and night marine air temperature, to determine how the SPCZ varies with the IPO. When the IPO index has negative temperature anomalies, the SPCZ is displaced southwest; it moves northeastward when the IPO index has positive temperature anomalies. The Southern Oscillation Index (SOI) is a metric for describing warm- and cold-phase conditions associated with the El Niño–Southern Oscillation (ENSO) and can also describe movements of the position of the SPCZ. Negative SOI index values are associated with warm-phase or El Niño-like conditions and a northeastward displacement of the SPCZ. Positive SOI index values, on the other hand, describe cold-phase or La Niña-like conditions and a southwestward displacement of the SPCZ. Determining the position of the SPCZ over longer timescales in the past (pre-20th century) has been studied using coral records of the southwest Pacific. Linsley et al. (2006) reconstructed sea-surface temperature and sea-surface salinity in the southwest Pacific starting circa 1600 CE by measuring the oxygen isotopic composition of four Porites coral records from Rarotonga and two from Fiji. Coral isotope measurements provide information on both sea surface temperature and sea surface salinity, so they can indicate times of increased or decreased temperature and/or precipitation associated with changes in the position of the SPCZ. Their coral oxygen isotope index indicated an eastward shift of the decadal mean position of the SPCZ since the mid-1800s. A shift of the SPCZ in this direction suggests there were more La Niña-like or cold-phase conditions in the Pacific during this period, often called the Little Ice Age. Additional paleoclimate studies are still needed in order to test the reliability of these coral results. The IPO and ENSO can interact to produce changes in the position of the SPCZ. West of about 140°W, both ENSO (measured with the Southern Oscillation Index) and the IPO strongly influence the SPCZ latitude, but farther east only ENSO is a significant factor. Only near 170°W is there any indication of an interaction between the two factors. Climate modelling Besides observations of the SPCZ and movement in its position, there have been modelling studies as well. Widlansky et al. (2012) used a number of climate models of differing complexity to simulate rainfall bands in the southwest Pacific and to see how their magnitude and areal extent were affected by the SPCZ and ENSO. During El Niño or warm-phase conditions, the SPCZ typically shifted northeastward, with drier conditions on islands to the southwest, in agreement with observations. Conversely, a southwestward shift in rainfall accompanied La Niña or cold-phase events in the simulations. Widlansky et al. (2012) argued that sea surface temperature biases in the models created uncertainty in the rainfall projections and produced what has been named "the double ITCZ problem". The impact of sea surface temperature bias was further investigated by using uncoupled atmospheric models with prescribed sea surface temperatures; those three models, each of differing complexity, showed a less severe double-ITCZ bias than the ensemble of coupled models. Related oceanography At its southeast edge, the circulation around the feature forces a salinity gradient in the ocean, with the fresher and warmer waters of the western Pacific lying to its west. Cooler and saltier waters lie to its east. See also El Niño–Southern Oscillation Monsoon trough Tropical cyclogenesis Tropical cyclone Coral bleaching References World Wide Web Tropical textbook: from trade winds to cyclone (2 vol.), 897 pp., Florent Beucher, 25 May 2010, Météo-France, Print Tropical meteorology Atmospheric dynamics Regional climate effects Physical oceanography
South Pacific convergence zone
[ "Physics", "Chemistry" ]
1,371
[ "Atmospheric dynamics", "Applied and interdisciplinary physics", "Physical oceanography", "Fluid dynamics" ]
1,533,644
https://en.wikipedia.org/wiki/Reactive%20intermediate
In chemistry, a reactive intermediate or an intermediate is a short-lived, high-energy, highly reactive molecule. When generated in a chemical reaction, it will quickly convert into a more stable molecule. Only in exceptional cases can these compounds be isolated and stored, e.g. at low temperatures or by matrix isolation. When their existence is indicated, reactive intermediates can help explain how a chemical reaction takes place. Most chemical reactions take more than one elementary step to complete, and a reactive intermediate is a high-energy, hence unstable, product that exists only in one of the intermediate steps. The series of steps together make up a reaction mechanism. A reactive intermediate differs from a reactant or product or a simple reaction intermediate only in that it cannot usually be isolated but is sometimes observable only through fast spectroscopic methods. It is stable in the sense that an elementary reaction forms the reactive intermediate, and the elementary reaction in the next step is needed to destroy it. When a reactive intermediate is not observable, its existence must be inferred through experimentation. This usually involves changing reaction conditions such as temperature or concentration and applying the techniques of chemical kinetics, chemical thermodynamics, or spectroscopy. Reactive intermediates based on carbon are radicals, carbenes, carbocations, carbanions, arynes, and carbynes. Common features Reactive intermediates have several features in common: low concentration with respect to the reaction substrate and final reaction product; with the exception of carbanions, these intermediates do not obey the Lewis octet rule, hence their high reactivity; they are often generated on chemical decomposition of a chemical compound; it is often possible to prove the existence of such species by spectroscopic means; cage effects have to be taken into account; they are often stabilised by conjugation or resonance; they are often difficult to distinguish from a transition state; their existence can be proven by means of chemical trapping. Carbon Other reactive intermediates Carbenoid Ion-neutral complex Keto anions Nitrenes Oxocarbenium ions Phosphinidenes Phosphoryl nitride Tetrahedral intermediates in carbonyl addition reactions See also Activated complex Transition state References External links Reaction mechanisms
Reactive intermediate
[ "Chemistry" ]
442
[ "Reaction mechanisms", "Organic compounds", "Physical organic chemistry", "Chemical kinetics", "Reactive intermediates" ]
1,533,679
https://en.wikipedia.org/wiki/Cage%20effect
In chemistry, the cage effect (also known as geminate recombination) describes how the properties of a molecule are affected by its surroundings. First introduced by James Franck and Eugene Rabinowitch in 1934, the cage effect suggests that instead of acting as an individual particle, a molecule in solvent is more accurately described as an encapsulated particle. The encapsulated molecules or radicals are called cage pairs or geminate pairs. In order to interact with other molecules, the caged particle must diffuse from its solvent cage. The typical lifetime of a solvent cage is on the order of $10^{-11}$ seconds. Many manifestations of the cage effect exist. In free radical polymerization, radicals formed from the decomposition of an initiator molecule are surrounded by a cage consisting of solvent and/or monomer molecules. Within the cage, the free radicals undergo many collisions, leading to their recombination or mutual deactivation. This can be described by the schematic reaction R–R → [R• •R]cage → R–R (cage recombination) or 2 R• (escape from the cage). Rather than recombining, the free radicals can react with monomer molecules within the cage walls or diffuse out of the cage. In polymers, the probability of a free-radical pair escaping recombination in the cage is 0.01–0.1, compared with 0.3–0.8 in liquids. In unimolecular chemistry, geminate recombination was first studied in the solution phase using iodine molecules and heme proteins. In the solid state, geminate recombination has been demonstrated with small molecules trapped in noble-gas solid matrices and in triiodide crystalline compounds. Cage recombination efficiency The cage effect can be quantitatively described by the cage recombination efficiency $F_c = k_c \big/ \sum_i k_i$. Here $F_c$ is defined as the ratio of the rate constant for cage recombination ($k_c$) to the sum of the rate constants for all cage processes. According to mathematical models, $F_c$ depends on several parameters, including radical size, shape, and solvent viscosity. It is reported that the cage effect will increase with an increase in radical size and a decrease in radical mass. Initiator efficiency In free radical polymerization, the rate of initiation depends on how effective the initiator is. Low initiator efficiency, $f$, is largely attributed to the cage effect. The rate of initiation is described as $R_i = 2 f k_d [I]$, where $R_i$ is the rate of initiation, $f$ is the initiator efficiency, $k_d$ is the rate constant for initiator dissociation, and $[I]$ is the initial concentration of initiator. Initiator efficiency represents the fraction of primary radicals R· that actually contribute to chain initiation. Due to the cage effect, free radicals can undergo mutual deactivation, which produces stable products instead of initiating propagation – reducing the value of $f$. See also Solvent effects Carrier generation and recombination Rate-determining step References Chemistry theories Theoretical chemistry Reaction mechanisms
Cage effect
[ "Chemistry" ]
580
[ "Reaction mechanisms", "Theoretical chemistry", "nan", "Physical organic chemistry", "Chemical kinetics" ]
1,533,815
https://en.wikipedia.org/wiki/Ultra-low-sulfur%20diesel
Ultra-low-sulfur diesel (ULSD) is diesel fuel with substantially lowered sulfur content. Since 2006, almost all of the petroleum-based diesel fuel available in Europe and North America has been of a ULSD type. The move to lower sulfur content allows for the application of advanced emissions control technologies that substantially lower the harmful emissions from diesel combustion. Testing by engine manufacturers and regulatory bodies has found that the use of emissions control devices in conjunction with ULSD can reduce the exhaust output of ozone precursors and particulate matter to near-zero levels. In 1993 the European Union began mandating the reduction of diesel sulfur content, implementing modern ULSD specifications in 1999. The United States started phasing in ULSD requirements for highway vehicles in 2006, with implementation for off-highway applications, such as locomotive and marine fuel, beginning in 2007. Lubricity Sulfur is not a lubricant in and of itself, but it can combine with the nickel content in many metal alloys to form a low-melting eutectic alloy that can increase lubricity. The process used to reduce the sulfur also reduces the fuel's lubricating properties. Lubricity is a measure of the fuel's ability to lubricate and protect the various parts of the engine's fuel injection system from wear. The processing required to reduce sulfur to 15 ppm also removes naturally occurring lubricity agents in diesel fuel. To manage this change, ASTM International (formerly the American Society for Testing and Materials) adopted the lubricity specification defined in ASTM D975 for all diesel fuels, and this standard went into effect on January 1, 2005. The D975 standard defines two ULSD standards, Grade No. 2-D S15 (regular ULSD) and Grade No. 1-D S15 (a higher-volatility fuel with a lower gelling temperature than regular ULSD). The refining process that removes the sulfur also reduces the aromatic content and density of the fuel, resulting in a minor decrease in the energy content of about 1%. This decrease in energy content may result in slightly reduced peak power and fuel economy. The transition to ULSD was not without substantial costs. The US government estimated that pump prices for diesel fuel increased as a result of the transition and, according to the American Petroleum Institute, the domestic refining industry invested over $8 billion to comply with the new regulations. ULSD runs in any engine designed for ASTM D975 diesel fuel; however, it is known to cause some seals to shrink and may cause fuel pump failures in Volkswagen TDI engines used in 2006 to pre-2009 models. TDI engines from 2009 on are designed to use ULSD exclusively; biodiesel blends are reported to prevent that failure. Africa Kenya Some filling stations in Kenya started offering 50 ppm diesel as of December 2010. As of 2018, Kenya had not fully implemented emission control systems. Mauritius As of June 2012, 50 ppm diesel is standard across all filling stations, in a bid to reduce pollution. Morocco Morocco started introducing 50 ppm diesel at filling stations in 2009. Since 2011, 10 ppm diesel has been available at some filling stations, and it has been available at all filling stations since December 2015. South Africa A 50 ppm sulfur content was first legislated by the South African Department of Minerals and Energy in early 2006, and such fuel has been widely available since then.
South Africa's Clean Fuels 2 standard, expected to have begun in 2017, reduces the allowable sulfur content to 1 ppm. In 2013, Sasol launched 10 ppm diesel at selected filling stations. Asia Saudi Arabia Saudi Arabia previously applied Euro-II gasoline and diesel standards. On 27 February 2024, the Saudi Ministry of Energy announced the successful introduction of Euro 5 standard diesel fuel and gasoline across the Kingdom of Saudi Arabia. China China has limited sulfur in diesel fuel to 150 ppm (equivalent to the Euro III standard). Limits of 10 ppm (equivalent to the Euro V standard) apply only in certain cities such as Beijing. From 2014 to 2017, China limited sulfur in diesel fuel to 50 ppm; after 2017, the sulfur content in diesel fuel is limited to 10 ppm. Hong Kong In July 2000, Hong Kong became the first city in Asia to introduce ULSD, with a sulfur content of 50 parts per million (ppm). In addition, new petrol private cars were required to meet Euro III standards from 2001. Following the introduction of the law, all fuel stations were supplying ULSD by August 2000. The sulfur content of regular diesel fuel was lowered from 500 ppm to 350 ppm on 1 January 2001. As part of the ULSD package, the Hong Kong government lowered the tax on ULSD from HK$2.89 to $2.00 per litre in June 1998. The temporary concession was extended to 31 March 2000, then to 31 December 2000. On 19 June 2000, under the Report of the Subcommittee on the resolution under section 4(2) of the Dutiable Commodities Ordinance (Cap. 109), the ULSD fuel tax was lowered to HK$1.11 per litre between 7 July 2000 and 31 December 2000, then increased to $2 in 2001, and then to $2.89 per litre on 1 January 2002. This resolution was passed on 27 June 2000. Castle Peak Power Station was designed to burn heavy fuel oil for boiler startup, flame stabilisation and, occasionally, as a secondary fuel. Since the early 2010s, all boilers have been converted to burn ULSD to cut down sulfur dioxide emissions. Black Point Power Station and Penny's Bay Power Station, on the other hand, were designed to burn ULSD as a secondary and primary fuel respectively. So all power stations under CLP Power now burn ULSD instead of higher-sulfur alternatives. Pakistan Pakistan began importing Euro-V standard fuel in mid-2020. The import of Euro-V petrol started on August 10, 2020, while all diesel imports of the country were to conform to the Euro-V standard by January 2021. The shift was carried out directly from Euro-II to Euro-V. India Delhi first introduced 50 ppm sulfur diesel on 1 April 2010 as a step aimed at curbing vehicular pollution in the capital; the same was done in 12 other cities at the same time. The sulfur content in the diesel previously used was 350 ppm. There have been two types of diesel available in India since 2010. Diesel of Bharat Stage IV specification (equivalent to Euro IV), with a sulfur level below 50 ppm, is available all over the country, and Bharat Stage VI diesel, with an ultra-low sulfur content of less than 10 ppm, was introduced in New Delhi in April 2018 and became the standard across the country from April 2020. Singapore The National Environment Agency (NEA) defines ultra-low-sulfur diesel (ULSD) as diesel fuel with less than 50 ppm, or 0.005 per cent, sulfur, with the limit set to fall to 10 ppm by July 2017. On 16 June 2005, the NEA announced that the use of ULSD would be mandatory beginning 1 December 2005.
The regulation also offered tax incentives for Euro IV diesel taxis, buses and commercial vehicles between 1 June 2004 and 3 September 2006, pending a mandatory conversion to Euro IV-compliant vehicles in 2007. Taiwan Beginning on 1 July 2007, Taiwan has limited sulfur in diesel fuel to 10 ppm. Europe European Union In the European Union, the "Euro IV" standard has applied since 2005, which specifies a maximum of 50 ppm of sulfur in diesel fuel for most highway vehicles; ultra-low-sulfur diesel with a maximum of 10 ppm of sulfur had to "be available" from 2005 and was widely available as of 2008. In 2009, the Euro V fuel standard came into effect, which reduced the maximum sulfur content to 10 ppm. In 2009, diesel fuel for most non-highway applications was also expected to conform to the Euro V standard for fuel. Various exceptions exist for certain uses and applications, most of which are being phased out over a period of several years. In particular, the so-called EU accession countries (primarily in Eastern Europe) have been granted certain temporary exemptions to allow for transition. Certain EU countries may apply higher standards or require faster transition. For example, Germany implemented a per-litre tax incentive for "sulfur free" fuel (both gasoline and diesel) containing less than 10 ppm beginning in January 2003, and average sulfur content was estimated in 2006 to be 3–5 ppm. Similar measures have been enacted in most of the Nordic countries, Benelux, Ireland and the United Kingdom to encourage early adoption of the 50 ppm and 10 ppm fuel standards. Sweden Since 1990, diesel fuel with a sulfur content of 50 ppm has been available on the Swedish market. In 1992, production started of a diesel fuel with 2 to 5 ppm of sulfur and a maximum of 5% aromatics by volume. There are certain tax incentives for using this fuel, and from about 2000 this low-aromatic, low-sulfur fuel achieved 98–99% penetration of the Swedish diesel fuel market. RME (rapeseed methyl ester, a form of FAME, fatty acid methyl ester) is now used as a biofuel additive. Since 2003, a "zero"-sulfur diesel fuel with very low aromatic content (less than 1% by volume) has been available on the Swedish market under the name EcoPar. It is used wherever the working environment is highly polluted, for example where diesel trucks are used in confined spaces such as harbours, storage houses and road and rail tunnels under construction, and in vehicles that are predominantly run in city centres. Central and Eastern Europe ("Accession Countries") As of 2008, most accession countries were expected to have made the transition to diesel fuel with 10 ppm sulfur or less. Slightly different transition dates have applied to each of the countries, but most have been required to reduce the maximum sulfur content to less than 50 ppm since 2005. Certain exemptions are expected for certain industries and applications, which will also be phased out over time. Compared to other EU countries, ULSD may be less widely available. Serbia In Serbia, an EU candidate country, all diesel fuel has been of the ultra-low-sulfur ("evrodizel") type since August 2013. Before that, there were two types of diesel fuel: D2, with 500 ppm sulfur or more, and low-sulfur "evrodizel". North America Canada Under the Sulphur in Diesel Fuel Regulations (SOR/2002-254), the sulfur content of diesel fuel produced or imported was reduced to 15 ppm after 31 May 2006.
This was followed by the reduction of sulfur in diesel fuel sold for use in on-road vehicles after 31 August 2006. For the designated Northern Supply Area, the deadline for reducing the sulfur content of diesel fuel for use in on-road vehicles was 31 August 2007. An amendment titled Regulations Amending the Sulphur in Diesel Fuel Regulations (SOR/2005-305) added the following deadlines: the concentration of sulfur in diesel fuel produced or imported for use in off-road engines shall not exceed 500 ppm from 1 June 2007 until 31 May 2010, and 15 ppm after that date; the concentration of sulfur in diesel fuel sold for use in off-road engines shall not exceed 500 ppm from 1 October 2007 until 30 September 2010, and 15 ppm after that date; the concentration of sulfur in diesel fuel sold in the northern supply area for use in off-road engines shall not exceed 500 ppm from 1 December 2008 until 30 November 2011, and 15 ppm after that date; the concentration of sulfur in diesel fuel produced or imported for use in vessel engines or railway locomotive engines shall not exceed 500 parts per million (ppm) from 1 June 2007 until 31 May 2012, and 15 ppm after that date. An amendment titled Regulations Amending the Sulphur in Diesel Fuel Regulations (SOR/2006-163) allowed diesel with a sulfur content of up to 22 ppm to be sold for on-road vehicles between 1 September 2006 and 15 October 2006, and 15 ppm after that date. This amendment facilitated the introduction of 15 ppm sulfur diesel fuel for on-road use in 2006 by lengthening the period between the dates on which the production/import limit and the sales limit came into effect. It provided additional time to fully turn over the higher-sulfur diesel fuel inventory for on-road use in the distribution system. The requirements of the Regulations were aligned, in level and timing, with those of the U.S. EPA. Mexico Mexico began the introduction of ULSD throughout the country in 2006. United States Ultra-low-sulfur diesel was mandated by the EPA as the new standard for the sulfur content in on-road diesel fuel sold in the United States from October 15, 2006, except for rural Alaska, which transitioned in 2010. California has required it since September 1, 2006. This regulation applies to all diesel fuel, diesel fuel additives, and distillate fuels blended with diesel for on-road use, such as kerosene. Since December 1, 2010, all highway diesel fuel nationwide has been ULSD. Non-road diesel engine fuel moved to 500 ppm sulfur in 2007, and further to ULSD in 2010. Railroad locomotive and marine diesel fuel moved to 500 ppm sulfur in 2007, and changed to ULSD in 2012. There were exemptions for small refiners of non-road, locomotive and marine diesel fuel that allowed 500 ppm diesel to remain in the system until 2014. After December 1, 2014, all highway, non-road, locomotive and marine diesel fuel is ULSD. The EPA mandated the use of ULSD fuel in model year 2007 and newer highway diesel fuel engines equipped with advanced emission control systems that require the new fuel. These advanced emission control technologies were required for marine diesel engines in 2014 and for locomotives in 2015. The allowable sulfur content for ULSD (15 ppm) is much lower than the previous U.S. on-highway standard for low-sulfur diesel (LSD, 500 ppm), and it allows advanced emission control systems to be fitted that would otherwise be damaged or rendered ineffective by sulfur compounds. These systems can greatly reduce emissions of oxides of nitrogen and particulate matter.
Because this grade of fuel is comparable to European grades, European engines no longer have to be redesigned to cope with higher sulfur content in the U.S. These engines may use advanced emissions control systems which would otherwise be damaged by sulfur. It was hoped that the ULSD standard would increase the availability of diesel-fueled passenger cars in the U.S.; in Europe, diesel-engined automobiles have been much more popular with buyers than has been the case in the U.S. Additionally, the EPA assisted manufacturers with the transition to tougher emissions regulations by loosening them for model year 2007 to 2010 light-duty diesel engines. According to EPA estimates, with the implementation of the new fuel standards for diesel, nitrogen oxide emissions will be reduced by 2.6 million tons each year and soot or particulate matter will be reduced by 110,000 tons a year. On June 1, 2006, U.S. refiners were required to produce 80% of their annual output as ULSD (15 ppm), and petroleum marketers and retailers were required to label diesel fuel, diesel fuel additive and kerosene pumps with EPA-authorized language disclosing fuel type and sulfur content. Other requirements effective June 1, 2006, including EPA-authorized language on Product Transfer Documents and sulfur-content testing standards, were designed to prevent misfueling, contamination by higher-sulfur fuels, and liability issues. The EPA deadline for industry compliance with the 15 ppm sulfur content was originally set for July 15, 2006 for distribution terminals and September 1, 2006 for retail. But on November 8, 2005, the deadline was extended by 1.5 months, to September 1, 2006 for terminals and October 15, 2006 for retail. In California the extension was not granted, and the original schedule was followed. As of December 2006, the ULSD standard has been in effect according to the amended schedule, and compliance at retail locations was reported to be in place. South America Argentina Argentina has three grades of diesel fuel, as follows: Grade 1, also known as AGRODIESEL or GASOIL AGRO, is intended mainly for agricultural equipment. Sale of Grade 1 diesel is optional at retail outlets. Grade 2, also known as GASOIL COMUN (common diesel fuel), is intended for the bulk of diesel-fuelled vehicles. Grade 2 diesel fuel is available with two different sulfur levels depending on the population density of the location where it is retailed. Grade 3 diesel fuel, also known as GASOIL ULTRA, is the highest-quality diesel fuel and was supposed to be available starting February 1, 2006. Sale of Grade 3 diesel at retail outlets was optional until 2008. At the time the regulation was published, the sulfur limits amounted to 3000 ppm for Grade 1, 1500/2500 ppm (depending on the area) for Grade 2, and 500 ppm for Grade 3. Sulfur limit reductions occurred in 2008, 2009, 2011, and 2016. After the last reduction, in June 2016, the sulfur limits became 1000 ppm, 30 ppm, and 10 ppm for the three respective grades. Law 26.093 requires 5% biodiesel to be blended with diesel fuel starting January 1, 2010. Brazil Since January 2012, Brazilian service stations have offered two types of diesel: 50 ppm and 500 ppm in most areas, and 1800 ppm in remote areas. Since January 2013, the 10 ppm (Euro V) diesel has replaced the 50 ppm diesel; it is now widely used and can be found in the majority of service stations, and the 1800 ppm grade has been discontinued.
All vehicles produced or sold in Brazil since January 2012 must be able to run on 50 ppm or lower-sulfur diesel. Also, all diesel available for purchase in Brazil contains 10% biodiesel. Chile Chile has required <15 ppm diesel in Santiago since 2011; the rest of the country requires <50 ppm. Colombia Since January 1, 2013, Colombia's diesel has had <50 ppm for public and private transport. Uruguay Uruguay was expected to impose a 50 ppm ULSD limit by 2009. 70% of the fuel used in Uruguay is diesel. Oceania Australia Australia has had a limit of 10 ppm since 1 January 2009. The limit had previously been 50 ppm. New Zealand New Zealand has had a limit of 10 ppm since 1 January 2009. Prior to that, the limit was 50 ppm. Russia and the former Soviet Union As of 2002, much of the former Soviet Union still applied limits on sulfur in diesel fuel substantially higher than those in Western Europe. Maximum levels of 2,000 and 5,000 ppm were applied for different uses. In Russia, lower maximum levels of 350 ppm and 500 ppm sulfur in automotive fuel were enforced in certain areas, and Euro IV and Euro V fuel with a concentration of 50 ppm or less was available at certain fueling stations, at least in part to comply with the emissions control equipment on foreign-manufactured cars and trucks, the number of which has increased every year, especially in big cities such as Moscow and Saint Petersburg. According to the technical regulation, selling fuel with a sulfur content over 50 ppm was allowed until 31 December 2011. Euro IV diesel in particular may be available at fueling stations selling to long-distance truck fleets servicing import and export flows between Russia and the EU. See also Biodiesel Diesel engine Diesel fuel EN 590 European emission standards Organosulfur compounds United States emission standards Volkswagen emissions scandal References Petroleum products Fuels
Ultra-low-sulfur diesel
[ "Chemistry" ]
4,044
[ "Petroleum", "Petroleum products", "Fuels", "Chemical energy sources" ]
1,534,136
https://en.wikipedia.org/wiki/NGC%204395
NGC 4395 is a nearby low surface brightness spiral galaxy located about 14 million light-years (or 4.3 Mpc) from Earth in the constellation Canes Venatici. The nucleus of NGC 4395 is active and the galaxy is classified as a Seyfert Type I, known for its very low-mass supermassive black hole. Physical characteristics NGC 4395 has a halo that is about 8 arcminutes in diameter. It has several patches of greater brightness running northwest to southeast. The one furthest southeast is the brightest. Three of the patches have their own NGC numbers: 4401, 4400, and 4399, running east to west. The galaxy is highly unusual among Seyfert galaxies because it does not have a bulge, and it is considered to be a dwarf galaxy. Observational history NGC 4395 was imaged and classified as a "spiral nebula" in a 1920 paper by the astronomer Francis G. Pease. Now it is known to be a galaxy distinct from the Milky Way (see Great Debate). Along with several other nearby galaxies, resolved stars in NGC 4395 were used to measure the expansion rate of the Universe by Allan Sandage and Gustav Andreas Tammann in their 1974 paper. More recently, NGC 4395 was discovered to contain a very low-luminosity active galactic nucleus. Since then, its nucleus has been the subject of several academic papers and attempts to measure the mass of its central black hole. Nucleus NGC 4395 is one of the least luminous and nearest Seyfert galaxies known. The nucleus of NGC 4395 is notable for containing one of the smallest supermassive black holes with a well-measured mass. The central black hole has a mass of "only" 300,000 solar masses. However, a recent study found a black hole mass of just 10,000 solar masses. The low-mass black hole in NGC 4395 would make it a so-called "intermediate-mass black hole". The black hole may have a truncated disk. References External links NGC4395 Unbarred spiral galaxies M94 Group Astronomical objects discovered in 1786 Canes Venatici Seyfert galaxies Magellanic spiral galaxies
NGC 4395
[ "Astronomy" ]
458
[ "Canes Venatici", "Constellations" ]
1,534,314
https://en.wikipedia.org/wiki/Spin%20transistor
The magnetically sensitive transistor, also known as the spin transistor, spin field-effect transistor (spinFET), Datta–Das spin transistor or spintronic transistor (named for spintronics, the technology which this development spawned), originally proposed in 1990 by Supriyo Datta and Biswajit Das, is an alternative design to the common transistor invented in the 1940s. This device was considered one of the Nature milestones in spin in 2008. Description The spin transistor comes about as a result of research on the ability of electrons (and other fermions) to naturally exhibit one of two (and only two) states of spin, known as "spin up" and "spin down". Thus, spin transistors operate on electron spin as embodying a two-state quantum system. Unlike its namesake predecessor, which operates on an electric current, the spin transistor operates on electrons at a more fundamental level; it is essentially the application of electrons set in particular states of spin to store information. One advantage over regular transistors is that these spin states can be detected and altered without necessarily requiring the application of an electric current. This allows for detection hardware (such as hard drive heads) that is much smaller but even more sensitive than today's devices, which rely on noisy amplifiers to detect the minute charges used in today's data storage devices. The potential result is devices that can store more data in less space and consume less power, using less costly materials. The increased sensitivity of spin transistors is also being researched for creating more sensitive automotive sensors, a move encouraged by the push for environmentally friendlier vehicles. A second advantage of the spin transistor is that the spin of an electron is semi-permanent and can be used as a means of creating cost-effective non-volatile solid-state storage that does not require the constant application of current to be sustained. It is one of the technologies being explored for magnetic random-access memory (MRAM). Because of its high potential for practical use in the computer world, spin transistors are currently being researched by various firms throughout the world, such as in England and in Sweden. Recent breakthroughs have allowed the production of spin transistors, using readily available substances, that can operate at room temperature: a precursor to commercial viability. References Transistor types Spintronics
Spin transistor
[ "Physics", "Materials_science" ]
494
[ "Spintronics", "Condensed matter physics" ]
1,534,483
https://en.wikipedia.org/wiki/Motion%20estimation
In computer vision and image processing, motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another, usually from adjacent frames in a video sequence. It is an ill-posed problem, as the motion happens in three dimensions (3D) but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image (global motion estimation) or to specific parts, such as rectangular blocks, arbitrarily shaped patches or even individual pixels. The motion vectors may be represented by a translational model or by one of many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom. Related terms More often than not, the term motion estimation and the term optical flow are used interchangeably. Motion estimation is also related in concept to image registration and stereo correspondence. In fact, all of these terms refer to the process of finding corresponding points between two images or video frames. The points that correspond to each other in two views (images or frames) of a real scene or object are "usually" the same point in that scene or on that object. Before we do motion estimation, we must define our measurement of correspondence, i.e., the matching metric, which is a measurement of how similar two image points are. There is no right or wrong here; the choice of matching metric is usually related to what the final estimated motion is used for, as well as to the optimisation strategy in the estimation process. Each motion vector is used to represent a macroblock in a picture based on the position of this macroblock (or a similar one) in another picture, called the reference picture. The H.264/MPEG-4 AVC standard defines a motion vector as: motion vector: a two-dimensional vector used for inter prediction that provides an offset from the coordinates in the decoded picture to the coordinates in a reference picture. Algorithms The methods for finding motion vectors can be categorised into pixel-based methods ("direct") and feature-based methods ("indirect"). A famous debate resulted in two papers from the opposing factions being produced to try to establish a conclusion. Direct methods Block-matching algorithm Phase correlation and frequency domain methods Pixel recursive algorithms Optical flow Indirect methods Indirect methods use features, such as corner detection, and match corresponding features between frames, usually with a statistical function applied over a local or global area. The purpose of the statistical function is to remove matches that do not correspond to the actual motion. Statistical functions that have been successfully used include RANSAC.
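As a concrete illustration of the simplest direct method, the following Python sketch performs exhaustive block matching with the sum of absolute differences (SAD) as the matching metric. The function and parameter names are illustrative; the frames are assumed to be two-dimensional grayscale NumPy arrays of equal size:

    import numpy as np

    def best_motion_vector(ref, cur, top, left, block=16, search=8):
        """Motion vector (dy, dx) for the block of `cur` at (top, left),
        found by exhaustive search over a +/-search window in `ref`."""
        target = cur[top:top + block, left:left + block].astype(np.int32)
        best_sad, best_mv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue  # candidate block falls outside the reference frame
                cand = ref[y:y + block, x:x + block].astype(np.int32)
                sad = np.abs(cand - target).sum()  # matching metric: SAD
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv

Real encoders replace the exhaustive search with faster patterns (three-step or diamond search, for example) and often use rate-distortion costs rather than raw SAD.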
Additional note on the categorization It can be argued that almost all methods require some kind of definition of the matching criteria. The difference is only whether one summarises over a local image region first and then compares the summarisation (as in feature-based methods), or compares each pixel first (for example, by squaring the difference) and then summarises over a local image region (block-based motion and filter-based motion). An emerging type of matching criterion summarises a local image region first for every pixel location (through some feature transform such as the Laplacian transform), compares each summarised pixel and then summarises again over a local image region. Some matching criteria can exclude points that do not actually correspond to each other despite producing a good matching score; others do not have this ability, but they are still matching criteria. Affine motion estimation Affine motion estimation is a technique used in computer vision and image processing to estimate the motion between two images or frames. It assumes that the motion can be modeled as an affine transformation (translation + rotation + zooming), which is a linear transformation followed by a translation. Applications Video coding Applying the motion vectors to an image to synthesize the transformation to the next image is called motion compensation. It is most easily applied to discrete cosine transform (DCT)-based video coding standards, because the coding is performed in blocks. As a way of exploiting temporal redundancy, motion estimation and compensation are key parts of video compression. Almost all video coding standards use block-based motion estimation and compensation, such as the MPEG series, including the most recent standard, HEVC. 3D reconstruction In simultaneous localization and mapping, a 3D model of a scene is reconstructed using images from a moving camera. See also Moving object detection Graphics processing unit Vision processing unit Scale-invariant feature transform References Video processing Motion (physics) Motion in computer vision
Motion estimation
[ "Physics" ]
910
[ "Physical phenomena", "Motion (physics)", "Space", "Mechanics", "Motion in computer vision", "Spacetime" ]
1,535,090
https://en.wikipedia.org/wiki/Land%20development
Land development is the alteration of landscape in any number of ways such as: Changing landforms from a natural or semi-natural state for a purpose such as agriculture or housing Subdividing real estate into lots, typically for the purpose of building homes Real estate development or changing its purpose, for example by converting an unused factory complex into a condominium. History Land development has a history dating to Neolithic times around 8,000 BC. From the dawn of civilization, the process of land development has elaborated the progress of improvements on a piece of land based on codes and regulations, particularly housing complexes. Economic aspects In an economic context, land development is also sometimes advertised as land improvement or land amelioration. It refers to investment making land more usable by humans. For accounting purposes it refers to any variety of projects that increase the value of the property. Most are depreciable, but some land improvements are not able to be depreciated because a useful life cannot be determined. Home building and containment are two of the most common and the oldest types of development. In an urban context, land development furthermore includes: Road construction Access roads, walkways, and parking lots Bridges Landscaping Clearing, terracing, or land levelling Land preparation (development) for gardens Setup of fences and, to a lesser degree, hedges Service connections to municipal services and public utilities Drainage, canal systems External lighting (street lamps etc.) A landowner or developer of a project of any size will often want to maximise profits, minimise risk, and control cash flow. This "profitable energy" means identifying and developing the best scheme for the local marketplace, whilst satisfying the local planning process. Development analysis puts development prospects and the development process itself under the microscope, identifying where enhancements and improvements can be introduced. These improvements aim to align with best design practice, political sensitivities, and the inevitable social requirements of a project, with the overarching objective of increasing land values and profit margins on behalf of the landowner or developer. Development analysis can add significantly to the value of land and development, and as such is a crucial tool for landowners and developers; it is considered to be essential to realizing the value potential of land. The landowner can share in additional planning gain (significant value uplift) via an awareness of the land's development potential. This is done via a residual development appraisal or residual valuation. The residual appraisal calculates the sale value of the end product (the gross development value or GDV) and hypothetically deducts costs, including planning and construction costs, finance costs and developer's profit. The "residue", or leftover proportion, represents the land value. Therefore, in maximising the GDV (that which one could build on the land), land value is concurrently enhanced. Land value is highly sensitive to supply and demand (for the end product), build costs, planning and affordable housing contributions, and so on. Understanding the intricacies of the development system and the effect of "value drivers" can result in massive differences in the landowner's sale value.
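To make the residual calculation concrete, the following minimal Python sketch works through a hypothetical scheme; the figures and the convention of taking the developer's profit as a share of GDV are illustrative simplifications, not a standard valuation method.

```python
def residual_land_value(gdv, build_costs, planning_costs,
                        finance_costs, profit_share_of_gdv):
    """Residual development appraisal: the land value is what is left of
    the gross development value (GDV) after hypothetically deducting all
    costs and the developer's profit (modelled here as a share of GDV)."""
    developer_profit = gdv * profit_share_of_gdv
    total_deductions = build_costs + planning_costs + finance_costs + developer_profit
    return gdv - total_deductions

# Hypothetical scheme: GDV of 5,000,000 with 2,500,000 build costs,
# 200,000 planning costs, 300,000 finance costs and 15% of GDV as profit.
print(residual_land_value(5_000_000, 2_500_000, 200_000, 300_000, 0.15))
# -> 1250000.0, the residual land value
```

The sensitivity noted above is easy to see here: raising the GDV by 10% while holding the other costs fixed lifts the residual by well over 10%, which is why small planning gains can produce large uplifts in land value.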
Conversion of landforms Land development puts more emphasis on the expected economic development as a result of the process; "land conversion" tries to focus on the general physical and biological aspects of the land use change. "Land improvement" in the economic sense can often lead to land degradation from the ecological perspective. Land development and the change in land value do not usually take into account changes in the ecology of the developed area. While conversion of (rural) land with a vegetation carpet to building land may result in a rise in economic growth and rising land prices, the irreversibility of lost flora and fauna because of habitat destruction, the loss of ecosystem services and the resulting decline in environmental value are only considered a priori in environmental full-cost accounting. Conversion to building land Conversion to building land is as a rule associated with road building, which in itself already brings topsoil abrasion, soil compaction and modification of the soil's chemical composition through soil stabilization, creation of impervious surfaces and, subsequently, (polluted) surface runoff water. Construction activity often effectively seals off a larger part of the soil from rainfall and the nutrient cycle, so that the soil below buildings and roads is effectively "consumed" and made infertile. With the notable exception of attempts at rooftop gardening and hanging gardens in green buildings (possibly as constituents of green urbanism), vegetative cover of higher plants is lost to concrete and asphalt surfaces, complementary interspersed garden and park areas notwithstanding. Conversion to farmland New creation of farmland (or 'agricultural land conversion') will rely on the conversion and development of previous forests, savannas or grassland. Recreation of farmland from wasteland, deserts or previous impervious surfaces is considerably less frequent because of the degraded or missing fertile soil in the latter. Starting from forests, land is made arable by assarting or slash-and-burn. Agricultural development furthermore includes: Hydrological measures (land levelling, drainage, irrigation, sometimes landslide and flood control) Soil improvement (fertilization, establishment of a productive chemical balance). Road construction Because the newly created farmland is more prone to erosion than soil stabilized by tree roots, such a conversion may mean irreversible crossing of an ecological threshold. The resulting deforestation is also not easily compensated for by reforestation or afforestation. This is because plantations of other trees, as a means for water conservation and protection against wind erosion (shelterbelts), as a rule lack the biodiversity of the lost forest, especially when realized as monocultures. The consequences of deforestation can be lasting, because subsequent soil stabilization and erosion control measures are rarely as effective in preserving topsoil as the previously intact vegetation. Restoration Massive land conversion without proper consideration of ecological and geological consequences may lead to disastrous results, such as: General soil degradation Catastrophic soil salination and solonchak formation, e.g., in Central Asia, as a consequence of irrigation by saline groundwater Desertification, soil erosion and ecological shifts due to drainage Leaching of saline soils Habitat loss for wildlife.
While deleterious effects can be particularly visible when land is developed for industrial or mining usage, agro-industrial and settlement use can also have a massive and sometimes irreversible impact on the affected ecosystem. Examples of land restoration/land rehabilitation counted as land development in the strict sense are still rare. However, renaturation, reforestation and stream restoration may all contribute to a healthier environment and quality of life, especially in densely populated regions. The same is true for planned vegetation like parks and gardens, but restoration plays a particular role, because it reverses previous conversions to built and agricultural areas. Environmental issues The environmental impact of land use and development is a substantial consideration for land development projects. On the local level, an environmental impact report (EIR) may be necessary. In the United States, federally funded projects typically require preparation of an environmental impact statement (EIS). The concerns of private citizens or political action committees (PACs) can influence the scope of, or even cancel, a project based on concerns like the loss of an endangered species' habitat. In most cases, the land development project will be allowed to proceed if mitigation requirements are met. Mitigation banking is the most prevalent example, and necessitates that the habitat be replaced at a greater rate than it is removed. This increase in total area helps to establish the new ecosystem, though it will require time to reach maturity. Biodiversity impacts The extent and type of land use directly affect wildlife habitat and thereby impact local and global biodiversity. Human alteration of landscapes from natural vegetation (e.g. wilderness) to any other use can result in habitat loss, degradation, and fragmentation, all of which can have devastating effects on biodiversity. Land conversion is the single greatest cause of extinction of terrestrial species. An example of land conversion being a chief cause of the critically endangered status of a carnivore is the reduction in habitat for the African wild dog, Lycaon pictus. Deforestation is also a cause of lost natural habitat, with large numbers of trees being cut down for residential and commercial use. Urban growth has become a problem for forests and agriculture, as the expansion of structures prevents natural resources from being produced in their environment. In order to prevent the loss of wildlife, the forests must maintain a stable climate and the land must remain unaffected by development. Furthermore, forests can be sustained by different forest management techniques such as reforestation and preservation. Reforestation is a reactive approach designed to replant trees that were previously logged within the forest boundary in an attempt to re-stabilize this ecosystem. Preservation, on the other hand, is a proactive idea that promotes the concept of leaving the forest as is, without using this area for its ecosystem goods and services. Both of these methods to mitigate deforestation are being used throughout the world. The U.S. Forest Service predicts that urban and developed terrain in the U.S. will expand by 41 percent by the year 2060. These conditions cause displacement of wildlife and leave the environment with limited resources to maintain a sustainable balance. See also References R.J. Oosterbaan, International Institute for Land Reclamation and Improvement, Wageningen, The Netherlands.
"Improvement of waterlogged and saline soils." Free downloads of software and articles on land drainage. Construction Urban planning Earthworks (engineering) Land management Real estate
Land development
[ "Engineering" ]
1,958
[ "Construction", "Urban planning", "Architecture" ]
1,535,719
https://en.wikipedia.org/wiki/Similarity%20invariance
In linear algebra, similarity invariance is a property exhibited by a function whose value is unchanged under similarities of its domain. That is, f is invariant under similarities if f(A) = f(B^{-1}AB), where B^{-1}AB is a matrix similar to A. Examples of such functions include the trace, determinant, characteristic polynomial, and the minimal polynomial. A more colloquial phrase that means the same thing as similarity invariance is "basis independence", since a matrix can be regarded as a linear operator, written in a certain basis, and the same operator in a new basis is related to the one in the old basis by the conjugation B^{-1}AB, where B is the transformation matrix to the new basis. See also Invariant (mathematics) Gauge invariance Trace diagram Functions and mappings
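A quick numerical check of these claims, as a minimal sketch using NumPy (the 4×4 random matrices and the seed are arbitrary, and eigenvalues stand in for the characteristic polynomial):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))   # a random matrix is almost surely invertible
A_sim = np.linalg.inv(B) @ A @ B  # a matrix similar to A

# The trace and determinant are unchanged under the similarity.
assert np.isclose(np.trace(A), np.trace(A_sim))
assert np.isclose(np.linalg.det(A), np.linalg.det(A_sim))

# The eigenvalues (the roots of the characteristic polynomial) agree too.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(A_sim)))
```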
Similarity invariance
[ "Mathematics" ]
151
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations" ]
1,535,731
https://en.wikipedia.org/wiki/Fictionalism
Fictionalism is a view in philosophy according to which statements that appear to be descriptions of the world should not be construed as such, but should instead be understood as cases of "make believe", of pretending to treat something as literally true (a "useful fiction"). Concept Fictionalism consists in at least the following three theses: Claims made within the domain of discourse are taken to be truth-apt; that is, true or false. The domain of discourse is to be interpreted at face value—not reduced to meaning something else. The aim of discourse in any given domain is not truth, but some other virtue(s) (e.g., simplicity, explanatory scope). Two important strands of fictionalism are: modal fictionalism developed by Gideon Rosen, which states that possible worlds, regardless of whether they exist or not, may be a part of a useful discourse, and mathematical fictionalism advocated by Hartry Field. Modal fictionalism is recognized as a further refinement of basic fictionalism, as it holds that representations of possible worlds in texts are useful fictions. Conceptualization explains that it is a descriptive theorizing of what a text, such as the Bible, amounts to. It is also associated with linguistic ersatzism in the sense that both are views about possible worlds. Fictionalism, on the other hand, in the philosophy of mathematics states that talk of numbers and other mathematical objects is nothing more than a convenience for computation. According to Field, there is no reason to treat parts of mathematics that involve reference to or quantification over abstract mathematical entities as true. In this discourse, mathematical objects are accorded the same metaphysical status as literary figures such as Macbeth. Also in meta-ethics, there is an equivalent position called moral fictionalism (championed by Richard Joyce). Many modern versions of fictionalism are influenced by the work of Kendall Walton in aesthetics. See also Color fictionalism Hans Vaihinger Noble lie Philosophy of color Quietism (philosophy) Further reading References External links Philosophical methodology Theories of deduction Theories of truth
Fictionalism
[ "Mathematics" ]
421
[ "Theories of deduction" ]
1,535,838
https://en.wikipedia.org/wiki/SGI%20Origin%20350
The SGI Origin 350 is a mid-range server computer developed and manufactured by SGI, introduced in 2003. Its discontinuation in December 2006 brought to a close almost two decades of MIPS and IRIX computing. Hardware The Origin 350 is based on the NUMAflex architecture, where a system is constructed from a varying number of modules connected together using the NUMAlink3 interconnect via cables. A system can consist of 2 to 32 processors, 1 to 64 GB of memory and 4 to 62 PCI-X slots. For systems with more than 8 processors, a 2U NUMAlink module is required for routing. Modules for disk storage and further PCI slots were also available. Multiple modules are coordinated at power-up by an L2 controller, which communicates with the modules via USB ports. The L2 controller is an external PowerPC computer running Linux with console, USB, modem and Ethernet ports. Compute module The 2U compute module contains the processors, memory and four PCI-X slots on two buses. Each compute module features an IP53 node board, which contains two or four MIPS R16000 microprocessors clocked at 600 or 700 MHz with 4 MB of ECC L2 cache, eight DIMM slots for 1 to 8 GB of ECC memory, and a Bedrock ASIC serving as the crossbar that enables communication between the processors, memory and PCI-X slots. Two variants of the compute module exist: the base compute module and the system expansion compute module. The difference between these two models is that the inclusion of a SCSI disk drive and an IO9 input/output card is mandatory in the base compute module, but optional in the system expansion compute module. The IO9 input/output card connects to a PCI-X slot and provides SCSI interfaces for two internal disks, an external SCSI port, audio I/O and a 10/100/1000BASE-T Ethernet connection. References SGI Origin 350 Server System User's Guide, 007-4566-001, June 16, 2003, Silicon Graphics External links SGI: SGI Origin 350 nekochan wiki: SGI Origin 350 Origin 350
SGI Origin 350
[ "Technology" ]
450
[ "Computing stubs", "Computer hardware stubs" ]
1,536,068
https://en.wikipedia.org/wiki/Ha-ha
A ha-ha, also known as a sunk fence, blind fence, ditch and fence, deer wall, or foss, is a recessed landscape design element that creates a vertical barrier (particularly on one side) while preserving an uninterrupted view of the landscape beyond from the other side. The name comes from viewers' surprise when seeing the construction. The design can include a turfed incline that slopes downward to a sharply vertical face (typically a masonry retaining wall). Ha-has are used in landscape design to prevent access to a garden by, for example, grazing livestock, without obstructing views. In security design, the element is used to deter vehicular access to a site while minimising visual obstruction. Etymology The name ha-ha is of French origin, and was first used in print in Dezallier d'Argenville's 1709 book The Theory and Practice of Gardening, in which he explains that the name derives from the exclamation of surprise that viewers would make on recognising the optical illusion. The name ha-ha is attested in toponyms in New France from 1686 (as seen today in Saint-Louis-du-Ha! Ha!), and is a feature of the gardens of the Château de Meudon, circa 1700. In a letter to Daniel Dering in 1724, John Perceval (grandfather to the prime minister Spencer Perceval) observed of Stowe: In the 18th century, they were often called a sunken or sunk fence, at least in formal writing, as by Horace Walpole, George Mason, and Humphry Repton. Walpole also referred to them as Kent-fences, named after William Kent. Walpole surmised that the name is derived from the response of ordinary folk on encountering them and that they were "then deemed so astonishing, that the common people called them Ha! Has! to express their surprise at finding a sudden and unperceived check to their walk." Thomas Jefferson, describing the garden at Stowe after his visit in April 1786, also uses the term with exclamation marks: "The inclosure is entirely by ha! ha!" George Washington called it both a "ha haw" and a "deer wall". Origins Before mechanical lawn mowers, a common way to keep large areas of grassland trimmed was to allow livestock, usually sheep, to graze the grass. A ha-ha prevented grazing animals on large estates from gaining access to the lawn and gardens adjoining the house, giving a continuous vista to create the illusion that the garden and landscape were one and undivided. The basic design of sunken ditches is of ancient origin, being a feature of deer parks first found in Anglo-Saxon England. The deer-leap consisted of a ditch with one steep side surmounted by a pale (picket-style fence made of wooden stakes) or hedge, which allowed deer to enter the park but not to leave. Since the time of the Norman conquest of England, the right to construct a deer-leap was granted by the king, with reservations made as to the depth of the foss or ditch and the height of the pale or hedge. On Dartmoor, the deer-leap was known as a "leapyeat". In Britain, the ha-ha is a feature of the landscape gardens laid out by Charles Bridgeman and William Kent and was an essential component of the "swept" views of Capability Brown. Horace Walpole credits Bridgeman with the invention of the ha-ha but was unaware of the earlier French origins. During his excavations at Iona in the period 1964–1984, Richard Reece discovered an 18th-century ha-ha designed to protect the abbey from cattle.
Ice houses were sometimes built into ha-ha walls because they provide a subtle entrance that makes the ice house a less intrusive structure, and the ground provides additional insulation. Examples Most typically, ha-has are still found in the grounds of grand country houses and estates. They keep cattle and sheep out of the formal gardens, without the need for obtrusive fencing. Their depth varies considerably from site to site (compare, for example, Horton House and Petworth House). Beningbrough Hall in Yorkshire is separated from its extensive grounds by a ha-ha to prevent sheep and cattle from entering the Hall's gardens or the Hall itself. An unusually long example is the ha-ha that separates the Royal Artillery Barracks Field from Woolwich Common in southeast London. This deep ha-ha was installed around 1774 to prevent sheep and cattle, grazing at a stopover on Woolwich Common on their journey to the London meat markets, from wandering onto the Royal Artillery gunnery range. A rare feature of this east-west ha-ha is that the normally hidden brick wall emerges above ground for its final 75 yards (70 metres) or so as the land falls away to the west, revealing a fine batter to the brickwork face of the wall, thus exposed. This final west section of the ha-ha forms the boundary of the Gatehouse by James Wyatt RA. The Royal Artillery ha-ha is maintained in a good state of preservation by the Ministry of Defence. It is a Listed Building, and is accompanied by Ha-Ha Road, which runs alongside its full length. There is a shorter ha-ha in the grounds of the nearby Jacobean Charlton House. The Royal Crescent row of 30 terraced houses in Bath, Somerset, which were built between 1767 and 1774 in the Georgian architecture style, also features a large ha-ha that provides an uninterrupted view of Royal Victoria Park. In Australia, ha-has were also used at Victorian-era lunatic asylums such as Yarra Bend Asylum, Beechworth Asylum, and Kew Lunatic Asylum in Victoria, and the Parkside Lunatic Asylum in South Australia. From the inside, the walls presented a tall face to patients, preventing them from escaping, while from outside they looked low so as not to suggest imprisonment. For the patients themselves, standing before the trench, it also enabled them to see the wider landscape. Kew Asylum has been redeveloped as apartments; however, some of the ha-has remain, albeit partially filled in. Ha-has were also used in North America. Only two historic installations remain in Canada, one of which is on the grounds of Nova Scotia's Uniacke House (1813), a rural estate built by Richard John Uniacke, an Irish-born Attorney-General of Nova Scotia. Mount Vernon, the plantation of George Washington, incorporates ha-haws on its grounds as part of the landscaping for the mansion built by George Washington's father, Augustine Washington. A later American president, Thomas Jefferson, "built a ha-ha at the southern end of the South Lawn [of the White House], which was an eight-foot wall with a sunken ditch meant to keep the livestock from grazing in his garden." A 21st-century use of a ha-ha is at the Washington Monument, to minimise the visual impact of security measures. After 9/11 and another unrelated terror threat at the monument, authorities had put up jersey barriers to prevent large motor vehicles from approaching the monument. The temporary barriers were later replaced with a new ha-ha, a low 0.76 m (30-inch) granite stone wall that incorporated lighting and doubled as a seating bench. It received the 2005 Park/Landscape Award of Merit.
In fiction In Jane Austen's Mansfield Park (1814), a ha-ha prevents the more sensible characters from getting around a locked gate and into the woodland beyond. In Anthony Trollope's Barchester Towers (1857), a ha-ha marks the social divisions in Miss Thorne's fête champêtre: "Two marquees had been erected for these two banquets: that for the quality on the esoteric or garden side of a certain deep ha-ha; and that for the non-quality on the exoteric or paddock side of the same." In J.J. Connington's 1934 detective novel The Ha-Ha Case, a murder is committed in a ha-ha during a shooting party. In Terry Pratchett's Men at Arms, the grounds of the Patrician's Palace, created by the infamous Bloody Stupid Johnson, include a ho-ho, which is described as like a ha-ha only much deeper. In the later book Snuff, the grounds of the Ramkins' county estate have both a ho-ho and a ha-ha, as well as a he-he and a ho-hum, implied to be much shallower. Legal Personal injury Because of their deliberately hidden nature, ha-has can pose a risk of injury to the public (especially considering their initial designs were intended to be invisible). In 2008, during a nighttime guided walk to watch bats at Hopetoun House (Scotland), a participant attempting to make his way back to the car park fell off a ha-ha wall and suffered a severe fracture to the ankle. A personal injury claim was settled for £35,000, as the judge presiding over the case deemed the ha-ha to be a dangerous man-made feature, and thus it was up to the groundskeepers to highlight the invisible danger that it presented. The presiding QC judge, Alastair Campbell, deemed a ha-ha wall to be outside the scope of the law regarding obvious dangers, such as cliffs or canals, where an occupier is not required to take precautions against a person being injured. This was due to its being an unusual man-made feature that the public would be very much unaware of, especially across a wide lawn. In 2014, a wedding guest at a British manor house fell off a ha-ha while making her way across the manor garden, displacing her right tibia and fibula bones. She brought a successful personal injury claim that was investigated by the environmental health department, which agreed that the area should have been lit in some way to avoid this kind of accident. The defendants in the litigation case were quick to admit liability for the incident, and settled for about £10,000. This was followed by radical changes to the signposting and lighting around the ha-ha to alert visitors to its presence. Preventive repairs Emergency repairs to the ha-ha wall at Sunbury Park in Spelthorne (England) took place in 2009, after the council realised that it would be liable for any injury or death caused by the ha-ha wall. Surrounding vegetation had been removed two years before the works opened up the ha-ha to the public. However, environmental services were made aware that the ha-ha was in a state of disrepair, and without appropriate warning signs. The total cost of repairs was thought to be around £65,000; environmental services contributed £9,000, and the rest of the funds was taken from capital funds. In 2016, the ha-ha wall in Dalzell estate (Scotland) was repaired after it became unsafe due to a collapse of the stonework. The council's Environmental Services Committee were concerned about potential liability and personal injury claims and enlisted the help of volunteers and staff from a local charity to repair the ha-ha wall within the estate.
The repair project received funding from the environmental key fund and the Heritage Lottery Fund via the Clyde and Avon Valley Landscape Partnership. See also Cattle guard Infinity pool Moat References Garden features Fences Types of wall Semi-subterranean structures
Ha-ha
[ "Engineering" ]
2,349
[ "Structural engineering", "Types of wall" ]
1,536,114
https://en.wikipedia.org/wiki/Pay%20toilet
A pay toilet is a public toilet that requires the user to pay. It may be street furniture or be inside a building, e.g. a shopping mall, department store, or railway station. The reason for charging money is usually for the maintenance of the equipment. Paying to use a toilet can be traced back almost 2000 years, to the first century CE. The charge is often collected by an attendant or by inserting coins into an automatic turnstile; in some freestanding toilets in the street, the fee is inserted into a slot by the door. Mechanical coin-operated locks are also used. Some more high-tech toilets accept card or contactless payments. Sometimes, a token can be used to enter a pay toilet without paying the charge. Some municipalities offer these tokens to residents with disabilities so that these groups are not discriminated against by the pay toilet. Some establishments such as cafés and restaurants offer tokens to their customers so they can use the toilets for free, but other users must pay the relevant charge. Examples Europe Pay toilets are especially common in Continental Europe. The Paris Métro operates coin-operated toilets in its underground stations; and even non-mechanized toilets occasionally have attendants who accept tips. In Germany, many lavatories at service stations on the Autobahn have pay toilets with turnstiles, though as in France, customers typically receive a voucher equal to the toilet fee. Elsewhere, while public toilets may not have a set fee, it is customary to provide change to restroom attendants for their services. Some service stations offer a voucher equal in value to the amount paid for use of a toilet, redeemable for other goods at that station or others in the same chain. In Eastern Europe, particularly in the former USSR, pay toilets are usually non-automatic and are like usual public toilets except that they have an attendant at the entrance to collect the money from visitors. In the United Kingdom, pay toilets tend to be common at bus and railway stations, but most public toilets are free to use. Technically, any toilets provided by local government may be subject to a charge by the provider. Pay toilets on the streets may provide men's urinals free of charge to prevent public urination. For example, in London, a few public conveniences are appearing in the form of pop-up toilets. During the daytime, these toilets are hidden beneath the streets, and only appear in the evening. The British English euphemism "to spend a penny" for "to urinate" derives from the use of a pre-decimal penny coin for pay toilet locks. Latin and South America In Argentina, pay toilets are not common. Toilets placed in public places are typically free to use, but the attendant is seated outside with a dish by his side expecting a tip from the user, often with a sign saying "Su propina es nuestro sueldo" (your tip is our salary). It is customary to give a coin or a $2 bill, especially if the toilets requiring paper are used. In Mexico, the majority of pay toilets have turnstiles and an attendant at the entrance. The attendant gives out toilet paper and sometimes a paper towel. Asia In India, Sulabh International is the major operator of pay toilets (sulabh shauchalaya). These are provided with an attendant, and the fee is 2 rupees. They provide toilet as well as bathroom facilities. They are situated in public places like bus stations and major markets, but several sulabh shauchalayas also act as community toilets in areas with poor sanitation facilities.
In Singapore, pay toilets are still common in "Hawker Centers"; the use of the toilet usually costs 10-20 cents. The fee is usually paid to an attendant behind a counter; however, certain hawker centres have a turnstile into which the coin is inserted. Sometimes toilet paper is also charged for, and given out at the entrance usually by the attendant, though most of the time there is a toilet paper holder in the cubicle (stall) itself. In some areas of Taiwan, mostly in subways, one must pay for the toilet paper, but the toilet itself is free. In Turkey pay toilets are common at bus stations and underground cities (but not single-building shopping malls), where a charge of between 5 lira and 10 lira is levied at a turnstile for entrance to the bathroom. U.S. In the United States, pay toilets became much less common from the 1970s, when they came under attack from feminists as well as from the plumbing industry. California legislator March Fong Eu argued that they discriminated against females because men and boys could use urinals for free whereas women and girls always had to pay a dime for a toilet "stall" (i.e. cubicle) in places where payment was mandatory. The American Restroom Association was a proponent of an amendment to the National Model Building Code to allow pay toilets only where there were also free toilets. A campaign by the Committee to End Pay Toilets in America (CEPTIA) resulted in laws prohibiting pay toilets in some cities and states. In 1973, Chicago became the first American city to enact a ban, at a time when, according to The Wall Street Journal, there were at least 50,000 units in America, mostly made by the Nik-O-Lok Company. CEPTIA was successful over the next few years in obtaining bans in New York, New Jersey, Minnesota, California, Florida and Ohio. Lobbying was successful in other states as well, and by the end of the decade, pay toilets were greatly reduced in America. However, they are still in use and produced by the Nik-O-Lok company; many of these laws have since been repealed, such as in Ohio. In 2007, legislators rescinded ORC Ordinance 4101:1-29-02.6.2, the ban on pay facilities, paving the way for operators to charge for public restroom use. Africa In Africa, pay toilets are particularly common in informal settlements lacking sewage systems. Of all countries, Ghana has the greatest reliance on public toilets. In Accra, lack of space makes private toilets unrealistic in low-income neighbourhoods. In Kumasi, it has been estimated that 36% of residents use pay toilets, and that "once-daily use of a public toilet by a family of four would cost between US$3.60 and $18 per month depending on the fee charged by the operator of the toilet they use." History Some of the earliest documented pay toilets were built around 74 AD in Rome. Emperor Titus Flavius Vespasianus created this method to ease the financial hardships resulting from the many wars that had been fought. This was not a popular choice with his people, and he was ridiculed for the decision, to which he reacted with the famous quote, Pecunia non olet, "Money does not stink". The Greco-Roman city of Ephesus was important in ancient times, becoming the trade centre and commercial hub of the ancient world. The Scholastica Baths were built in the 1st century AD, and contained all of the modern amenities for hygiene, including advanced public toilets with marble seats. One had to pay to enter these luxury conveniences, where one could enjoy the use of a pool, use the toilet or socialize. 
John Nevil Maskelyne, an English stage magician, invented the first modern pay toilet in the late 19th century. His door lock for London toilets required the insertion of a penny coin to operate it, hence the euphemism to "spend a penny". The first pay toilet in the United States was installed in 1910 in Terre Haute, Indiana. Cultural references Whether or not public toilets should require payment is a plot point in Noël Coward's 1949 play South Sea Bubble. Pay toilets are key to the 2001 American musical Urinetown. In the 1977 movie Smokey and the Bandit, Frog says, "I have to go 10-100, could I have a dime?", to which he replies, "Crawl under." In a 1979 episode of WKRP in Cincinnati, "Fish Story", Herb (dressed as a carp) tries to use a pay toilet at the University of Cincinnati without paying and is caught by a rival station's mascot. The 1983 Stephen King novel Pet Sematary involves a scene featuring a pay toilet and a quote that reads, "JOHN CRAPPER WAS A SEXIST PIG!", written in grease pencil on the stall. Criticism People in developing countries or on low incomes, for instance in Accra, may choose to defecate in the open or limit the number of times per day that they use a pay toilet, resulting in undesirable public health consequences. See also Committee to End Pay Toilets in America Outhouse Portable toilet Sanisette Urinal References Toilets Vending
Pay toilet
[ "Biology" ]
1,805
[ "Excretion", "Toilets" ]
1,536,124
https://en.wikipedia.org/wiki/Synagogue%20architecture
Synagogue architecture often follows styles in vogue at the place and time of construction. There is no set blueprint for synagogues and the architectural shapes and interior designs of synagogues vary greatly. According to tradition, the Shekhinah or divine presence can be found wherever there is a minyan, a quorum, of ten. A synagogue always contains a Torah ark where the Torah scrolls are kept, called the aron qodesh by Ashkenazi Jews and the hekhal by Sephardic Jews. Synagogues are buildings for congregational worship, and thus require a large central space (as do churches and mosques). They are generally designed with the Torah ark at one end, typically opposite the main entrance, and a bimah either in front of that, or more centrally placed. Raised galleries, usually for female worshipers, have been common. Beyond these points, there is little that dictates the design. Historically, synagogues were normally built in a version of the prevailing architectural style of their time and place. Thus, the synagogue in Kaifeng, China looked very like Chinese temples of that region and era, with its outer wall and open garden in which several buildings were arranged. Considerations The ark may be more or less elaborate, even a cabinet not structurally integral to the building or a portable arrangement whereby a Torah is brought into a space temporarily used for worship. There must also be a table, often on a raised platform, from which the Torah is read. The table/platform, called bimah by eastern Ashkenazim, almemmar (or balemmer) by Central and Western Ashkenazim and tebah by Sephardim, where the Torah is read (and from where the services are conducted in Sephardi synagogues), can range from an elaborate platform integral to the building (many early modern synagogues of central Europe featured bimahs with pillars that rose to support the ceiling), to elaborate free-standing raised platforms, to simple tables. A ner tamid, a constantly lit lamp, hangs as a reminder of the constantly lit menorah of the Temple in Jerusalem. Many synagogues, mainly in Ashkenazi communities, feature a pulpit facing the congregation from which to address the assembled. All synagogues require an amud (Hebrew for "post" or "column"), a desk facing the Ark from which the Hazzan (reader, or prayer leader) leads the prayers. A synagogue may or may not have artwork; synagogues range from simple, unadorned prayer rooms to elaborately decorated buildings in every architectural style. The synagogue, or if it is a multi-purpose building, prayer sanctuaries within the synagogue, are typically designed to have their congregation face towards Jerusalem. Thus sanctuaries in the Western world generally have their congregation face east, while those east of Israel have their congregation face west. Congregations of sanctuaries in Israel face towards Jerusalem. But this orientation need not be exact, and occasionally synagogues face other directions for structural reasons, in which case the community may face Jerusalem when standing for prayers. History The styles of the earliest synagogues resembled the temples of other sects of the eastern Roman Empire. The synagogues of Morocco are embellished with the colored tilework characteristic of Moroccan architecture. The surviving medieval synagogues in Budapest, Prague and the German lands are typical Gothic structures.
For much of history, the constraints of antisemitism and the laws of host countries restricting the building of synagogues visible from the street, or forbidding their construction altogether, meant that synagogues were often built within existing buildings, or opening from interior courtyards. In both Europe and in the Muslim world, old synagogues with elaborate interior architecture can be found hidden within nondescript buildings. Where the building of synagogues was permitted, they were built in the prevailing architectural style of the time and place. Many European cities had elaborate Renaissance synagogues, of which a few survive. In Italy, there were many synagogues in the style of the Italian Renaissance (see Leghorn; Padua; and Venice). With the coming of the Baroque era, Baroque synagogues appeared across Europe. The emancipation of Jews in European countries and of Jews in Muslim countries colonized by European countries gave Jews the right to build large, elaborate synagogues visible from the public street. Synagogue architecture blossomed. Large Jewish communities wished to show not only their wealth but also their newly acquired status as citizens by constructing magnificent synagogues. Handsome nineteenth-century synagogues from the period of Jewish emancipation stand in virtually every country where there were Jewish communities. Most were built in revival styles then in fashion, such as Neoclassical, Neo-Byzantine, Romanesque Revival, Moorish Revival, Gothic Revival, and Greek Revival. There are Egyptian Revival synagogues and even one Mayan Revival synagogue. In the nineteenth and early twentieth century heyday of historicist architecture, however, most historicist synagogues, even the most magnificent ones, did not attempt a pure style, or even any particular style, and are best described as eclectic. Chabad Lubavitch has made a practice of designing some of its Chabad Houses and centers as replicas of or homages to the architecture of 770 Eastern Parkway. Central Europe: Polish–Lithuanian Commonwealth The great exceptions to the rule that synagogues are built in the prevailing style of their time and place are the wooden synagogues of the Polish–Lithuanian Commonwealth and two forms of masonry synagogues: synagogues with bimah-support and nine-field synagogues (the latter not totally confined to synagogues). Wooden synagogues The wooden synagogues were a unique Jewish artistic and architectural form. Characteristic features include the independence of the pitched roof from the design of the interior domed ceiling. They had elaborately carved, painted, domed, balconied and vaulted interiors. The architectural interest of the exterior lay in the large scale of the buildings, the multiple, horizontal lines of the tiered roofs, and the carved corbels that supported them. Wooden synagogues featured a single, large hall. In contrast to contemporary churches, there was no apse. Moreover, while contemporary churches featured imposing vestibules, the entry porches of the wooden synagogues were low annexes, usually with simple lean-to roofs. In these synagogues, the emphasis was on constructing a single, large, high-domed worship space. According to art historian Stephen S. Kayser, the wooden synagogues of Poland with their painted and carved interiors were "a truly original and organic manifestation of artistic expression—the only real Jewish folk art in history."
According to Louis Lozowick, writing in 1947, the wooden synagogues were unique because, unlike all previous synagogues, they were not built in the architectural style of their region and era, but in a newly evolved and uniquely Jewish style, making them "a truly original folk expression," whose "originality does not lie alone in the exterior architecture, it lies equally in the beautiful and intricate wood carving of the interior." Moreover, while in many parts of the world Jews were proscribed from entering the building trades and even from practicing the decorative arts of painting and woodcarving, the wooden synagogues were actually built by Jewish craftsmen. Art historian Ori Z. Soltes points out that the wooden synagogues, unusual for that period in being large, identifiably Jewish buildings not hidden in courtyards or behind walls, were built not only during a Jewish "intellectual golden age" but in a time and place where "the local Jewish population was equal to or even greater than the Christian population." Synagogues with bimah-support In the second half of the 16th century, masonry synagogues whose interiors present an original structural solution, found in no other kind of building, were constructed in the Polish–Lithuanian Commonwealth. These were synagogue halls whose bimah was surrounded by four pillars. Placed upon a podium and connected above by arcading into one powerful pier, the pillars constituted the bimah-support (or bimah-tower) supporting the vault, consisting of four barrels with lunettes intersecting at the corners. The bases of the vault-ribs rested on the podium or were transmitted through a balustrade, solid or pierced. A small cupola covered the field above the bimah. These cupolas were occasionally significantly lowered in comparison with the remaining fields of vaulting. Thus a kind of inner chapel, built inside the bimah-tower, was created. One of the first synagogues with a bimah-support was the Old Synagogue (Przemyśl), which was destroyed during World War II. Synagogues with a bimah-tower were built up to the 19th century and the concept was adopted in various Central European countries. Nine-field synagogues Around the beginning of the 1630s, the first synagogues with nine-field vaulting were constructed. This design has a set of four large columns or piers placed squarely in a rectangular central space, supporting three rows of three vaults on the ceiling. They allowed for much greater halls than hitherto and were also called nine-bay synagogues. The Great Suburb Synagogue in Lviv and the synagogue in Ostroh were erected virtually at the same time (1625 and 1627). In these halls the vaulting rested on four tall pillars and on corresponding wall pilasters. The columns and the pilasters were situated at equal spacing, dividing the roof-area into nine equal fields. In these synagogues the bimah was a free-standing podium or a bower situated within the central field between the pillars. Egyptian Revival Egyptian Revival style synagogues were popular in the early nineteenth century. Rachel Wischnitzer argues that they were part of the fashion for Egyptian style inspired by Napoleon's invasion of Egypt. According to Carol Herselle Krinsky, they were meant as imitations of the Temple of Solomon and intended by architects and governments to insult Jews by portraying Judaism as a primitive faith. According to Diana Muir Appelbaum, they were expressions of Jewish identity intended to advertise Jewish origins in ancient Israel.
Moorish influence In medieval Spain (both Al-Andalus and the Christian kingdoms), a host of synagogues were built, and it was usual to commission them from Moorish and later Mudéjar architects. Very few of these medieval synagogues, built with Moorish techniques and style, are conserved. The two best-known Spanish synagogues are in Toledo, one known as El Tránsito, the other as Santa María la Blanca, and are now preserved as national monuments. The former is a small building containing very rich decorations; the latter is especially noteworthy. It is based upon Almohad style and contains long rows of octagonal columns with curiously carved capitals, from which spring Moorish arches supporting the roof. Another significant Mudéjar synagogue is the one at Córdoba built in 1315. As in El Tránsito, the vegetal and geometrical stucco decorations are purely Moorish, but unlike the former, the epigraphic texts are in Hebrew. After the expulsion from Spain there was a general feeling among wealthy Sephardim that Moorish architecture was appropriate in synagogues. By the mid-19th century, the style was adopted by the Ashkenazim of Central and Eastern Europe, who associated Moorish and Mudéjar architectural forms with the golden age of Jewry in Al-Andalus. As a consequence, Moorish Revival spread around the globe as a preferred style of synagogue architecture, although Moorish architecture is by no means Jewish, either in fact or in feeling. The Alhambra has furnished inspiration for innumerable synagogues, but seldom have its graceful proportions or its delicate modeling and elaborate ornamentation been successfully copied. Although the Moorish style, when adopted by the Ashkenazim, was believed to be a reference to the Golden Age of Spanish Jewry, that was not the primary intention of the Jews and architects who chose to build in it. Rather, the choice to use the Moorish style was reflective of pride in their Semitic or oriental heritage. This pride in their heritage and understanding of Jews as "semitic" or "oriental" led architects like Gottfried Semper (Semper Synagogue, Dresden, Germany) and Ludwig Förster (Tempelgasse or Leopoldstädter Tempel, Vienna, Austria, and Dohány Street Synagogue, Budapest, Hungary) to build their synagogues in the Moorish style. The Moorish style remained a popular choice for synagogues throughout the rest of the 19th and early 20th century. Modern synagogue architecture In the modern period, synagogues have continued to be built in every popular architectural style, including Art Nouveau, Art Deco, the International style, and all contemporary styles. The post-World War II years brought "a period of post-war modernism," characterized by "assertive architectural gestures that had the strength and integrity to stand alone, without applied artwork or Jewish iconography." A notable work of Art Nouveau, pre–World War I Hungarian synagogue architecture is Budapest's Kazinczy Street Synagogue. In the UK, synagogues built in the early 1960s, such as that at Carmel College (Oxfordshire), designed by the British architect Thomas Hancock, were decorated with the stained-glass windows of the Israeli artist Nehemia Azaz. The stained-glass windows were praised by art and architecture scholar Nikolaus Pevsner as using "extraordinary technique with rough pieces of coloured glass like crystals" and by Historic England as "brilliant and innovative artistic glass".
The interior The most common general plan for the interior of the synagogue is an Ark at the eastern end opposite the entrance, and with an almemar or pulpit. In older or Orthodox synagogues with separate seating, there may be benches for the men on either side, and a women's gallery reached by staircases from the outer vestibule. Variations of this simple plan abound: the vestibule became larger, and the staircases to the women's gallery were separated from the vestibule and given more importance. As the buildings became larger, rows of columns were required to support the roof, but in every case the basilican form was retained. The Ark, formerly allowed a mere niche in the wall, was developed into the main architectural feature of the interior, and was flanked with columns, covered with a canopy and richly decorated. The almemar in many cases was joined to the platform in front of the Ark, and elaborate arrangements of steps were provided. The Ark The Torah Ark (usually called Aron Hakodesh or Hekhál) is the most important feature of the interior, and is generally dignified by proper decoration and raised upon a suitable platform, reached by at least three steps, but often by more. It is usually crowned by the tablets of the Law and houses the Torah scrolls. The position of the pulpit varies; it may be placed on either side of the Ark and is occasionally found in the center of the steps. Other interior arrangements The modern synagogue, besides containing the minister's study, trustees' rooms, choir-rooms, and organ-loft, devotes much space to school purposes; generally, the entire lower floor is used for classrooms. The interior treatment of the synagogue allows great latitude in design. For the thirty-three synagogues of India, American architect and professor of architecture Jay A. Waronker has learned that these buildings tend to follow the Sephardic traditions of the tevah (or bimah, the raised platform where the service is led and the Torah read) being freestanding and roughly in the middle of the sanctuary and the ark (called the hekhal by Sephardim and the aron ha-kodesh by Ashkenazim) engaged along the wall that is closest to Jerusalem. The hekhals are essentially cabinets or armoires storing the sefer Torahs. Seating, in the form of long wooden benches, is grouped around and facing the tevah. Men sit together on the main level of the sanctuary while women sit in a dedicated zone on the same level in the smaller synagogues or upstairs in a women's gallery. Interesting architectural and planning exceptions to this common Sephardic formula are the Cochin synagogues in Kerala of far southwestern India. Here, on the gallery level and adjacent to the space provided for women and overlooking the sanctuary below, is a second tevah. This tevah was used for holidays and unique occasions. It is therefore interesting that on these more special occasions, the women are closest to the point where the religious service is being led. In Baghdadi synagogues of India, the hekhals appear to be standard-sized cabinets from the outside (the side facing the sanctuary), but when opened a very large space is revealed. They are essentially walk-in rooms with a perimeter shelf holding up to one hundred sefer Torahs. Interior decoration There are but few emblems that may be used that are characteristically Jewish; the Star of David, the lion of Judah, and flower and fruit forms alone are generally allowable in Orthodox synagogues. The ner tamid hangs in front of the Ark; the tables of the Law surmount it.
The seven-branched candlestick, or menorah, may be placed at the sides. Occasionally the shofar, and even the lulav, may be utilized in the design. Hebrew inscriptions are used sparingly; stained-glass windows, at one time considered the special property of the Church, are now employed, but figured subjects are not used. Gallery See also Jewish architecture List of Jewish architects Oldest synagogues in the world References Further reading de Breffny, Brian, The Synagogue, Macmillan, 1st American ed., 1978. Goldman, Bernard, The Sacred Portal: a primary symbol in ancient Judaic art, Detroit: Wayne State University Press, 1966. Carol Herselle Krinsky, Synagogues of Europe; Architecture, History, Meaning, MIT Press, 1985; revised edition, MIT Press, 1986; Dover reprint, 1996. Stolzman, Henry & Daniel Stolzman (2004). Tami Hausman, Ed. Synagogue Architecture in America: Faith, Spirit, and Identity. Images Publishing. Rachel Wischnitzer, Synagogue Architecture in the United States, Jewish Publication Society of America, 1955. Rachel Wischnitzer, Architecture of the European Synagogue, Jewish Publication Society, 1964. External links Synagogue Architecture, Jewish Encyclopedia Early Synagogue Architecture, My Jewish Learning American Synagogue Architecture Slideshow of Contemporary American Synagogue Sanctuary Architectural Elements Sacral architecture
Synagogue architecture
[ "Engineering" ]
3,770
[ "Sacral architecture", "Architecture" ]
1,536,137
https://en.wikipedia.org/wiki/Ammonium%20sulfate
Ammonium sulfate (American English and international scientific usage; ammonium sulphate in British English), (NH4)2SO4, is an inorganic salt with a number of commercial uses. The most common use is as a soil fertilizer. It contains 21% nitrogen and 24% sulfur. Uses Agriculture The primary use of ammonium sulfate is as a fertilizer for alkaline soils. In the soil, the ammonium ion is released and forms a small amount of acid, lowering the pH balance of the soil, while contributing essential nitrogen for plant growth. One disadvantage to the use of ammonium sulfate is its low nitrogen content relative to ammonium nitrate, which elevates transportation costs. It is also used as an agricultural spray adjuvant for water-soluble insecticides, herbicides, and fungicides. There, it functions to bind iron and calcium cations that are present in both well water and plant cells. It is particularly effective as an adjuvant for 2,4-D (amine), glyphosate, and glufosinate herbicides. Laboratory use Ammonium sulfate precipitation is a common method for protein purification by precipitation. As the ionic strength of a solution increases, the solubility of proteins in that solution decreases. Being extremely soluble in water, ammonium sulfate can "salt out" (precipitate) proteins from aqueous solutions. Precipitation by ammonium sulfate is a result of a reduction in solubility rather than protein denaturation, thus the precipitated protein can be resolubilized through the use of standard buffers. Ammonium sulfate precipitation provides a convenient and simple means to fractionate complex protein mixtures. In the analysis of rubber latices, volatile fatty acids are analyzed by precipitating rubber with a 35% ammonium sulfate solution, which leaves a clear liquid from which volatile fatty acids are regenerated with sulfuric acid and then distilled with steam. Selective precipitation with ammonium sulfate, opposite to the usual precipitation technique which uses acetic acid, does not interfere with the determination of volatile fatty acids. Food additive As a food additive, ammonium sulfate is considered generally recognized as safe (GRAS) by the U.S. Food and Drug Administration, and in the European Union it is designated by the E number E517. It is used as an acidity regulator in flours and breads. Other uses Ammonium sulfate is a precursor to other ammonium salts, especially ammonium persulfate. Ammonium sulfate is listed as an ingredient for many United States vaccines per the Centers for Disease Control. Ammonium sulfate has also been used in flame retardant compositions, acting much like diammonium phosphate. As a flame retardant, it increases the combustion temperature of the material, decreases maximum weight loss rates, and causes an increase in the production of residue or char. Preparation Ammonium sulfate is made by treating ammonia with sulfuric acid: 2 NH3 + H2SO4 → (NH4)2SO4. A mixture of ammonia gas and water vapor is introduced into a reactor that contains a saturated solution of ammonium sulfate and about 2% to 4% of free sulfuric acid at 60 °C. Concentrated sulfuric acid is added to keep the solution acidic, and to retain its level of free acid. The heat of reaction keeps the reactor temperature at 60 °C. Dry, powdered ammonium sulfate may be formed by spraying sulfuric acid into a reaction chamber filled with ammonia gas. The heat of reaction evaporates all water present in the system, forming a powdery salt. Approximately 6 million tons were produced in 1981.
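The nutrient percentages quoted above follow directly from the molar masses, as this minimal Python check shows (the atomic masses are standard rounded values):

```python
# Standard atomic masses in g/mol (rounded).
N, H, S, O = 14.007, 1.008, 32.06, 15.999

molar_mass = 2 * (N + 4 * H) + S + 4 * O   # (NH4)2SO4, about 132.13 g/mol
print(f"nitrogen fraction: {2 * N / molar_mass:.1%}")  # ~21.2%
print(f"sulfur fraction:   {S / molar_mass:.1%}")      # ~24.3%
```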
Ammonium sulfate also is manufactured from gypsum (CaSO4·2H2O): finely divided gypsum is added to an ammonium carbonate solution, calcium carbonate precipitates as a solid, and ammonium sulfate is left in the solution. Ammonium sulfate occurs naturally as the rare mineral mascagnite in volcanic fumaroles and due to coal fires on some dumps. Ammonium sulfate is a byproduct in the production of methyl methacrylate. Properties Ammonium sulfate becomes ferroelectric at temperatures below −49.5 °C. At room temperature it crystallises in the orthorhombic system, with cell parameters a = 7.729 Å, b = 10.560 Å, c = 5.951 Å. When chilled into the ferroelectric state, the symmetry of the crystal changes to space group Pna21. Reactions Ammonium sulfate decomposes upon heating above about 250 °C, first forming ammonium bisulfate. Heating at higher temperatures results in decomposition into ammonia, nitrogen, sulfur dioxide, and water. As a salt of a strong acid (H2SO4) and a weak base (NH3), its solution is acidic; the pH of a 0.1 M solution is 5.5. In aqueous solution the reactions are those of the NH4+ and SO42− ions. For example, addition of barium chloride precipitates out barium sulfate, and the filtrate on evaporation yields ammonium chloride. Ammonium sulfate forms many double salts (ammonium metal sulfates) when its solution is mixed with equimolar solutions of metal sulfates and the solution is slowly evaporated. With trivalent metal ions, alums such as ferric ammonium sulfate are formed. Double metal sulfates include ammonium cobaltous sulfate, ferrous diammonium sulfate and ammonium nickel sulfate, which are known as Tutton's salts, and ammonium ceric sulfate. Anhydrous double sulfates of ammonium also occur in the langbeinite family. The ammonia produced has a pungent smell and is toxic. Airborne particles of evaporated ammonium sulfate comprise approximately 30% of fine particulate pollution worldwide. Ammonium sulfate reacts with additional sulfuric acid to give triammonium hydrogen disulfate, (NH4)3H(SO4)2. Legislation and control In November 2009, a ban on ammonium sulfate, ammonium nitrate and calcium ammonium nitrate fertilizers was imposed in the former Malakand Division of Pakistan's North West Frontier Province (NWFP), comprising the Upper Dir, Lower Dir, Swat, Chitral and Malakand districts, by the NWFP government, following reports that they were used by militants to make explosives. In January 2010, these substances were also banned in Afghanistan for the same reason. See also Ammonium sulfate precipitation References Further reading Properties: UNIDO and International Fertilizer Development Center (1998), Fertilizer Manual, Kluwer Academic Publishers. External links Calculators: surface tensions, and densities, molarities and molalities of aqueous ammonium sulfate Ammonium compounds Sulfates Fire suppression agents Inorganic fertilizers Food additives Food stabilizers E-number additives
Ammonium sulfate
[ "Chemistry" ]
1,394
[ "Sulfates", "Ammonium compounds", "Salts" ]
1,536,216
https://en.wikipedia.org/wiki/Teletraffic%20engineering
Teletraffic engineering, or telecommunications traffic engineering, is the application of transportation traffic engineering theory to telecommunications. Teletraffic engineers use their knowledge of statistics, including queueing theory, the nature of traffic, practical models, measurements, and simulations to make predictions and to plan telecommunication networks such as a telephone network or the Internet. These tools and this knowledge help provide reliable service at lower cost. The field was created by the work of A. K. Erlang for circuit-switched networks but is applicable to packet-switched networks as well, since both exhibit Markovian properties and can hence be modeled by, for example, a Poisson arrival process. The crucial observation in traffic engineering is that in large systems the law of large numbers can be used to make the aggregate properties of a system over a long period of time much more predictable than the behaviour of individual parts of the system. In PSTN architectures The measurement of traffic in a public switched telephone network (PSTN) allows network operators to determine and maintain the quality of service (QoS) and in particular the grade of service (GoS) that they promise their subscribers. The performance of a network depends on whether all origin-destination pairs are receiving a satisfactory service. Networks are handled either as loss systems, where calls that cannot be handled are given an equipment busy tone, or as queueing systems, where calls that cannot be handled immediately are queued. Congestion is the situation in which exchanges or circuit groups are inundated with calls and are unable to serve all the subscribers. Special attention must be given to ensure that such high-loss situations do not arise. To help determine the probability of congestion occurring, operators can use the Erlang formulas or the Engset calculation (a minimal sketch of the Erlang B computation follows below). Exchanges in the PSTN make use of trunking concepts to help minimize the cost of the equipment to the operator. Modern switches generally have full availability and do not make use of grading concepts. Overflow systems make use of alternative routing circuit groups or paths to transfer excess traffic and thereby reduce the possibility of congestion. A very important component in PSTNs is the SS7 network used to route signalling traffic. As a supporting network, it carries all the signalling messages necessary to set up, break down or provide extra services. The signalling enables the PSTN to control the manner in which traffic is routed from one location to another. Transmission and switching of calls is performed using the principle of time-division multiplexing (TDM). TDM allows multiple calls to be transmitted along the same physical path, reducing the cost of infrastructure. In call centers A good example of the use of teletraffic theory in practice is in the design and management of a call center. Call centers use teletraffic theory to increase the efficiency of their services and overall profitability by calculating how many operators are really needed at each time of the day. Queueing systems used in call centers have been studied as a science: for example, incoming calls are put on hold and queued until they can be served by an operator. If callers are made to wait too long, they may lose patience and default from the queue (hang up), resulting in no service being provided. 
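To make the Erlang loss calculation mentioned above concrete, here is a minimal sketch (the function and variable names are the author's own, not from any standard library) of the Erlang B blocking probability, computed with the standard numerically stable recurrence B(0) = 1, B(n) = A·B(n−1) / (n + A·B(n−1)):

```python
def erlang_b(offered_erlangs: float, circuits: int) -> float:
    """Blocking probability of an M/M/N/N loss system (Erlang B).

    The recurrence avoids the huge factorials of the closed-form
    expression, so it stays accurate for hundreds of circuits.
    """
    blocking = 1.0  # B(0): with zero circuits, every call is blocked
    for n in range(1, circuits + 1):
        blocking = (offered_erlangs * blocking) / (n + offered_erlangs * blocking)
    return blocking

# Illustrative use: the fraction of calls lost when 10 erlangs of
# traffic are offered to a group of 15 circuits.
print(erlang_b(10.0, 15))
```

Dimensioning a trunk group then amounts to increasing the number of circuits until the blocking probability drops below the grade of service promised to subscribers.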
In broadband networks Teletraffic engineering is a well-understood discipline in the traditional voice network, where traffic patterns are established, growth rates can be predicted, and vast amounts of detailed historical data are available for analysis. However, in modern broadband networks, the teletraffic engineering methodologies used for voice networks are inappropriate. Long-tail traffic Of great importance is the possibility that extremely infrequent occurrences are more likely than anticipated. This situation is known as long-tail traffic. In some designs, the network might be required to withstand the unanticipated traffic. Teletraffic economics and forecasting As mentioned in the introduction, the purpose of teletraffic theory is to reduce cost in telecommunications networks. An important tool in achieving this goal is forecasting. Forecasting allows network operators to calculate the potential cost of a new network or service for a given QoS during the planning and design stage, thereby ensuring that costs are kept to a minimum. An important method used in forecasting is simulation, which is described as the most common quantitative modelling technique in use today. An important reason for this is that computing power has become far more accessible, making simulation the preferred analytical method for problems that are not easily solved mathematically. As in any business environment, network operators must charge tariffs for their services. These charges must be balanced with the supplied QoS. When operators supply services internationally, this is described as trade in services and is governed by the General Agreement on Trade in Services (GATS). See also Asynchronous Transfer Mode Busy hour call attempts Cellular traffic Erlang (unit) Flow control (disambiguation) Long-tail traffic Mobile QoS Routing RSVP-TE Traffic mix Traffic generation model Traffic contract Traffic shaping References External links "Deploying IP and MPLS QoS for Multiservice Networks: Theory and Practice" by John Evans, Clarence Filsfils (Morgan Kaufmann, 2007) V. B. Iversen, Teletraffic Engineering Handbook M. Zukerman, Introduction to Queueing Theory and Stochastic Teletraffic Models (PDF) Queueing theory Broadband Telecommunications
Teletraffic engineering
[ "Technology" ]
1,114
[ "Information and communications technology", "Telecommunications" ]
1,536,236
https://en.wikipedia.org/wiki/Nonconformist%20register
A Nonconformist register is broadly similar to a parish register, but deriving from a nonconformist church or chapel. Nonconformist churches do not conform to the doctrines of the Church of England. In other words, these Protestant churches dissent from the established church. Examples include the Baptist, Methodist, Presbyterian, and Unitarian denominations, and the Quakers (formally, the Society of Friends). Following the Marriage Act 1753, all English and Welsh marriages (except those of Quakers and Jews) had to take place in a Church of England parish church. However, any baptisms and burials (or equivalent ceremonies) from other denominations might take place within their own churches and chapels, and these were often recorded in their own nonconformist registers. Nevertheless, it is worth remembering that there was no legal obligation for them to record any such events. A significant number of early nonconformist chapels never maintained any such registers, or they maintained them only sporadically. In earlier centuries such omissions might sometimes be partly due to fear of persecution. Occasionally marriages in places of worship elsewhere might also be recorded (sometimes involving more than one ceremony), although such entries originally had no strict legal status. Registers of baptisms, marriages and burials of many nonconformist churches were collected and validated by the British government in 1837. These may be viewed at the Public Record Office in series RG 4. This followed long pressure for such unofficial registers to be given a measure of legal recognition. It had already resulted in an earlier system of limited registration for dissenters being established at Dr Williams's Library in London. Some local chapels promptly abandoned keeping their own registers, at least for a while, after this date (which coincided with the start of civil registration in England and Wales). However a second tranche of nonconformist registers was transferred to London after 1857, following a further report by a government commission. After the Marriage Act 1836 marriages could take place in many other licensed nonconformist chapels, provided that the local Superintendent Registrar was in attendance. Eventually the Marriage Act 1898 enabled some of these chapels to dispense with this requirement, provided that they designated an Appointed Person (usually the minister or priest), who would be responsible for maintaining an official marriage register. Many nonconformist registers have now been deposited in approved repositories, such as the local county record office. However different churches operate different policies, and it will often be found that rates of creation and survival for such records are less good than for other types of parish register. A number of such registers have also started to appear online. Sources Genealogy Register
Nonconformist register
[ "Biology" ]
536
[ "Phylogenetics", "Genealogy" ]
1,536,483
https://en.wikipedia.org/wiki/Connection-Oriented%20Network%20Service
Connection-Oriented Network Service (CONS) is one of the two Open Systems Interconnection (OSI) network-layer protocols, the other being Connectionless-mode Network Service (CLNS). It is essentially X.25 with a few adjustments. Protocols providing CONS Some protocols that provide the CONS service: X.25, as specified in ITU-T Recommendation X.223, is a public data network protocol that provides the Connection Oriented Network Service as described in ITU-T Recommendation X.213. Signalling Connection Control Part (SCCP), as specified in ITU-T Recommendation Q.711, is a Signaling System 7 protocol that provides the Connection Oriented Network Service as described in ITU-T Recommendation X.213. Service Specific Connection Oriented Protocol (SSCOP), as specified in ITU-T Recommendation Q.2110, is an Asynchronous Transfer Mode protocol that provides the Connection Oriented Network Service as described in ITU-T Recommendation X.213. OSI protocols Network layer protocols
Connection-Oriented Network Service
[ "Technology" ]
214
[ "Computing stubs", "Computer network stubs" ]
1,536,553
https://en.wikipedia.org/wiki/Main%20stage
Main stage or mainstage refers to the largest or most prestigious space in a theatre building and to the productions performed in that space. Mainstage theatre has historically been distinguished from smaller-scale studio theatre. It is usually performed in a proscenium theatre or on a thrust stage. Main stage is also used to describe the performance space with the largest audience capacity at a performing arts festival or other venue. Historical usage In the 19th and early 20th centuries almost all theatres were built on the proscenium model. With the growth of studio theatres from the 1920s and their increasing adoption by traditional theatres as an ancillary space for smaller productions, theatrical management began to differentiate between its "main theatre" and "studio theatre". The concept of the main theatre became unattractive to those members of the profession working on large-scale events and to others who felt that it was a diminishing part of modern theatre. The phrase "main theatre" also lacked significance for institutions that had only a single traditional stage. By the end of the 20th century the term "main stage" was well established as a description of traditional western theatres and the productions performed in them. Modern usage Music festivals A music festival is a festival oriented towards music that is sometimes presented with a theme such as musical genre, nationality or locality of musicians, or holiday. They are commonly held outdoors and often include other attractions such as food and merchandise vending, performance art, and social activities. Large music festivals such as Lollapalooza are constructed around well-known main stage acts, with lesser-known musicians and bands on side stages. Many festivals are annual, or repeat at some other interval, and have modular staging of many types; Lollapalooza, for example, features multiple acts on its main and side stages each year. Strip clubs In strip clubs, the main stage is where the currently featured performer dances as part of a rotation. In most clubs the main stage is a dominant feature of the layout. During each set of one or more songs, the current performer dances on stage in exchange for tips. Dancers collect tips from customers either while on stage or after the dancer has finished a stage show and is mingling with the audience. A customary tip (where customers can tip at the stage) is a dollar bill folded lengthwise and placed in the dancer's garter from the tip rail. The area of the tip rail is equivalent to the apron in traditional theatre. The most common type of strip club main stage is the thrust stage, but the other major forms are also used regularly; theatre in the round is also a popular form of staging for a strip club's main stage. References Parts of a theatre
Main stage
[ "Technology" ]
546
[ "Parts of a theatre", "Components" ]
1,536,554
https://en.wikipedia.org/wiki/Aiguille%20du%20Midi
The Aiguille du Midi ("Needle at midday") is a mountain in the Mont Blanc massif within the French Alps. It is a popular tourist destination and can be accessed directly by a cable car from Chamonix that takes visitors close to Mont Blanc. Cable car The idea for a cable car to the summit, the Téléphérique de l'Aiguille du Midi, was originally proposed around 1909, but the line did not come into operation until 1955, when it took the title of the world's highest cable car, which it held for about two decades. It still holds the record as the highest vertical-ascent cable car in the world. There are two sections: from Chamonix to Plan de l'Aiguille, and then directly, without any support pillar, to the upper station at 3,777 m (the building contains an elevator to the summit). Measured directly along the cable, the second section remains among the longest cable-car spans in the world; the horizontal distance covered is somewhat shorter. The cable car travels from Chamonix to the top of the Aiguille du Midi, an altitude gain of over 2,700 m, in 20 minutes, costing around €75 for an adult ticket from Chamonix and back. There is also cable car access to a nearby peak on the Italian side via the Skyway Monte Bianco, a lift with a vertical rise of 2,166 m, and a cable car from that peak to the Aiguille du Midi; this link is only open in the summer. Summit At the mountain's summit there is a panoramic viewing platform, a snack bar, a café, a restaurant, and a gift shop. Even in summer, temperatures in the open viewing areas can fall well below freezing, and visitors require both warm clothing and protection from very bright sunlight. Because of the danger, tourists are unable to leave the visitor facilities on the Midi's summit; however, mountaineers and skiers can pass through a tunnel to reach the steep and extremely exposed ice ridge that descends to the glacier below. In December 2013, a glass skywalk called "Step into the Void" opened at the top of the Aiguille du Midi peak. The view is straight down, and one can see Mont Blanc to the south. A further tourist attraction called "Le Tube" opened in 2016. It consists of an enclosed tubular walkway that completely circles the summit. During summer months only, the Vallée Blanche Cable Car crosses "peak-to-peak" from the Aiguille du Midi to Pointe Helbronner on the Italian side of the Mont Blanc massif. Pointe Helbronner is served by another cable car, the Skyway Monte Bianco, to Entrèves, near the Italian town of Courmayeur in the Aosta Valley. This makes it possible to travel "by air" from Chamonix, France to Courmayeur, Italy, a route normally traversed by the highway running through the Mont Blanc Tunnel. Mountaineering Several routes for fit, experienced mountaineers either start or finish at the Aiguille du Midi, although the nearby Cosmiques Refuge is the best starting point for the longer routes: Arête des Cosmiques (also known as the Cosmiques Ridge) is a short, mixed rock and ice route that can be reached from the Midi station and which returns there in an unusual manner, exiting from the top of a ladder onto the Midi station viewing platform. It is very popular, and therefore busy, and is often used as an alpine training climb as it requires all-round mountaineering skills. Graded at PD+ to AD, the round trip can easily be completed from Chamonix in one day. First ascent by George and Maxwell Finch on 2 August 1911. 
Midi-Plan traverse: a traverse from the Aiguille du Midi to the Aiguille du Plan, either returning to the cable car or descending the Mer de Glace from the 'Plan' to the Requin Hut or continuing to Montenvers. Mont Blanc du Tacul: usually done from the Cosmiques Hut, although fit parties sometimes take the first morning téléphérique and descend from the 'Midi'. Alternatively, it can be combined with a return to the cable car station via an ascent of the Cosmiques ridge. The Traverse of Mont Blanc, also known in French as La Voie des Trois Monts or simply La Traversée, is a long route, graded PD+, which starts from the Cosmiques Hut. This popular route is less exposed to danger than the Goûter Route, but under certain conditions both Mont Blanc du Tacul and Mont Maudit can develop slopes with very high avalanche risk. The Vallée Blanche ski run is a long, unmarked off-piste ski route which begins very steeply from the Aiguille du Midi station and, because of its complexity across crevassed glaciated terrain and the need for route-finding, is best undertaken with a mountain guide. Gallery See also List of mountains of the Alps above 3000 m References External links Alpine three-thousanders Mountains of the Alps Vertical transport devices Mountains of Haute-Savoie Mont Blanc massif Tourist attractions in Haute-Savoie Chamonix
Aiguille du Midi
[ "Technology" ]
1,061
[ "Vertical transport devices", "Transport systems" ]
1,536,701
https://en.wikipedia.org/wiki/Evacuation%20simulation
Evacuation simulation is a method of determining evacuation times for areas, buildings, or vessels. It is based on the simulation of crowd dynamics and pedestrian motion. The number of evacuation software tools has increased dramatically in the last 25 years, and a similar trend has been observed in the number of scientific papers published on the subject. One recent survey indicates the existence of over 70 pedestrian evacuation models. Today there are two conferences dedicated to this subject: "Pedestrian Evacuation Dynamics" and "Human Behavior in Fire". The distinction between buildings, ships, and vessels on the one hand and settlements and areas on the other is important for the simulation of evacuation processes. In the case of the evacuation of a whole district, the transport phase (see emergency evacuation) is usually covered by queueing models (see below). Pedestrian evacuation simulations are popular in the fire safety design of buildings when a performance-based approach is used. Simulations are not primarily methods for optimization: to optimize the geometry of a building or the procedure with respect to evacuation time, a target function has to be specified and minimized, and accordingly one or several variables must be identified which are subject to variation. Classification of models Modelling approaches in the field of evacuation simulation: Cellular automata: discrete, microscopic models, where the pedestrian is represented by a cell state. Both static and dynamic floor fields (i.e., distance maps) are used to navigate agents toward exits, moving from a cell to adjacent cells, which can have different shapes. There are models for ship evacuation processes, for bi-directional pedestrian flows, and general models with bionics aspects. Agent-based models: microscopic models, where the pedestrian is represented by an agent. The agents can have human attributes besides their coordinates, and their behavior can be stochastic. There are general models with spatial aspects of pedestrian steps. Social force model: a continuous, microscopic model based on equations from physics (a sketch follows below). Queuing models: macroscopic models based on a graph representation of the geometry; the movement of persons is represented as a flow on this graph. Particle swarm optimization models: microscopic models based on a fitness function which minimizes some properties of the evacuation (distance between pedestrians, distance between pedestrians and exits). Fluid-dynamic models: continuous, macroscopic models, where large crowds are modeled with coupled, nonlinear, partial differential equations. 
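As an illustration of the social force approach listed above, here is a minimal sketch in Python/NumPy. It is not a calibrated implementation: the parameter values are illustrative rather than Helbing's published constants, all agents share a single exit, and wall and body-contact forces are omitted.

```python
import numpy as np

def social_force_step(pos, vel, exit_pos, dt=0.05, tau=0.5, v0=1.3, A=2.0, B=0.3):
    """Advance a bare-bones social force model by one Euler step.

    pos, vel : (N, 2) arrays of pedestrian positions [m] and velocities [m/s]
    exit_pos : (2,) array, a single shared exit (a simplification)
    tau      : relaxation time towards the desired velocity [s]
    v0       : desired walking speed [m/s]
    A, B     : illustrative strength and range of pairwise repulsion
    """
    # Driving term: relax towards the desired velocity aimed at the exit.
    to_exit = exit_pos - pos
    dist_exit = np.linalg.norm(to_exit, axis=1, keepdims=True)
    desired_vel = v0 * to_exit / np.maximum(dist_exit, 1e-9)
    force = (desired_vel - vel) / tau

    # Repulsion: exponentially decaying pairwise push between pedestrians.
    diff = pos[:, None, :] - pos[None, :, :]   # (N, N, 2) separation vectors
    dist = np.linalg.norm(diff, axis=2)        # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)             # no self-repulsion
    force += np.sum(A * np.exp(-dist / B)[:, :, None] * diff / dist[:, :, None],
                    axis=1)

    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel

# Illustrative run: 50 pedestrians drifting towards a door at (10, 5).
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(50, 2))
vel = np.zeros((50, 2))
for _ in range(400):
    pos, vel = social_force_step(pos, vel, np.array([10.0, 5.0]))
```

An evacuation time estimate then follows from counting how many steps each agent needs to reach the exit region; production models add calibrated constants, obstacle forces, and validation against experimental data.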
Simulation of evacuations Buildings (train stations, sports stadia), ships, aircraft, tunnels, and trains are similar with respect to their evacuation: the persons walk towards a safe area. In addition, persons might use slides or similar evacuation systems, and for ships the lowering of lifeboats. Tunnels Tunnels are unique environments with specific characteristics (underground spaces, unfamiliar to users, no natural light, and so on) which affect different aspects of evacuee behaviour, such as pre-evacuation times (e.g., occupants' reluctance to leave their vehicles), occupant–occupant and occupant–environment interactions, herding behaviour, and exit selection. Ships Four aspects are particular to ship evacuation: the ratio of the number of crew to the number of passengers, ship motion, floating position, and the evacuation system (e.g., slides, lifeboats). Ship motion and/or an abnormal floating position may decrease the ability to move. This influence has been investigated experimentally and can be taken into account by reduction factors. The evacuation of a ship is divided into two separate phases: the assembly phase and the embarkation phase. Aircraft The American Federal Aviation Administration requires that aircraft be capable of being evacuated within 90 seconds. This criterion has to be checked before approval of the aircraft. The 90-second rule requires a demonstration that all passengers and crew members can safely abandon the aircraft cabin in less than 90 seconds, with half of the usable exits blocked, with the minimum illumination provided by floor proximity lighting, and with a certain age and gender mix among the simulated occupants. The rule was established in 1965 with 120 seconds, and has evolved over the years to encompass improvements in escape equipment, changes in cabin and seat materials, and more complete and appropriate crew training. References Literature A. Schadschneider, W. Klingsch, H. Klüpfel, T. Kretz, C. Rogsch, and A. Seyfried. Evacuation Dynamics: Empirical Results, Modeling and Applications. In R. A. Meyers, editor, Encyclopedia of Complexity and System Science. Springer, Berlin Heidelberg New York, 2009 (available at arXiv:0802.1620v1). Lord J, Meacham B, Moore A, Fahy R, Proulx G (2005). Guide for Evaluating the Predictive Capabilities of Computer Egress Models, NIST Report GCR 06-886. http://www.fire.nist.gov/bfrlpubs/fire05/PDF/f05156.pdf E. Ronchi, P. Colonna, J. Capote, D. Alvear, N. Berloco, A. Cuesta. The evaluation of different evacuation models for road tunnel safety analyses. Tunnelling and Underground Space Technology, Vol. 30, July 2012, pp. 74–84. Kuligowski ED, Peacock RD, Hoskins BL (2010). A Review of Building Evacuation Models, NIST, Fire Research Division, 2nd edition, Technical Note 1680, Washington, US. International Maritime Organization (2007). Guidelines for Evacuation Analyses for New and Existing Passenger Ships, MSC/Circ.1238, International Maritime Organization, London, UK. R. Lovreglio, E. Ronchi, M. J. Kinsey (2019). An online survey of pedestrian evacuation model usage and users. Fire Technology. https://doi.org/10.1007/s10694-019-00923-8 Emergency simulation Stochastic simulation Social physics
Evacuation simulation
[ "Physics" ]
1,221
[ "Social physics", "Applied and interdisciplinary physics" ]
1,536,920
https://en.wikipedia.org/wiki/Proximity%20space
In topology, a proximity space, also called a nearness space, is an axiomatization of the intuitive notion of "nearness" that holds set-to-set, as opposed to the better-known point-to-set notion that characterizes topological spaces. The concept was described by Frigyes Riesz but ignored at the time. It was rediscovered and axiomatized by V. A. Efremovič in 1934 under the name of infinitesimal space, but not published until 1951. In the interim, A. D. Wallace discovered a version of the same concept under the name of separation space. Definition A proximity space (X, δ) is a set X with a relation δ between subsets of X satisfying the following properties. For all subsets A, B and C of X:
A δ B implies B δ A
A δ B implies A is non-empty
A ∩ B non-empty implies A δ B
A δ (B ∪ C) implies (A δ B or A δ C)
(For all E, A δ E or B δ (X ∖ E)) implies A δ B
A proximity without the first axiom is called a quasi-proximity (but then axioms 2 and 4 must be stated in a two-sided fashion). If A δ B we say A is near B, or that A and B are proximal; otherwise we say A and B are apart. We say B is a proximal neighborhood or δ-neighborhood of A, written A ≪ B, if and only if A and X ∖ B are apart. The main properties of this set-neighborhood relation, listed below, provide an alternative axiomatic characterization of proximity spaces. For all subsets A, B, C and D of X:
A ≪ B implies A ⊆ B
A ⊆ B ≪ C ⊆ D implies A ≪ D
(A ≪ B and A ≪ C) implies A ≪ B ∩ C
A ≪ B implies X ∖ B ≪ X ∖ A
A ≪ B implies that there exists some E such that A ≪ E ≪ B
A proximity space is called separated if {x} δ {y} implies x = y. A proximity map or proximally continuous map is one that preserves nearness; that is, given f : (X, δ) → (Y, δ′), if A δ B in X, then f[A] δ′ f[B] in Y. Equivalently, a map is proximal if the inverse map preserves proximal neighborhoodness: in the same notation, if C ≪′ D holds in Y, then f⁻¹[C] ≪ f⁻¹[D] holds in X. Properties Given a proximity space (X, δ), one can define a topology by letting A ↦ {x : {x} δ A} be a Kuratowski closure operator. If the proximity space is separated, the resulting topology is Hausdorff. Proximity maps will be continuous between the induced topologies. The resulting topology is always completely regular. This can be proven by imitating the usual proofs of Urysohn's lemma, using the last property of proximal neighborhoods to create the infinite increasing chain used in proving the lemma. Given a compact Hausdorff space, there is a unique proximity whose corresponding topology is the given topology: A is near B if and only if their closures intersect. More generally, proximities classify the compactifications of a completely regular Hausdorff space. A uniform space X induces a proximity relation by declaring A near B if and only if A × B has nonempty intersection with every entourage. Uniformly continuous maps will then be proximally continuous. See also References External links Closure operators General topology
Proximity space
[ "Mathematics" ]
507
[ "General topology", "Order theory", "Topology", "Closure operators" ]
1,536,938
https://en.wikipedia.org/wiki/Frontage
Frontage is the boundary between a plot of land or a building and the road onto which the plot or building fronts. Frontage may also refer to the full length of this boundary. This length is considered especially important for certain types of commercial and retail real estate, in applying zoning bylaws and property tax. In the case of contiguous buildings individual frontages are usually measured to the middle of any party wall. In some parts of the United States, particularly New England and Montana, a frontage road is one which runs parallel to a major road or highway, and is intended primarily for local access to and egress from those properties which line it. A "river frontage" or "ocean frontage" is the length of a plot of land that faces directly onto a river or ocean respectively. Consequently, the amount of such frontage may affect the value of the plot. See also Façade References Real estate terminology
Frontage
[ "Engineering" ]
187
[ "Architecture stubs", "Architecture" ]
1,536,947
https://en.wikipedia.org/wiki/Kruskal%E2%80%93Katona%20theorem
In algebraic combinatorics, the Kruskal–Katona theorem gives a complete characterization of the f-vectors of abstract simplicial complexes. It includes as a special case the Erdős–Ko–Rado theorem and can be restated in terms of uniform hypergraphs. It is named after Joseph Kruskal and Gyula O. H. Katona, but has been independently discovered by several others. Statement Given two positive integers N and i, there is a unique way to expand N as a sum of binomial coefficients as follows:
N = C(n_i, i) + C(n_{i−1}, i−1) + … + C(n_j, j), where n_i > n_{i−1} > … > n_j ≥ j ≥ 1.
This expansion can be constructed by applying the greedy algorithm: set n_i to be the maximal n such that N ≥ C(n, i), replace N with the difference, i with i − 1, and repeat until the difference becomes zero. Define
N^(i) = C(n_i, i+1) + C(n_{i−1}, i) + … + C(n_j, j+1).
Statement for simplicial complexes An integral vector (f_0, f_1, …, f_{d−1}) is the f-vector of some (d − 1)-dimensional simplicial complex if and only if 0 < f_{i+1} ≤ f_i^(i+1) for 0 ≤ i ≤ d − 2. Statement for uniform hypergraphs Let A be a set consisting of N distinct i-element subsets of a fixed set U ("the universe") and B be the set of all (i − 1)-element subsets of the sets in A. Expand N as above. Then the cardinality of B is bounded below as follows:
|B| ≥ C(n_i, i−1) + C(n_{i−1}, i−2) + … + C(n_j, j−1).
Lovász' simplified formulation The following weaker but useful form is due to László Lovász. Let A be a set of i-element subsets of a fixed set U ("the universe") and B be the set of all (i − 1)-element subsets of the sets in A. If |A| = C(x, i) then |B| ≥ C(x, i−1). In this formulation, x need not be an integer; the value of the binomial expression is then C(x, i) = x(x − 1)⋯(x − i + 1)/i!. Ingredients of the proof For every positive i, list all i-element subsets a_1 < a_2 < … < a_i of the set ℕ of natural numbers in the colexicographic order. For example, for i = 3, the list begins
{1,2,3}, {1,2,4}, {1,3,4}, {2,3,4}, {1,2,5}, {1,3,5}, {2,3,5}, {1,4,5}, …
Given a vector f = (f_1, …, f_d) with positive integer components, let Δ_f be the subset of the power set 2^ℕ consisting of the empty set together with the first f_i i-element subsets of ℕ in the list, for i = 1, …, d. Then the following conditions are equivalent:
1. Vector f is the f-vector of a simplicial complex Δ.
2. Δ_f is a simplicial complex.
The difficult implication is 1 ⇒ 2. History The theorem is named after Joseph Kruskal and Gyula O. H. Katona, who published it in 1963 and 1968 respectively. According to later accounts it was discovered independently by several other authors, and the earliest of these references, by Schützenberger, has been said to contain an incomplete proof. See also Sperner's theorem References External links Kruskal-Katona theorem on the polymath1 wiki Algebraic combinatorics Hypergraphs Families of sets Theorems in combinatorics Extremal combinatorics
Kruskal–Katona theorem
[ "Mathematics" ]
588
[ "Theorems in combinatorics", "Extremal combinatorics", "Theorems in discrete mathematics", "Combinatorics", "Families of sets", "Basic concepts in set theory", "Fields of abstract algebra", "Algebraic combinatorics" ]
1,536,956
https://en.wikipedia.org/wiki/Lineworker
A lineworker (also called a lineman or powerline worker) constructs and maintains the electric transmission and distribution facilities that deliver electrical energy to industrial, commercial, and residential establishments. A lineworker installs and services electrical lines and makes emergency repairs when lightning, wind, ice storms, or ground disruptions damage them. Whereas those who install and maintain electrical wiring inside buildings are electricians, lineworkers generally work on outdoor installations. History The occupation began in 1844, when the first telegraph wires were strung between Washington, D.C., and Baltimore, carrying Samuel Morse's famous message, "What hath God wrought?" The first telegraph station was built in Chicago in 1848; by 1861 a web of lines spanned the United States, and in 1868 the first permanent telegraph cable was successfully laid across the Atlantic Ocean. Telegraph lines could be strung on trees, but wooden poles were quickly adopted as the preferred method. The term lineworker was used for those who set wooden poles and strung wire. The term continued in use with the invention of the telephone in the 1870s and the beginning of electrification in the 1890s. This new electrical power work was more hazardous than telegraph or telephone work because of the risk of electrocution, and between the 1890s and the 1930s line work was considered one of the most hazardous jobs. This led to the formation of labor organizations to represent the workers and advocate for their safety. It also led to the establishment of apprenticeship programs and, starting in the late 1930s, more stringent safety standards. The union movement in the United States was led by lineworker Henry Miller, who in 1890 was elected president of the Electrical Wiremen and Linemen's Union, No. 5221 of the American Federation of Labor. United States The rural electrification drive during the New Deal led to a wide expansion in the number of jobs in the electric power industry. Many powerline workers during that period traveled around the country following jobs as they became available in tower construction, substation construction, and wire stringing. They often lived in temporary camps set up near the project they were working on, or in boarding houses if the work was in a town or city, relocating every few weeks or months. The occupation was lucrative at the time, but the hazards and the extensive travel limited its appeal. A brief drive to electrify some railroads on the East Coast of the US led to a specialization of powerline workers who installed and maintained catenary overhead lines. Growth in this branch of line work declined after most railroads favored diesel over electric engines when replacing steam engines. The occupation evolved during the 1940s and 1950s with the expansion of residential electrification. This led to an increase in the number of powerline workers needed to maintain power distribution circuits and provide emergency repairs. Maintenance powerline workers mostly stayed in one place, although they were sometimes called to travel to assist with repairs. During the 1950s, some electric lines began to be installed in tunnels, expanding the scope of the work. Duties Powerline workers work on electrically energized (live) and de-energized (dead) power lines. 
They may perform several tasks associated with power lines, including installation or replacement of distribution equipment such as capacitor banks, pole-mounted distribution transformers, insulators, and fuses. These duties include the use of ropes, knots, and lifting equipment. Some tasks must be performed with basic manual tools where accessibility is limited; such conditions are common in rural or mountainous areas that are inaccessible to trucks. High-voltage transmission lines can be worked live with proper setups. The lineworker must be isolated from the ground and wears special conductive clothing that is connected to the live power line, at which point the line and the lineworker are at the same potential, allowing the lineworker to handle the wire. The lineworker may still be electrocuted if he or she completes an electrical circuit, for example by handling both ends of a broken conductor. Such work is often done from a helicopter by specially trained powerline workers. Isolated line work is only used for transmission-level voltages and sometimes for the higher distribution voltages. Live wire work is common on low-voltage distribution systems within the UK and Australia, as all linesmen are trained to work 'live'; live wire work on high-voltage distribution systems in these countries is carried out by specialist teams. Training Becoming a lineworker usually involves starting as an apprentice and completing a four-year training program before becoming a "Journey Lineworker". Apprentice powerline workers are trained in everything from operating equipment and climbing to proper techniques and safety standards. Schools throughout the United States offer pre-apprentice lineworker training programs, such as Southeast Lineman Training Center and Northwest Lineman College. Safety Lineworkers, especially those who deal with live electrical apparatus, use personal protective equipment (PPE) as protection against inadvertent contact. This includes rubber gloves, rubber sleeves, bucket liners, and protective blankets. When working with energized power lines, powerline workers must use protection to eliminate any contact with the energized line. The requirements for PPE and the associated permissible voltages depend on applicable regulations in the jurisdiction as well as company policy. Voltages higher than those that can be worked using gloves are worked with special sticks known as hot-line tools or hot sticks, with which power lines can be safely handled from a distance. Powerline workers must also wear special rubber insulating gear when working with live wires to protect against any accidental contact with the wire. The buckets powerline workers sometimes work from are also insulated with fiberglass. De-energized power lines can be hazardous because they can still be energized from another source, such as an interconnection or interaction with another circuit, even when they appear to be shut off. For example, a higher-voltage distribution circuit may feed several lower-voltage distribution circuits through transformers; if the higher-voltage circuit is de-energized but a connected lower-voltage circuit remains energized, the higher-voltage circuit can be back-fed through a transformer and remain energized. Another problem can arise when de-energized wires become energized through electrostatic or electromagnetic induction from energized wires nearby. All live-line work PPE must be kept clean of contaminants and regularly tested for dielectric integrity. 
This is done with high-voltage electrical test equipment. Other general items of PPE, such as helmets, are usually replaced at regular intervals. See also Overhead cable References External links Thomas M. Shoemaker and James E. Mack (2002). The Lineman's and Cableman's Handbook. Edwin B. Kurtz. "How Linemen Handle Hot Wires And Stay Alive", July 1949, Popular Science: lineman safety basics explained for the general public. Inter-Utility Overhead Trainers Association http://fallenlinemen.org/ Construction trades workers Crafts Electric power Skills
Lineworker
[ "Physics", "Engineering" ]
1,425
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
1,536,973
https://en.wikipedia.org/wiki/Electric%20resistance%20welding
Electric resistance welding (ERW) is a welding process in which metal parts in contact are permanently joined by heating them with an electric current, melting the metal at the joint. Electric resistance welding is widely used, for example, in manufacture of steel pipe and in assembly of bodies for automobiles. The electric current can be supplied to electrodes that also apply clamping pressure, or may be induced by an external magnetic field. The electric resistance welding process can be further classified by the geometry of the weld and the method of applying pressure to the joint: spot welding, seam welding, flash welding, projection welding, for example. Some factors influencing heat or welding temperatures are the proportions of the workpieces, the metal coating or the lack of coating, the electrode materials, electrode geometry, electrode pressing force, electric current and length of welding time. Small pools of molten metal are formed at the point of most electrical resistance (the connecting or "faying" surfaces) as an electric current (100–100,000 A) is passed through the metal. In general, resistance welding methods are efficient and cause little pollution, but their applications are limited to relatively thin materials. Spot welding Spot welding is a resistance welding method used to join two or more overlapping metal sheets, studs, projections, electrical wiring hangers, some heat exchanger fins, and some tubing. Usually power sources and welding equipment are sized to the specific thickness and material being welded together. The thickness is limited by the output of the welding power source and thus the equipment range due to the current required for each application. Care is taken to eliminate contaminants between the faying surfaces. Usually, two copper electrodes are simultaneously used to clamp the metal sheets together and to pass current through the sheets. When the current is passed through the electrodes to the sheets, heat is generated due to the higher electrical resistance where the surfaces contact each other. As the electrical resistance of the material causes a heat buildup in the work pieces between the copper electrodes, the rising temperature causes a rising resistance, and results in a molten pool contained most of the time between the electrodes. As the heat dissipates throughout the workpiece in less than a second (resistance welding time is generally programmed as a quantity of AC cycles or milliseconds) the molten or plastic state grows to meet the welding tips. When the current is stopped the copper tips cool the spot weld, causing the metal to solidify under pressure. The water cooled copper electrodes remove the surface heat quickly, accelerating the solidification of the metal, since copper is an excellent conductor. Resistance spot welding typically employs electrical power in the form of direct current, alternating current, medium frequency half-wave direct current, or high-frequency half wave direct current. If excessive heat is applied or applied too quickly, or if the force between the base materials is too low, or the coating is too thick or too conductive, then the molten area may extend to the exterior of the work pieces, escaping the containment force of the electrodes (often up to 30,000 psi). This burst of molten metal is called expulsion, and when this occurs the metal will be thinner and have less strength than a weld with no expulsion. The common method of checking a weld's quality is a peel test. 
An alternative test is the restrained tensile test, which is much more difficult to perform, and requires calibrated equipment. Because both tests are destructive in nature (resulting in the loss of salable material), non-destructive methods such as ultrasound evaluation are in various states of early adoption by many OEMs. The advantages of the method include efficient energy use, limited workpiece deformation, high production rates, easy automation, and no required filler materials. When high strength in shear is needed, spot welding is used in preference to more costly mechanical fastening, such as riveting. While the shear strength of each weld is high, the fact that the weld spots do not form a continuous seam means that the overall strength is often significantly lower than with other welding methods, limiting the usefulness of the process. It is used extensively in the automotive industry – cars can have several thousand spot welds. A specialized process, called shot welding, can be used to spot weld stainless steel. There are three basic types of resistance welding bonds: solid state, fusion, and reflow braze. In a solid state bond, also called a thermo-compression bond, dissimilar materials with dissimilar grain structure, e.g. molybdenum to tungsten, are joined using a very short heating time, high weld energy, and high force. There is little melting and minimum grain growth, but a definite bond and grain interface. Thus the materials actually bond while still in the solid state. The bonded materials typically exhibit excellent shear and tensile strength, but poor peel strength. In a fusion bond, either similar or dissimilar materials with similar grain structures are heated to the melting point (liquid state) of both. The subsequent cooling and combination of the materials forms a “nugget” alloy of the two materials with larger grain growth. Typically, high weld energies at either short or long weld times, depending on physical characteristics, are used to produce fusion bonds. The bonded materials usually exhibit excellent tensile, peel and shear strengths. In a reflow braze bond, a resistance heating of a low temperature brazing material, such as gold or solder, is used to join either dissimilar materials or widely varied thick/thin material combinations. The brazing material must “wet” to each part and possess a lower melting point than the two workpieces. The resultant bond has definite interfaces with minimum grain growth. Typically the process requires a longer (2 to 100 ms) heating time at low weld energy. The resultant bond exhibits excellent tensile strength, but poor peel and shear strength. Seam welding Resistance seam welding is a process that produces a weld at the faying surfaces of two similar metals. The seam may be a butt joint or an overlap joint and is usually an automated process. It differs from flash welding in that flash welding typically welds the entire joint at once and seam welding forms the weld progressively, starting at one end. Like spot welding, seam welding relies on two electrodes, usually made from copper, to apply pressure and current. The electrodes are often disc shaped and rotate as the material passes between them. This allows the electrodes to stay in constant contact with the material to make long continuous welds. The electrodes may also move or assist the movement of the material. A transformer supplies energy to the weld joint in the form of low voltage, high current AC power. 
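The heat input that all of these resistance processes rely on follows Joule's first law; a short worked statement (standard physics, not specific to any one welder), where t_w denotes the weld time:

$$Q = \int_0^{t_w} I^2(t)\, R(t)\, \mathrm{d}t \;\approx\; I^2 R\, t_w$$

Because Q scales with the square of the current, a welding transformer trades voltage for current, and because R is largest at the faying surfaces, the heating concentrates exactly where the weld nugget should form; doubling the current roughly quadruples the heat delivered in a given weld time.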
The joint of the work piece has high electrical resistance relative to the rest of the circuit and is heated to its melting point by the current. The semi-molten surfaces are pressed together by the welding pressure that creates a fusion bond, resulting in a uniformly welded structure. Most seam welders use water cooling through the electrode, transformer and controller assemblies due to the heat generated. Seam welding produces an extremely durable weld because the joint is forged due to the heat and pressure applied. A properly welded joint formed by resistance welding can easily be stronger than the material from which it is formed. A common use of seam welding is during the manufacture of round or rectangular steel tubing. Seam welding has been used to manufacture steel beverage cans but is no longer used for this as modern beverage cans are seamless aluminum. There are two modes for seam welding: Intermittent and continuous. In intermittent seam welding, the wheels advance to the desired position and stop to make each weld. This process continues until the desired length of the weld is reached. In continuous seam welding, the wheels continue to roll as each weld is made. Low-frequency electric resistance welding Low-frequency electric resistance welding (LF-ERW) is an obsolete method of welding seams in oil and gas pipelines. It was phased out in the 1970s but as of 2015 some pipelines built with this method remained in service. Electric resistance welded (ERW) pipe is manufactured by cold-forming a sheet of steel into a cylindrical shape. Current is then passed between the two edges of the steel to heat the steel to a point at which the edges are forced together to form a bond without the use of welding filler material. Initially this manufacturing process used low frequency AC current to heat the edges. This low frequency process was used from the 1920s until 1970. In 1970, the low frequency process was superseded by a high frequency ERW process which produced a higher quality weld. Over time, the welds of low frequency ERW pipe were found to be susceptible to selective seam corrosion, hook cracks, and inadequate bonding of the seams, so low frequency ERW is no longer used to manufacture pipe. The high frequency process is still being used to manufacture pipe for use in new pipeline construction. Other methods Other ERW methods include flash welding, resistance projection welding, and upset welding. Flash welding is a type of resistance welding that does not use any filler metals. The pieces of metal to be welded are set apart at a predetermined distance based on material thickness, material composition, and desired properties of the finished weld. Current is applied to the metal, and the gap between the two pieces creates resistance and produces the arc required to melt the metal. Once the pieces of metal reach the proper temperature, they are pressed together, effectively forge welding them together. Projection welding is a modification of spot welding in which the weld is localized by means of raised sections, or projections, on one or both of the workpieces to be joined. Heat is concentrated at the projections, which permits the welding of heavier sections or the closer spacing of welds. The projections can also serve as a means of positioning the workpieces. Projection welding is often used to weld studs, nuts, and other threaded machine parts to metal plate. It is also frequently used to join crossed wires and bars. 
This is another high-production process, and multiple projection welds can be arranged by suitable designing and jigging. See also List of welding processes Shot welding References Bibliography Further reading O'Brien, R.L. (Ed.) (1991). Welding Handbook Vol. 2 (8th ed.). Miami: American Welding Society. External links Resistance Welding Manufacturing Alliance "Making Resistance Spot Welding Safer," from the Welding Journal "High-frequency electric resistance welding: An overview," from The Fabricator by American Welding Society Welding
Electric resistance welding
[ "Engineering" ]
2,180
[ "Welding", "Mechanical engineering" ]
1,537,097
https://en.wikipedia.org/wiki/Voltage-regulator%20tube
A voltage-regulator tube (VR tube) is an electronic component used as a shunt regulator to hold a voltage constant at a predetermined level. Physically, these devices resemble vacuum tubes, but there are two main differences: their glass envelopes are filled with a gas mixture, and they have a cold cathode, that is, the cathode is not heated with a filament to emit electrons. Electrically, these devices resemble Zener diodes, with the following major differences: they rely on gas ionization rather than Zener breakdown; the unregulated supply voltage must be 15–20% above the nominal output voltage to ensure that the discharge starts; and the output can be higher than nominal if the current through the tube is too low. When sufficient voltage is applied across the electrodes, the gas ionizes, forming a glow discharge around the cathode electrode. The VR tube then acts as a negative-resistance device; as the current through the device increases, the amount of ionization also increases, reducing the resistance of the device to further current flow. In this way, the device conducts sufficient current to hold the voltage across its terminals at the desired value. Because the device would otherwise conduct a nearly unlimited amount of current, there must be some external means of limiting the current. Usually, this is provided by an external resistor upstream of the VR tube. The VR tube then conducts whatever portion of the current does not flow into the downstream load, maintaining an approximately constant voltage across its electrodes. The VR tube's regulation voltage was only guaranteed when conducting a current within the allowable range. In particular, if the current through the tube is too low to maintain ionization, the output voltage can rise above the nominal output, as far as the input supply voltage. If the current through the tube is too high, it can enter an arc-discharge mode in which the voltage is significantly lower than nominal and the tube may be damaged. Some voltage-regulator tubes contained small amounts of radionuclides to produce more reliable ionization. The corona VR tube is a high-voltage version that is filled with hydrogen at close to atmospheric pressure and is designed for voltages ranging from 400 V to 30 kV at tens of microamperes. It has a coaxial form; the outer cylindrical electrode is the cathode and the inner one is the anode. The voltage stability depends on the gas pressure. A successful hydrogen voltage-regulator tube, from 1925, was the Raytheon tube, which allowed radios of the time to be operated from AC power instead of batteries. 
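A minimal sizing sketch for the current-limiting resistor described above; the supply, load, and tube figures here are hypothetical, chosen to be consistent with the 5–40 mA octal tubes listed in the next section:

```python
def vr_series_resistor(v_supply: float, v_tube: float,
                       i_load: float, i_tube: float) -> float:
    """Series (ballast) resistor for a shunt VR-tube regulator.

    The resistor drops the difference between the unregulated supply
    and the tube's regulation voltage while carrying the load current
    plus the tube's own operating current.
    """
    return (v_supply - v_tube) / (i_load + i_tube)

# Hypothetical example: a 105 V tube fed from a 250 V unregulated supply,
# with a 15 mA load and 20 mA through the tube to sit mid-range:
r = vr_series_resistor(250.0, 105.0, 0.015, 0.020)  # about 4.1 kilohms
# Worst case, load disconnected: tube current = (250 - 105) / r = 35 mA,
# still below a 40 mA maximum; with the full 15 mA load the tube carries
# 20 mA, comfortably above a 5 mA minimum needed to sustain the glow.
```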
Specific models In America, VR tubes were given RETMA tube part numbers; lacking a heater (filament), their part numbers began with "0" (zero). In Europe, VR tubes were given part numbers under the professional system ("ZZ1xxx") and under a dedicated system. In the USSR, glow-discharge stabilitrons were designated in Cyrillic with a serial number of development, for example "СГ21Б" and "СГ204К". VR tubes were only available in certain voltages. Common models were:
Octal-based tubes, 5–40 mA current: 0A3 (75 volts), 0B3 (90 volts), 0C3 (105 volts; best regulation of these four), 0D3 (150 volts).
Miniature tubes, 5–30 mA current: 0A2 (150 volts), 0B2 (108 volts; best regulation of these three), 0C2 (72 volts).
Miniature tubes, 1–10 mA current: 85A2 (85 volts; equivalents: 0G3, CV449, CV4048, QS83/3, QS1209).
Voltage references, 1.5–3.0 mA current: 5651 (87 volts; the most popular voltage reference ever made), 5651A (85.5 volts).
Subminiature tubes: various models, such as the 991, that resembled neon lamps but were optimized for more accurate voltage regulation.
Miniature corona tubes, 5–55 μA current: CK1022 (1 kV).
Wire-ended, subminiature corona tubes: CK1037 (6437), 700 volts, 5–125 μA; CK1038, 900 volts, 5–55 μA; CK1039 (6438), 1.2 kV, 5–125 μA.
Design considerations Some voltage-regulator tubes have an internal jumper connected between two of the pins. This jumper could be wired in series with the transformer's secondary winding; then, if the tube was removed, rather than leaving the voltage unregulated, the output would turn off. Because the glow discharge is a "statistical" process, a certain amount of electrical noise is introduced into the regulated voltage as the level of ionization varies. In most cases, this can be easily filtered out by placing a small capacitor in parallel with the VR tube or using an RC decoupling network downstream of the VR tube. With too large a capacitance (>0.1 μF for an 0D3, for instance), however, the circuit will form a relaxation oscillator, ruining the voltage regulation and possibly causing the tube to fail catastrophically. VR tubes can be operated in series for greater voltage ranges. They cannot be operated in parallel: because of manufacturing variations, the current would not be shared equally among several tubes in parallel. (Note the equivalent behavior of series- and parallel-connected Zener diodes.) In the present day, VR tubes have been almost entirely supplanted by solid-state regulators based on Zener diodes and avalanche breakdown diodes. VR tube information Correctly operating VR tubes glow during normal operation. The color of the glow varies depending upon the gas mixture used to fill the tube. Though they lack a heater, VR tubes often become warm during operation due to the current and the voltage drop through them. References Electrical breakdown Vacuum tubes Tube
Voltage-regulator tube
[ "Physics" ]
1,244
[ "Physical phenomena", "Physical quantities", "Voltage regulation", "Vacuum tubes", "Vacuum", "Electrical phenomena", "Electrical breakdown", "Voltage", "Matter" ]
1,537,176
https://en.wikipedia.org/wiki/Intercellular%20adhesion%20molecule
In molecular biology, intercellular adhesion molecules (ICAMs) and vascular cell adhesion molecule-1 (VCAM-1) are part of the immunoglobulin superfamily. They are important in inflammation, immune responses, and intracellular signalling events. The ICAM family consists of five members, designated ICAM-1 to ICAM-5. They are known to bind to the leucocyte integrins CD11/CD18, such as LFA-1 and macrophage-1 antigen, during inflammation and immune responses. In addition, ICAMs may exist in soluble forms in human plasma due to activation and proteolysis mechanisms at cell surfaces. Mammalian intercellular adhesion molecules include: ICAM-1 ICAM2 ICAM3 ICAM4 ICAM5 References Cell biology Protein families
Intercellular adhesion molecule
[ "Chemistry", "Biology" ]
172
[ "Cell biology", "Biotechnology stubs", "Protein classification", "Biochemistry stubs", "Biochemistry", "Protein families" ]
1,537,546
https://en.wikipedia.org/wiki/Redevelopment
Redevelopment is any new construction on a site that has pre-existing uses. It represents a process of land development used to revitalize the physical, economic and social fabric of urban space. Description Variations on redevelopment include:
Urban infill on vacant parcels that have no existing activity but were previously developed, especially on brownfield land, such as the redevelopment of an industrial site into a mixed-use development.
Constructing with a denser land usage, such as the redevelopment of a block of townhouses into a large apartment building.
Adaptive reuse, where older structures are converted for improved current market use, such as an industrial mill into housing lofts.
Redevelopment projects can be small or large, ranging from a single building to entire new neighborhoods or "new town in town" projects. Redevelopment also refers to state and federal statutes which give cities and counties the authority to establish redevelopment agencies and give the agencies the authority to attack problems of urban decay. The fundamental tools of a redevelopment agency include the authority to acquire real property, the power of eminent domain, the power to develop and sell property without bidding, and the authority and responsibility to relocate persons who have interests in the property acquired by the agency. The financing of such operations might come from government grants, borrowing from federal or state governments, the sale of bonds, and tax increment financing. Other terms sometimes used to describe redevelopment include urban renewal (urban revitalization). While efforts described as urban revitalization often involve redevelopment, they do not always do so, as they do not always involve the demolition of existing structures; they may instead describe the rehabilitation of existing buildings or other neighborhood improvement initiatives. A newer example of such initiatives is funding directed at urban blight associated with high carbon footprints and poor air quality: Assembly Bill 811 (AB 811) is the State of California's answer to funding renewable energy, and it allows cities to craft their own sustainability action plans. Such action plans need a funding structure, which can come through redevelopment funding. Urban renewal Some redevelopment projects and programs have been highly controversial, including the Urban Renewal program in the United States in the mid-twentieth century and the urban regeneration program in Great Britain. Controversy usually results from the use of eminent domain, from objections to the change in use or increases in density and intensity on the site, or from disagreement on the appropriate use of taxpayer funds to pay for some element of the project. Urban redevelopment in the United States has been controversial because it can displace poor and lower-middle-class residents, often transferring residents' land and homes to developers for free or at a below-market price. This is done on the condition that the developer will use that land to construct new commercial and residential developments. The residents displaced by redevelopment are often undercompensated, and some (notably month-to-month tenants and business owners) are not compensated at all.
Historically, redevelopment agencies have bought many properties in redevelopment areas for prices below fair market value, or even below the agencies' own appraisal figures, because the displaced people are often unaware of their legal rights and lack the will and the funds to mount a proper legal defense in a valuation trial. Those who do so usually recover more in compensation than what is offered by the redevelopment agencies. The controversy over misuse of eminent domain for redevelopment reached a climax in the wake of the U.S. Supreme Court's 2005 decision in Kelo v. City of New London, which ruled that the general benefits a community enjoyed from economic growth qualified private redevelopment plans as a permissible "public use" under the Takings Clause of the Fifth Amendment. The Kelo decision was widely denounced and remains the subject of severe criticism. Remedial legislation to restrict the use of eminent domain for private development has been enacted or introduced in a number of states. Golf course redevelopment Golf course redevelopment, also known as golf course conversion, is a real estate niche in which investors purchase failing golf courses and subdivide them into individual plots of land. They then resell the plots to builders, or build on the plots themselves and resell the homes to residential buyers. This process is usually done with the assistance of a real estate broker. The main challenge of this niche is the difficulty investors face in obtaining variances from cities. Notable examples North America:
Atlantic Station, Atlanta, Georgia
Atlantic Yards, Brooklyn, New York
American Tobacco Historic District, Durham, North Carolina
CFB Downsview -> Downsview Park, Toronto, Ontario
CFB Griesbach -> Griesbach, Edmonton, Alberta
CN rail yard -> Station Lands (Edmonton), MacEwan University, Edmonton downtown arena; Edmonton, Alberta
Edmonton City Centre (Blatchford Field) Airport -> Blatchford, Edmonton, Alberta
HOPE VI
Hudson Yards, New York, New York
Lincoln Center for the Performing Arts, New York, New York
Midtown Detroit, Michigan
Mission Bay, Treasure Island, Western Addition, and the part of South of Market that became Moscone Center and Yerba Buena Gardens in San Francisco, California
Pearl District, Portland, Oregon
Old Port of Montreal, Quebec
Downtown San Diego, California
Central Park, Denver, Colorado, on a former airport site
Toronto Waterfront, Toronto, Canada
West End, Boston, Massachusetts
World Trade Center site in Lower Manhattan following the September 11 attacks
Europe:
Canary Wharf, London (UK)
Edinburgh Waterfront, UK
Redevelopment of Norrmalm (Sweden)
Liverpool One, Liverpool (UK)
Greenwich Millennium Village, London (UK)
Tigné Point, Sliema (Malta)
Porta Nuova, Milan (Italy)
Asia:
Taichung's seventh Redevelopment Zone, Taichung, Taiwan
Beijing Olympic Village, Beijing, China
Sheung Wan, Hong Kong, China
Central America:
Panama in Casco Antiguo (Casco Viejo)
See also References Urban decay
Redevelopment
[ "Engineering" ]
1,180
[ "Construction", "Redevelopment" ]
1,537,611
https://en.wikipedia.org/wiki/Mechanical%20floor
A mechanical floor, mechanical penthouse, mechanical layer or mechanical level is a story of a high-rise building that is dedicated to mechanical and electronics equipment. "Mechanical" is the most commonly used term, but words such as utility, technical, service, and plant are also used. Mechanical floors are present in all tall buildings, including the world's tallest skyscrapers, and they raise significant structural, mechanical and aesthetic concerns. While most buildings have mechanical rooms, typically in the basement, tall buildings require dedicated floors throughout the structure for this purpose, for a variety of reasons discussed below. Because they use up valuable floor area (just like elevator shafts), engineers try to minimize the number of mechanical floors while allowing for sufficient redundancy in the services they provide. As a rule of thumb, skyscrapers require a mechanical floor for every 10 tenant floors (10%), although this percentage can vary widely (see examples below). In some buildings, they are clustered in groups that divide the building into blocks, while in others they are spread evenly through the structure, and in still others, they are mostly concentrated at the top. Mechanical floors are generally counted in the building's floor numbering (this is required by some building codes) but are accessed only by service elevators. Some zoning regulations exclude mechanical floors from a building's maximum area calculation, permitting a significant increase in building sizes; this is the case in New York City. Sometimes buildings are designed with a mechanical floor located on the thirteenth floor, to avoid problems in renting the space due to superstitions about the number. Structural concerns Some skyscrapers have narrow building cores that require stabilization to prevent collapse. Typically, this is accomplished by joining the core to the external supercolumns at regular intervals using outrigger trusses. The triangular shape of the struts precludes the laying of tenant floors, so these sections house mechanical floors instead, typically in groups of two. Additional stabilizer elements such as tuned mass dampers also require mechanical floors to contain or service them. This layout is usually reflected in the internal elevator zoning. Since nearly all elevators require machine rooms above the last floor they service, mechanical floors are often used to divide shafts that are stacked on top of each other to save space. A transfer level or skylobby is sometimes placed just below those floors. Elevators that reach the top tenant floor also require overhead machine rooms; those are sometimes put into full-size mechanical floors but most often into a mechanical penthouse, which can also contain communications gear and window-washing equipment. On most building designs, this is a simple "box" on the roof, while on others it is concealed inside a decorative spire. A consequence of this is that if the topmost mechanical floors are counted in the total, there can be no such thing as a true "top-floor office" in a skyscraper with this design. Mechanical concerns Besides structural support and elevator management, the primary purpose of mechanical floors is services such as heating, ventilation, and air conditioning (HVAC). They contain air handling units, cooling towers (in mechanical penthouses), electrical generators, chiller plants, water pumps, and so on. In particular, the problem of bringing and keeping water on the upper floors is an important constraint in the design of skyscrapers.
Water is necessary for tenant use, air conditioning, equipment cooling, and basic firefighting through sprinklers (especially important since ground-based firefighting equipment usually cannot reach higher than a dozen floors or so). It is inefficient, and seldom feasible, for water pumps to send water directly to a height of several hundred meters, so intermediate pumps and water tanks are used. The pumps on each group of mechanical floors act as a relay to the next one up, while the tanks hold water in reserve for normal and emergency use (a back-of-envelope pressure calculation follows the examples below). Usually the pumps have enough power to bypass a level if the pumps there have failed, and send water two levels up. Special care is taken with fire safety on mechanical floors that contain generators, compressors, and elevator machine rooms, since oil is used as either a fuel or lubricant in those elements. Mechanical floors also contain communication and control systems that service the building and sometimes outbound communications, such as through a large rooftop antenna (which is also physically held in place inside the top-floor mechanical levels). Modern computerized HVAC control systems minimize the problem of equipment distribution among floors by enabling central remote control. Aesthetic concerns Most mechanical floors require external vents or louvers for ventilation and heat rejection along most or all of their perimeter, precluding the use of glass windows. The resulting visible "dark bands" can disrupt the overall facade design, especially if it is fully glass-clad. Different architectural styles approach this challenge in different ways. In the Modern and International styles of the 1960s and 1970s, where form follows function, the vents' presence is not seen as undesirable. Rather it emphasizes the functional layout of the building by dividing it neatly into equal blocks, mirroring the layout of the elevators and offices inside. This could be clearly seen on the Twin Towers of the World Trade Center and can be seen on the Willis Tower. In the IDS Tower in Minneapolis, the lowest mechanical floor serves as a visual separation from the street- and skyway-level Crystal Court shopping center and the office tower above; the upper mechanical floor (above the 50th and 51st floors, the uppermost occupied floors) serves as a "crown" to the building. Conversely, designers of the recent postmodern-style skyscrapers strive to mask the vents and other mechanical elements in various ways. This is accomplished through such means as complex wall angles (Petronas Towers), intricate latticework cladding (Jin Mao Building), or non-glassed sections that appear to be ornamental (Taipei 101, roof of Jin Mao Building). Some low-rise buildings, whether residential (usually apartment buildings or dormitories) or non-residential, especially those built in architectural styles that favor elements such as sloped roofs or bell towers, may have mechanical floors disguised as attics or towers. In this case, the ventilation systems of the mechanical floor appear as gable vents, dormers, or abat-sons (louvers in a bell tower). Examples include some buildings at UCLA, such as Dodd Hall, which has a mechanical floor disguised as an attic and a bell tower. Examples These are examples of above-ground mechanical floor layouts for some of the world's tallest buildings. In each case, mechanical penthouses and spires are counted as floors, leading to higher total floor counts than usual.
Taipei 101: Floors 7–8, 17–18, 25–26, 34, 42, 50, 58, 66, 74, 82, 87, 90, 92 to 100 in the penthouse – total 17/102, 17%. The official count of 11 corresponds to the number of groups in the office section. Floors 92–100 contain communications equipment and so are not typically counted as mechanical since they do not service the building itself.
One World Trade Center: Floors 2–19, 92–99, and 103–104 – total 28/104, 26%.
Previous WTC (Twin Towers): Floors 7–8, 41–42, 75–76, and 108–109 – total 8/110, 7%. The 110th floor of 1 WTC (North Tower) housed television and radio transmission equipment. Some sources erroneously mention 12 floors, in groups of three, due to the height of the vents (actually the ceilings there were higher) and because levels 44 and 78 were skylobbies, which in many buildings sit directly on top of the mechanical floors. However, the twin towers had one occupied office floor under each skylobby, accessible through escalators.
Willis Tower (formerly Sears Tower): Levels 29–32, 64–65, 88–89, 104–108, 109 (penthouse), and 110 (penthouse roof) – total 15/110, 13%.
Petronas Towers: Floors 6–7, 38–40, 43, 84, 87–88 – total 9/88, 10%
Jin Mao Building: Floors 51–52, and 89–93 in the penthouse – total 7/93, 7.5%
Burj Khalifa: Floors 17–18, 40–42, 73–75, 109–111, 136–138, 155, and 160–168 in the penthouse – total 25/168, 15%
John Hancock Center: Floors 16–17, 42–43, 93, 99–100 (penthouse) – total 7/100, 7%
Empire State Building: Floors 87–101 – total 15/102, 14%
International Commerce Center: Floors 6–7, 17–18, 24–25, 34, 43, 52, 61, 70, 79, 88, 97, 104, and 114 – total 17/108, 14%
Shanghai World Financial Center: Floors 6, 18, 30, 42, 54, 66, 78, 89, and 90 – total 9/101, 9%
Lotte World Tower: Floors 3–4, 13, 21–23, 39–41, 59, 60, 72–75, 83, 84, 102–106, 115, and 116 – total 24/123, 20%
References External links Case study for Hong Kong's Central Plaza by the Department of Architecture of Hong Kong University, Energy Features section Skyscrapers Heating, ventilation, and air conditioning Rooms Floors
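The back-of-envelope pressure calculation referenced above, as a Python sketch: it estimates the hydrostatic pressure needed to lift water to various heights, illustrating why relay pumping between mechanical floors is used. The heights are illustrative assumptions, not data about any particular building.

# Back-of-envelope sketch: hydrostatic pressure vs. building height,
# illustrating why skyscrapers pump water in relays between mechanical floors.
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pressure_bar(height_m: float) -> float:
    """Pressure (in bar) needed just to hold a water column of this height."""
    return RHO * G * height_m / 1e5  # 1 bar = 100,000 Pa

for h in (50, 150, 300, 450):  # assumed heights in meters
    print(f"{h:>4} m -> {pressure_bar(h):5.1f} bar")
# A 450 m column alone needs ~44 bar at the bottom, before friction losses,
# which is why intermediate pumps and tanks on mechanical floors act as relays.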
Mechanical floor
[ "Engineering" ]
1,942
[ "Rooms", "Floors", "Structural engineering", "Architecture" ]
1,537,638
https://en.wikipedia.org/wiki/Magic%20Link
The Magic Link was a Personal Intelligent Communicator marketed by Sony from 1994, based on General Magic's Magic Cap operating system. The Magic Link PIC-1000 was brought to market by Jerry Fiala Sr at Sony. The "Link" part of the name refers to the device's ability to send and receive data over a modem. A competing product was the Motorola Envoy. In 1995, the Magic Link won the PC World World Class Award. The Magic Link PIC-2000 was released in 1996. Applications Messages Address Book Clock and Calendar Notebook Spreadsheet Datebook Phone Fax machine (Kobes Japan model only) Pocket Quicken Sony AV Remote Commander Calculator AT&T PersonaLink Services America Online mail client Documentary film The device features prominently in General Magic, a documentary film about the rise and fall of General Magic. References Personal digital assistants Sony hardware
Magic Link
[ "Technology" ]
181
[ "Mobile computer stubs", "Computing stubs", "Mobile technology stubs", "Computer hardware stubs" ]
1,537,736
https://en.wikipedia.org/wiki/VIRGOHI21
VIRGOHI21 is an extended region of neutral hydrogen (HI) in the Virgo cluster discovered in 2005. Analysis of its internal motion indicates that it may contain a large amount of dark matter, as much as a small galaxy. Since VIRGOHI21 apparently contains no stars, this would make it one of the first detected dark galaxies. Skeptics of this interpretation argue that VIRGOHI21 is simply a tidal tail of the nearby galaxy NGC 4254. Observational properties VIRGOHI21 was detected through radio telescope observations of its neutral hydrogen 21 cm emissions. The detected hydrogen has a mass of about 100 million solar masses and is about 50 million light-years away. By analyzing the Doppler shift of the emissions, astronomers determined that the gas has a high velocity-profile width; that is, different parts of the cloud are moving at high speed relative to other parts. Deep follow-up Hubble Space Telescope observations of the region have detected very few stars (a few hundred). Dark galaxy interpretation If the high velocity-profile width of VIRGOHI21 is interpreted as rotation, it is far too fast to be consistent with the gravity of the detected hydrogen. Rather, it implies the presence of a dark matter halo with tens of billions of solar masses. Given the very small number of stars detected, this implies a mass-to-light ratio of about 500, far greater than that of a normal galaxy (around 50). The strong gravity of the dark matter halo in this interpretation explains the perturbed nature of the nearby spiral galaxy NGC 4254 and the bridge of neutral hydrogen extending between the two entities. Under this interpretation, VIRGOHI21 would be the first discovery of the dark galaxies anticipated by simulations of dark-matter theories. Although other dark-galaxy candidates have previously been observed, follow-up observations indicated that these were either very faint ordinary galaxies or tidal tails. VIRGOHI21 is considered the best current candidate for a dark galaxy. Tidal tail interpretation Sensitive maps covering a much wider area, obtained at the Westerbork Synthesis Radio Telescope (WSRT) and at the Arecibo Observatory, revealed that VIRGOHI21 is embedded within a much more extensive tail originating in NGC 4254. Both the distribution of the HI gas and its velocity field can be reproduced by a model involving NGC 4254 in a high-speed collision with another galaxy (probably NGC 4192), which is now somewhat distant. Other debris tails of this magnitude have been found to be common features in the Virgo cluster, where the high density of galaxies makes interactions frequent. These results suggest that VIRGOHI21 is not an unusual object, given its location at the edge of the densest region of the Virgo cluster. The original paper describing VIRGOHI21 as a dark galaxy provides several objections to the tidal-tail interpretation: that high-velocity interactions do not generally produce significant tails, that the high velocity needed is out of place in this part of the Virgo cluster, and that the observed velocity profile is opposite to that expected in a tidal tail. In addition, according to Robert Minchin of the Arecibo Observatory, "If the hydrogen in VIRGOHI21 had been pulled out of a nearby galaxy, the same interaction should have pulled out stars as well". Proponents of the tidal-tail interpretation counter these objections with simulations and argue that the apparently inverted velocity profile is due to the orientation of the tail with respect to Earth-based observers.
Although the nature of VIRGOHI21 remains a contentious issue, its identification as a dark galaxy seems much less certain now than immediately after its discovery. See also LSB galaxy HVC 127-41-330 References External links Astronomers find star-less galaxy (BBC News), 2005 A multibeam HI survey of the Virgo cluster - two isolated HI clouds?, (abstract), Davies, J, et al., 2004 A Dark Hydrogen Cloud in the Virgo Cluster / Astrophys.J. 622 (2005) L21-L24, arXiv:astro-ph/0502312 First Invisible Galaxy Discovered in Cosmology Breakthrough, (SPACE.com), 2005 Astronomers spot first ever dark galaxy (The Register), 2005 Dark Matter Galaxy? (UniverseToday) Arecibo Survey Produces Dark Galaxy Candidate (SpaceDaily), 2006 3D Animation from neutral Hydrogen data Astronomical objects discovered in 2005 Coma Berenices Virgo Cluster Dark galaxies
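As a numerical aside, the dynamical-mass argument above can be illustrated with a simple order-of-magnitude estimate, M_dyn ≈ v²R/G, for a rotation-supported system. The following Python sketch uses assumed, illustrative values for the rotation velocity and radius; they are not the figures from the discovery papers.

# Order-of-magnitude sketch: dynamical mass from an assumed rotation velocity,
# M_dyn ~ v^2 * R / G. Input values are illustrative, not published measurements.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
KPC = 3.086e19           # kiloparsec, m

v = 100e3                # assumed rotation velocity, m/s (~100 km/s)
r = 8 * KPC              # assumed radius of the HI distribution

m_dyn = v**2 * r / G / M_SUN
m_hi = 1e8               # detected HI mass in solar masses (from the article)
print(f"dynamical mass ~ {m_dyn:.1e} M_sun")       # ~1.9e10, i.e. tens of billions
print(f"dynamical / HI mass ratio ~ {m_dyn / m_hi:.0f}")  # ~186 for these inputs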
VIRGOHI21
[ "Physics", "Astronomy" ]
903
[ "Dark matter", "Unsolved problems in physics", "Constellations", "Dark galaxies", "Coma Berenices" ]
1,537,936
https://en.wikipedia.org/wiki/RX%20J1856.5%E2%88%923754
RX J1856.5−3754 (also called RX J185635−3754, RX J185635−375, and various other designations) is a neutron star in the constellation Corona Australis. At approximately 400 light-years from Earth, it is the closest neutron star discovered to date. Discovery and location RX J1856.5−3754 is thought to have formed in the supernova explosion of its companion star about one million years ago and is moving across the sky at 108 km/s. It was discovered in 1992, and observations in 1996 confirmed that it is a neutron star, the closest to Earth discovered to date. It was originally thought to be about 150–200 light-years away, but further observations using the Chandra X-ray Observatory in 2002 indicate that its distance is greater, about 400 light-years. RX J1856 is one of the Magnificent Seven, a group of young, relatively nearby neutron stars. Quark star hypothesis By combining Chandra X-ray Observatory and Hubble Space Telescope data, astronomers previously estimated that RX J1856 radiates like a solid body with a temperature of 700,000 °C and has a diameter of about 4–8 km. This estimated size was too small to reconcile with the standard models of neutron stars, and it was therefore suggested that it might be a quark star. However, later refined analysis of improved Chandra and Hubble observations revealed that the surface temperature of the star is lower, only 434,000 °C, and, correspondingly, the radius is larger, about 14 km (the radius as observed, taking into account the effects of general relativity, appears to be about 17 km). Thus, RX J1856.5–3754 is now excluded from the list of quark star candidates. A subsequent, more accurate parallax estimate led to a further correction of the true radius (with about 15 km for the observed radius). Vacuum birefringence In 2016 a team of astronomers from Italy, Poland, and the U.K. using the Very Large Telescope reported observational indications of vacuum birefringence from RX J1856.5−3754. A degree of polarization of about 16% was measured in the visible spectrum, large enough to count as supporting evidence but not as a discovery, given the limited accuracy of the star model and the uncertain direction of the neutron star's magnetization axis. Its inferred magnetic field of about 10^13 G should produce a greater effect at X-ray wavelengths, which could be measured by future planned polarimeters such as NASA's Imaging X-ray Polarimeter Explorer (IXPE), NASA's Polarimetry of Relativistic X-ray Sources (PRAXYS) or ESA's X-ray Imaging Polarimetry Explorer (XIPE). See also 3C 58, a possible quark star References RX-J185635-375 at jumk.de RX J1856.5-3754 and 3C58: Cosmic X-rays May Reveal New Form of Matter Chandra X-ray Observatory. July 16, 2009. Walter, Frederick M.; Lattimer, James M., The Astrophysical Journal, 2002 External links Is RX J185635-375 a Quark Star? Bare Quark Stars or Naked Neutron Stars? The Case of RX J1856.5-3754 RX J185635-3754 - an Isolated Neutron Star News Release STScI-1997-32: Hubble Sees a Neutron Star Alone in Space Corona Australis Radio-quiet neutron stars Neutron stars ROSAT objects
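The radius revision described above follows from treating the star as a thermal (blackbody) emitter: for fixed luminosity, L = 4πR²σT⁴ implies R ∝ T⁻². Here is a hedged Python sketch of that scaling, using the two temperatures quoted above and treating °C as approximately K at these magnitudes; the fixed-luminosity assumption is a simplification for illustration.

# Hedged sketch: how a blackbody radius estimate scales with temperature.
# For fixed luminosity, L = 4*pi*R^2 * sigma * T^4  =>  R is proportional to T^-2.
# Temperatures from the article; treating Celsius as ~kelvin at these magnitudes.

def radius_ratio(t_old_k: float, t_new_k: float) -> float:
    """Factor by which the inferred radius grows when T is revised downward."""
    return (t_old_k / t_new_k) ** 2

factor = radius_ratio(700_000.0, 434_000.0)
print(f"radius grows by a factor of ~{factor:.2f}")  # ~2.60
# A 4-8 km estimate at 700,000 K scales to roughly 10-21 km at 434,000 K,
# consistent with the revised ~14 km figure quoted above (assuming fixed L).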
RX J1856.5−3754
[ "Astronomy" ]
749
[ "Corona Australis", "Constellations" ]
1,537,992
https://en.wikipedia.org/wiki/Descent%20direction
In optimization, a descent direction is a vector p that points towards a local minimum x* of an objective function f : R^n → R. Computing x* by an iterative method, such as line search, defines a descent direction p_k at the k-th iterate to be any vector such that ⟨p_k, ∇f(x_k)⟩ < 0, where ⟨·, ·⟩ denotes the inner product. The motivation for such an approach is that small steps along p_k guarantee that f is reduced, by Taylor's theorem. Using this definition, the negative of a non-zero gradient is always a descent direction, as ⟨−∇f(x_k), ∇f(x_k)⟩ = −‖∇f(x_k)‖² < 0. Numerous methods exist to compute descent directions, all with differing merits, such as gradient descent or the conjugate gradient method. More generally, if P is a positive definite matrix, then p_k = −P ∇f(x_k) is a descent direction at x_k. This generality is used in preconditioned gradient descent methods. See also Directional derivative References Mathematical optimization
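A minimal Python sketch (using NumPy and an assumed quadratic objective) that checks the descent condition ⟨p, ∇f(x)⟩ < 0 for both the negative gradient and a preconditioned direction −P∇f(x); the objective and the preconditioner are illustrative assumptions, not part of the definition itself.

import numpy as np

# Sketch: verifying descent directions for an assumed quadratic objective
#   f(x) = 0.5 * x^T A x - b^T x,  with gradient  grad f(x) = A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite (assumed)
b = np.array([1.0, -1.0])

def grad(x):
    return A @ x - b

def is_descent_direction(p, x):
    """p is a descent direction at x iff <p, grad f(x)> < 0."""
    return p @ grad(x) < 0

x = np.array([2.0, 2.0])
g = grad(x)

print(is_descent_direction(-g, x))        # True: the negative gradient always works
P = np.diag(1.0 / np.diag(A))             # a simple (Jacobi) preconditioner
print(is_descent_direction(-P @ g, x))    # True: -P grad f is a descent direction
                                          # whenever P is positive definite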
Descent direction
[ "Mathematics" ]
159
[ "Mathematical optimization", "Mathematical analysis" ]
1,538,007
https://en.wikipedia.org/wiki/Matrix%20chain%20multiplication
Matrix chain multiplication (or the matrix chain ordering problem) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming. There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices A, B, C, and D, there are five possible options: ((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD)). Although it does not affect the product, the order in which the terms are parenthesized affects the number of simple arithmetic operations needed to compute the product, that is, the computational complexity. The straightforward multiplication of a matrix that is X × Y by a matrix that is Y × Z requires XYZ ordinary multiplications and X(Y − 1)Z ordinary additions. In this context, it is typical to use the number of ordinary multiplications as a measure of the runtime complexity. If A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix, then computing (AB)C needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while computing A(BC) needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations. Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of n matrices?" The number of possible parenthesizations is given by the (n−1)-th Catalan number, which is O(4^n / n^(3/2)), so checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large n. A quicker solution to this problem can be achieved by breaking up the problem into a set of related subproblems. A dynamic programming algorithm To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm: Take the sequence of matrices and separate it into two subsequences. Find the minimum cost of multiplying out each subsequence. Add these costs together, and add in the cost of multiplying the two result matrices. Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them. For example, if we have four matrices ABCD, we compute the cost required to find each of (A)(BCD), (AB)(CD), and (ABC)(D), making recursive calls to find the minimum cost to compute ABC, AB, CD, and BCD. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor. However, this algorithm has exponential runtime complexity, making it as inefficient as the naive approach of trying all permutations. The reason is that the algorithm does a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ABC and AB. But finding the best cost for computing ABC also requires finding the best cost for AB.
As the recursion grows deeper, more and more of this type of unnecessary repetition occurs. One simple solution is called memoization: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer, and do not recompute it. Since there are about n^2/2 different subsequences, where n is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O(n^3) from O(2^n), which is more than efficient enough for real applications. This is top-down dynamic programming. The following bottom-up approach computes, for each 2 ≤ k ≤ n, the minimum costs of all subsequences of length k using the costs of smaller subsequences already computed. It has the same asymptotic runtime and requires no recursion. Pseudocode:

// Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
MatrixChainOrder(int dims[]) {
    // length[dims] = n + 1
    n = dims.length - 1;
    // m[i, j] = Minimum number of scalar multiplications (i.e., cost)
    // needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
    // The cost is zero when multiplying one matrix
    for (i = 1; i <= n; i++)
        m[i, i] = 0;
    for (len = 2; len <= n; len++) { // Subsequence lengths
        for (i = 1; i <= n - len + 1; i++) {
            j = i + len - 1;
            m[i, j] = MAXINT;
            for (k = i; k <= j - 1; k++) {
                cost = m[i, k] + m[k+1, j] + dims[i-1]*dims[k]*dims[j];
                if (cost < m[i, j]) {
                    m[i, j] = cost;
                    s[i, j] = k; // Index of the subsequence split that achieved minimal cost
                }
            }
        }
    }
}

Note: the first index for dims is 0 and the first index for m and s is 1. A Python implementation using the memoization decorator from the standard library:

from functools import cache

def matrixChainOrder(dims: list[int]) -> int:
    @cache
    def a(i, j):
        return min((a(i, k) + dims[i] * dims[k] * dims[j] + a(k, j)
                    for k in range(i + 1, j)), default=0)
    return a(0, len(dims) - 1)

More efficient algorithms There are algorithms that are more efficient than the O(n^3) dynamic programming algorithm, though they are more complex. Hu & Shing An algorithm published by T. C. Hu and M.-T. Shing achieves O(n log n) computational complexity. They showed how the matrix chain multiplication problem can be transformed (or reduced) into the problem of triangulation of a regular polygon. The polygon is oriented such that there is a horizontal bottom side, called the base, which represents the final result. The other n sides of the polygon, in the clockwise direction, represent the matrices. The vertices on each end of a side are the dimensions of the matrix represented by that side. With n matrices in the multiplication chain there are n−1 binary operations and C(n−1) ways of placing parentheses, where C(n−1) is the (n−1)-th Catalan number. The algorithm exploits that there are also C(n−1) possible triangulations of a polygon with n+1 sides; for example, the possible triangulations of a regular hexagon correspond to the different ways that parentheses can be placed to order the multiplications for a product of 5 matrices. For the example below, there are four sides: A, B, C and the final result ABC. A is a 10×30 matrix, B is a 30×5 matrix, C is a 5×60 matrix, and the final result is a 10×60 matrix. The regular polygon for this example is a 4-gon, i.e. a square. The matrix product AB is a 10×5 matrix and BC is a 30×60 matrix.
The two possible triangulations in this example correspond to the two parenthesizations. The cost of a single triangle in terms of the number of multiplications needed is the product of its vertices. The total cost of a particular triangulation of the polygon is the sum of the costs of all its triangles:
(AB)C: (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 multiplications
A(BC): (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 multiplications
Hu & Shing developed an algorithm that finds an optimum solution for the minimum cost partition problem in O(n log n) time. Their proof of correctness of the algorithm relies on "Lemma 1" proved in a 1981 technical report and omitted from the published paper. The technical report's proof of the lemma is incorrect, but Shing has presented a corrected proof. Other O(n log n) algorithms Wang, Zhu and Tian have published a simplified O(n log m) algorithm, where n is the number of matrices in the chain and m is the number of local minimums in the dimension sequence of the given matrix chain. Nimbark, Gohel, and Doshi have published a greedy O(n log n) algorithm, but their proof of optimality is incorrect and their algorithm fails to produce the most efficient parentheses assignment for some matrix chains. Chin-Hu-Shing approximate solution An algorithm created independently by Chin and Hu & Shing runs in O(n) and produces a parenthesization which is at most 15.47% worse than the optimal choice. In most cases the algorithm yields the optimal solution or a solution which is only 1-2 percent worse than the optimal one. The algorithm starts by translating the problem to the polygon partitioning problem. To each vertex V of the polygon is associated a weight w. Suppose we have three consecutive vertices V_(i−1), V_i, V_(i+1), and that V_min is the vertex with minimum weight w_min. We look at the quadrilateral with vertices V_min, V_(i−1), V_i, V_(i+1) (in clockwise order). We can triangulate it in two ways: (V_min, V_(i−1), V_i) and (V_min, V_i, V_(i+1)), with cost w_min·w_(i−1)·w_i + w_min·w_i·w_(i+1); or (V_min, V_(i−1), V_(i+1)) and (V_(i−1), V_i, V_(i+1)), with cost w_min·w_(i−1)·w_(i+1) + w_(i−1)·w_i·w_(i+1). Therefore, if w_min·w_(i−1)·w_(i+1) + w_(i−1)·w_i·w_(i+1) < w_min·w_(i−1)·w_i + w_min·w_i·w_(i+1), or equivalently 1/w_i + 1/w_min < 1/w_(i−1) + 1/w_(i+1), we remove the vertex V_i from the polygon and add the side V_(i−1)V_(i+1) to the triangulation. We repeat this process until no V_i satisfies the condition above. For all the remaining vertices V_i, we add the side from V_min to V_i to the triangulation. This gives us a nearly optimal triangulation. Generalizations The matrix chain multiplication problem generalizes to solving a more abstract problem: given a linear sequence of objects, an associative binary operation on those objects, and a way to compute the cost of performing that operation on any two given objects (as well as all partial results), compute the minimum cost way to group the objects to apply the operation over the sequence. A practical instance of this comes from the ordering of join operations in databases. Another somewhat contrived special case of this is string concatenation of a list of strings. In C, for example, the cost of concatenating two strings of length m and n using strcat is O(m + n), since we need O(m) time to find the end of the first string and O(n) time to copy the second string onto the end of it. Using this cost function, we can write a dynamic programming algorithm to find the fastest way to concatenate a sequence of strings. However, this optimization is rather useless because we can straightforwardly concatenate the strings in time proportional to the sum of their lengths. A similar problem exists for singly linked lists. Another generalization is to solve the problem when parallel processors are available.
In this case, instead of adding the costs of computing each factor of a matrix product, we take the maximum because we can do them simultaneously. This can drastically affect both the minimum cost and the final optimal grouping; more "balanced" groupings that keep all the processors busy are favored. There are even more sophisticated approaches. See also Associahedron Tamari lattice References Optimization algorithms and methods Matrices Dynamic programming Articles with example Python (programming language) code
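As a complement to the bottom-up pseudocode above, here is a Python sketch that fills both the cost table m and the split table s, and then reconstructs an optimal parenthesization from s. The function and variable names are illustrative, not from any particular reference implementation.

# Sketch: bottom-up matrix chain order plus reconstruction of the
# optimal parenthesization from the split table s (names are illustrative).

def matrix_chain(dims):
    """dims[i-1] x dims[i] is the shape of matrix A[i], for i = 1..n."""
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min cost of A[i..j]
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: split achieving m[i][j]
    for length in range(2, n + 1):              # subsequence lengths
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j], s[i][j] = min(
                (m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j], k)
                for k in range(i, j)
            )
    return m, s

def parenthesize(s, i, j):
    """Rebuild the optimal grouping of A[i..j] from the split table."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({parenthesize(s, i, k)}{parenthesize(s, k + 1, j)})"

m, s = matrix_chain([10, 30, 5, 60])            # the A, B, C example above
print(m[1][3], parenthesize(s, 1, 3))           # prints: 4500 ((A1A2)A3)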
Matrix chain multiplication
[ "Mathematics" ]
2,680
[ "Matrices (mathematics)", "Mathematical objects" ]
1,538,038
https://en.wikipedia.org/wiki/Performativity
Performativity is the concept that language can function as a form of social action and have the effect of change. The concept has multiple applications in diverse fields such as anthropology, social and cultural geography, economics, gender studies (social construction of gender), law, linguistics, performance studies, history, management studies and philosophy. The concept was first described by philosopher of language John L. Austin when he referred to a specific capacity: the capacity of speech and communication to act or to consummate an action. Austin differentiated this from constative language, which he defined as descriptive language that can be "evaluated as true or false". Common examples of performative language are making promises, betting, performing a wedding ceremony, an umpire calling a foul, or a judge pronouncing a verdict. Influenced by Austin, gender studies philosopher Judith Butler argued that gender is socially constructed through commonplace speech acts and nonverbal communication that are performative, in that they serve to define and maintain identities. This view of performativity reverses the idea that a person's identity is the source of their secondary actions (speech, gestures). Instead, it views actions, behaviors, and gestures both as the result of an individual's identity and as a source that contributes to the formation of that identity, which is continuously being redefined through speech acts and symbolic communication. This view was also influenced by philosophers such as Michel Foucault and Louis Althusser. Defining performance Performance is a bodily practice that produces meaning. It is the presentation or 're-actualization' of symbolic systems through living bodies as well as lifeless mediating objects, such as architecture. In the academic field, as opposed to the domain of the performing arts, the concept of performance is generally used to highlight dynamic interactions between social actors or between a social actor and their immediate environment. Performance is an equivocal concept, and for the purpose of analysis it is useful to distinguish between two senses of 'performance'. In the more formal sense, performance refers to a framed event. Performance in this sense is an enactment out of convention and tradition. Founder of the discipline of performance studies Richard Schechner dubs this category 'is-performance'. In a weaker sense, performance refers to the informal scenarios of daily life, suggesting that everyday practices are 'performed'. Schechner calls this the 'as-performance'. Generally the performative turn is concerned with the latter, although the two senses of performance should be seen as ends of a spectrum rather than distinct categories. History The performative turn is a paradigmatic shift in the humanities and social sciences that affected such disciplines as anthropology, archaeology, linguistics, ethnography, history and the relatively young discipline of performance studies. Previously used as a metaphor for theatricality, performance is now often employed as a heuristic principle to understand human behaviour. The assumption is that all human practices are 'performed', so that any action at whatever moment or location can be seen as a public presentation of the self. This methodological approach entered the social sciences and humanities in the 1990s but is rooted in the 1940s and 1950s.
Underlying the performative turn was the need to conceptualize how human practices relate to their contexts in a way that went beyond the traditional sociological methods that did not problematize representation. Instead of focusing solely on given symbolic structures and texts, scholars stress the active, social construction of reality as well as the way that individual behaviour is determined by the context in which it occurs. Performance functions both as a metaphor and an analytical tool and thus provides a perspective for framing and analysing social and cultural phenomena. Origins The origins of the performative turn can be traced back to two strands of theorizing about performance as a social category that surfaced in the 1940s and 1950s. The first strand is anthropological in origin and may be labelled the dramaturgical model. Kenneth Burke (1945) expounded a 'dramatistic approach' to analyse the motives underlying such phenomena as communicative actions and the history of philosophy. Anthropologist Victor Turner focussed on cultural expression in staged theatre and ritual. In his highly influential The Presentation of Self in Everyday Life (1959), Erving Goffman emphasized the link between social life and performance by stating that 'the theatre of performances is in public acts'. Within the performative turn, the dramaturgical model evolved from the classical concept of 'society as theatre' into a broader category that considers all culture as performance. The second strand of theory concerns a development in the philosophy of language launched by John Austin in the 1950s. In How to Do Things with Words he introduced the concept of the 'performative utterance', opposing the prevalent principle that declarative sentences are always statements that can be either true or false. Instead he argued that 'to say something is to do something'. In the 1960s John Searle extended this concept to the broader field of speech act theory, where due attention is paid to the use and function of language. In the 1970s Searle engaged in polemics with postmodern philosopher Jacques Derrida about the determinability of context and the nature of authorial intentions in a performative text.
Having shown that all utterances perform actions, even apparently constative ones, Austin famously discarded the distinction between "performative" and "constative" utterances halfway through the lecture series that became the book and replaced it with a three-level framework:
locution (the actual words spoken, that which the linguists and linguistic philosophers of the day were mostly interested in analyzing);
illocutionary force (what the speaker is attempting to do in uttering the locution); and
perlocutionary effect (the effect the speaker actually has on the interlocutor by uttering the locution).
For example, if a speech act is an attempt to distract someone, the illocutionary force is the attempt to distract and the perlocutionary effect is the actual distraction caused by the speech act in the interlocutor. Influence of Austin Austin's account of performativity has been subject to extensive discussion in philosophy, literature, and beyond. Jacques Derrida, Shoshana Felman, Judith Butler, and Eve Kosofsky Sedgwick are among the scholars who have elaborated upon and contested aspects of Austin's account from the vantage point of deconstruction, psychoanalysis, feminism, and queer theory. Particularly in the work of feminists and queer theorists, performativity has played an important role in discussions of social change (Oliver 2003). The concept of performativity has also been used in science and technology studies and in economic sociology. Andrew Pickering has proposed to shift from a "representational idiom" to a "performative idiom" in the study of science. Michel Callon has proposed to study the performative aspects of economics, i.e. the extent to which economic science plays an important role not only in describing markets and economies, but also in framing them. Karen Barad has argued that science and technology studies deemphasize the performativity of language in order to explore the performativity of matter (Barad 2003). Other uses of the notion of performativity in the social sciences include the daily behavior (or performance) of individuals based on social norms or habits. Philosopher and feminist theorist Judith Butler has used the concept of performativity in their analysis of gender development, as well as in analysis of political speech. Eve Kosofsky Sedgwick describes queer performativity as an ongoing project for transforming the way we may define, and break, boundaries to identity. Through her suggestion that shame is a potentially performative and transformational emotion, Sedgwick has also linked queer performativity to affect theory. Also innovative in Sedgwick's discussion of the performative is what she calls periperformativity (2003: 67–91), which is effectively the group contribution to the success or failure of a speech act.
The conceptual shift became manifest in a methodology oriented towards culture as a dynamic phenomenon as well as in the focus on subjects of study that were neglected before, such as everyday life. For scholars, the concept of performance is a means to come to grips with human agency and to better understand the way social life is constructed. Jean-François Lyotard In The Postmodern Condition: A Report on Knowledge (1979, English translation 1984), philosopher and cultural theorist Jean-François Lyotard defined performativity as the defining mode of legitimation of postmodern knowledge and social bonds, that is, power. In contrast to the legitimation of modern knowledge through such grand narratives as Progress, Revolution, and Liberation, performativity operates by system optimization or the calculation of inputs and outputs. In a footnote, Lyotard aligns performativity with Austin's concept of the performative speech act. Postmodern knowledge must not only report: it must do something and do it efficiently by maximizing input/output ratios. Lyotard uses Wittgenstein's notion of language games to theorize how performativity governs the articulation, funding, and conduct of contemporary research and education, arguing that at bottom it involves the threat of terror: "be operational (that is commensurable) or disappear" (xxiv). While Lyotard is highly critical of performativity, he notes that it calls on researchers to explain not only the worth of their work but also the worth of that worth. Lyotard associated performativity with the rise of digital computers in the post-World War II period. In Postwar: A History of Europe Since 1945, historian Tony Judt cites Lyotard to argue that the Left has largely abandoned revolutionary politics for human rights advocacy. The widespread adoption of performance reviews, organizational assessments, and learning outcomes by different social institutions worldwide has led social researchers to theorize "audit culture" and "global performativity". Against performativity and Jürgen Habermas' call for consensus, Lyotard argued for legitimation by paralogy, or the destabilizing, often paradoxical, introduction of difference into language games. John Searle In A Taxonomy of Illocutionary Acts, John Searle takes up and reformulates the ideas of his colleague J. L. Austin. Though Searle largely supports and agrees with Austin's theory of speech acts, he has a number of critiques, which he outlines: "In sum, there are (at least) six related difficulties with Austin's taxonomy; in ascending order of importance: there is a persistent confusion between verbs and acts, not all the verbs are illocutionary verbs, there is too much overlap of the categories, too much heterogeneity within the categories, many of the verbs listed in the categories don't satisfy the definition given for the category and, most important, there is no consistent principle of classification." Searle's last key departure from Austin lies in his claim that four of his universal 'acts' do not need 'extra-linguistic' contexts to succeed. As opposed to Austin, who thinks all illocutionary acts need extra-linguistic institutions, Searle disregards the necessity of context and replaces it with the "rules of language". Jacques Derrida Philosopher Jacques Derrida drew on Austin's theory of the performative speech act while deconstructing its logocentric and phonocentric premises and reinscribing it within the operations of generalized writing.
In contrast to structuralism's focus on linguistic form, Austin had introduced the force of speech acts, which Derrida aligns with Nietzsche's insights on language. In "Signature, Event, Context," Derrida focused on Austin's privileging of speech and the accompanying presumptions of the presence of a speaker ("signature") and the bounding of a performative's force by an act or a context. In a passage that would become a touchstone of poststructuralist thought, Derrida stresses the citationality or iterability of any and all signs: "Every sign, linguistic or nonlinguistic, spoken or written (in the current sense of this opposition), in a small or large unit, can be cited, put between quotation marks; in doing so it can break with every given context, engendering an infinity of new contexts in a manner which is absolutely illimitable. This does not imply that the mark is valid outside of a context, but on the contrary that there are only contexts without any center or absolute anchorage [ancrage]. This citationality, this duplication or duplicity, this iterability of the mark is neither an accident nor an anomaly, it is that (normal/abnormal) without which a mark could not even have a function called "normal." What would a mark be that could not be cited? Or one whose origins would not get lost along the way?" Derrida's stress on the citational dimension of performativity would be taken up by Judith Butler and other theorists. While he addressed the performativity of individual subject formation, Derrida also raised such questions as whether we can mark when the event of the Russian revolution went awry, thus scaling up the field of performativity to historical dimensions. Judith Butler Philosopher and feminist theorist Judith Butler offered a new, more Continental (specifically, Foucauldian) reading of the notion of performativity, which has its roots in linguistics and philosophy of language. They describe performativity as "that reiterative power of discourse to produce the phenomena that it regulates and constrains." They have largely used this concept in their analysis of gender development. The concept places emphasis on the manners by which identity is passed or brought to life through discourse. Performative acts are types of authoritative speech. This can only happen and be enforced through the law or norms of the society. These statements, just by speaking them, carry out a certain action and exhibit a certain level of power. Examples of these types of statements are declarations of ownership, baptisms, inaugurations, and legal sentences. Something that is key to performativity is repetition. The statements are not singular in nature or use and must be used consistently in order to exert power. Performance theory and gender perspectives Butler explains gender as constructed by repeated acts: acts that people come to perform in the mode of belief, which cite existing norms, analogous to a script. Butler sees gender not as an expression of what one is but as something that one does. The appearance of a gendered essence is merely a "performative accomplishment". Furthermore, they do not see it as socially imposed on a self that is prior to gender, as the self is not distinct from the categories which constitute it. According to Butler's theory, homosexuality and heterosexuality are not fixed categories. For Butler, a person is merely in a condition of "doing straightness" or "doing queerness," where these categories are not natural but historically and socially constituted.
"For Butler, the distinction between the personal and the political or between private and public is itself a fiction designed to support an oppressive status quo: our most personal acts are, in fact, continually being scripted by hegemonic social conventions and ideologies". Theoretical criticisms Several criticisms have been raised regarding Butler's concept of performativity. The first is that the theory is individual in nature and does not take into consideration such factors as the space within which the performance occurs, the others involved, and how others might see or interpret what they witness. It has also been argued that Butler overlooks the unplanned effects of the performance act and the contingencies surrounding it. Another criticism is that Butler is not clear about the concept of subject. It has been said that in Butler's writings, the subject sometimes only exists tentatively, sometimes possesses a "real" existence, and other times is socially active. Also, some observe that the theory might be better suited to literary analysis as opposed to social theory. Others criticize Butler for taking ethnomethodological and symbolic interactionist sociological analyses of gender and merely reinventing them in the concept of performativity. For example, A. I. Green argues that the work of Kessler and McKenna (1978) and West and Zimmerman (1987) builds directly from Garfinkel (1967) and Goffman (1959) to deconstruct gender into moments of attribution and iteration in a continual social process of "doing" masculinity and femininity in the performative interval. These latter works are premised on the notion that gender does not precede but, rather, follows from practice, instantiated in micro-interaction Elaborations and related concepts The concept of performance has been developed by such scholars as Richard Schechner, Victor Turner, Clifford Geertz, Erving Goffman, John Austin, John Searle, Pierre Bourdieu, Stern and Henderson, and Judith Butler. Performance studies Performance studies emerged through the work of, among others, theatre director and scholar Richard Schechner, who applied the notion of performance to human behaviour beyond the performing arts. His interpretation of performance as non-artistic yet expressive social behaviour and his collaboration in 1985 with anthropologist Victor Turner led to the beginning of performance studies as a separate discipline. Schechner defines performance as 'restored behaviour', to emphasize the symbolic and coded aspects of culture. Schechner understands performance as a continuum. Not everything is meant to be a performance, but everything, from performing arts to politics and economics, can be studied as performance. Performativity A related concept that emphasizes the political aspect of performance and its exercise of power is performativity. It is associated with philosopher and gender theorist Judith Butler. It is an anti-essentialist theory of subjectivity in which a performance of the self is repeated and dependent upon a social audience. In this way, these unfixed and precarious performances come to have the appearance of substance and continuity. A key theoretical point that was most radical in regards to theories of subjectivity and performance is that there is no performer behind the performance. Butler derived this idea from Nietzsche's concept of "no doer behind the deed." This is to say that there is no self before the performance of the self, but rather that the performance has constitutive powers. 
This is how categories of the self, such as gender, are for Judith Butler something that one "does" rather than something one "is." Habitus In the 1970s, Pierre Bourdieu introduced the concept of 'habitus', or regulated improvisation, in reaction against the structuralist notion of culture as a system of rules (Bourdieu 1972). Culture in this perspective undergoes a shift from 'a productive to a reproductive social order in which simulations and models constitute the world so that the distinction between real and appearance becomes erased'. Though Bourdieu himself does not often employ the term 'performance', the notion of the bodily habitus as a formative site has been a source of inspiration for performance theorists. Occasionalism The cultural historian Peter Burke suggested using the term 'occasionalism' to stress the implication of the idea of performance that '[...] on different occasions or in different situations the same person behaves in different ways'. Non-representational theory Within the social sciences and humanities, an interdisciplinary strand that has contributed to the performative turn is non-representational theory. It is a 'theory of practices' that focuses on repetitive ways of expression, such as speech and gestures. As opposed to representational theory, it argues that human conduct is a result of linguistic interplay rather than of codes and symbols that are consciously planned. Non-representational theory interprets actions and events, such as dance or theatre, as actualisations of knowledge. It also intends to shift the focus away from the technical aspects of representation to the practice itself. Various applications Performance offers a tremendous interdisciplinary archive of social practices. It offers methods to study such phenomena as body art, ecological theatre, multimedia performance and other kinds of performance arts. Performance also provides a new registry of kinaesthetic effects, enabling a more attentive observation of the moving body. The changing experience of movement, for example as a result of new technologies, has become an important subject of research. Moreover, the performative turn has helped scholars to develop an awareness of the relations between everyday life and stage performances. For example, at conferences and lectures, on the street and in other places where people speak in public, performers tend to use techniques derived from the world of theatre and dance. Performance allows us to study nature and other apparently 'immovable' and 'objectified' elements of the human environment (e.g. architecture) as active agents, rather than only as passive objects. Thus, in recent decades environmental scholars have acknowledged the existence of a fluid interaction between man and nature. The performative turn has provided additional tools to study everyday life. A household, for example, may be considered a performance in which the relation between wife and husband is a role play between two actors. Economics and finance In economics, the "performativity thesis" is the claim that the assumptions and models used by professionals and popularizers affect the phenomena they purport to describe, bringing the world more into line with theory. It also refers, more broadly, to the idea of economic reality as a ceaselessly provoked reality, and of things such as performance indicators, valuation formulas, consumer tests, stock prices or financial contracts constituting what they refer to.
This theory was developed by Michel Callon in The Laws of the Markets, before being further developed in Do Economists Make Markets?, edited by Donald Angus MacKenzie, Fabian Muniesa and Lucia Siu, and in Enacting Dismal Science, edited by Ivan Boldyrev and Ekaterina Svetlova. The most important work in the field is that of Donald MacKenzie and Yuval Millo on the social construction of financial markets. In a seminal article, they showed that the option pricing theory known as BSM (Black-Scholes-Merton) was empirically successful not because it discovered preexisting price regularities, but because participants used it to set option prices, so that it made itself true. The thesis of the performativity of economics has been extensively criticized by Nicolas Brisset in Economics and Performativity. Brisset defends the idea that the notion of performativity used by Callonian and Latourian sociologists leads to an overly relativistic view of the social world. Drawing on the work of John Austin and David Lewis, Brisset theorizes the idea of limits to performativity. To do this, Brisset considers that a theory, in order to be "performative", must become a convention, which requires several conditions to be met. To attain the status of a convention, a theory must: provide social actors with a representation of their social world allowing them to choose among several actions (the "empiricity" condition); indicate an option considered relevant when the agreement is generalised (the "self-fulfilling" condition); and be compatible with all the conventions constituting the social environment (the "coherency" condition). Based on this framework, Brisset criticized the seminal work of MacKenzie and Millo on the performativity of the Black-Scholes-Merton financial model. Drawing on the work of Pierre Bourdieu, Brisset also uses the notion of the speech act to study economic models and their use in political power relations. MacKenzie's approach was also criticized by Uskali Mäki for not using the concept of performativity in accordance with Austin's formulation. This point gave rise to a debate in economic philosophy. Management studies In management, the concept of performativity has also been mobilized, relying on its diverse conceptualizations (Austin, Barad, Barnes, Butler, Callon, Derrida, Lyotard, etc.). In the study of management theories, performativity describes how actors use theories, how theories produce effects on organizational practices, and how these effects in turn shape those practices. For instance, building on Michel Callon's perspective, the concept of performativity has been mobilized to show how the concept of Blue Ocean Strategy transformed organizational practices. Journalism The German news anchorman Hanns Joachim Friedrichs once argued that a good journalist should never make common cause with anything, not even with a good thing. On the evening of November 9, 1989, the evening of the fall of the Berlin Wall, however, Friedrichs reportedly broke his own rule when he announced: "The gates of the wall are wide open." („Die Tore in der Mauer stehen weit offen.") In reality, the gates were still closed. According to one historian, it was this announcement that encouraged thousands of East Berliners to march towards the wall, finally forcing the border guards to open the gates. In the sense of performativity, Friedrichs's words became a reality. Video art Theories of performativity have extended across multiple disciplines and discussions.
Notably, interdisciplinary theorist José Esteban Muñoz has related video to theories of performativity. Specifically, Muñoz looks at the 1996 documentary by Susana Aiken and Carlos Aparicio, "The Transformation". Although historically and theoretically related to performance art, video art is not an immediate performance; it is mediated, iterative and citational. In this way, video art raises questions of performativity. Additionally, video art frequently puts bodies on display, complicating surfaces, embodiment, and boundaries, and so indexing performativity. Issues and debates Despite cogent attempts at definition, the concept of performance continues to be plagued by ambiguities. Most pressing seems to be the paradox between performance as the consequence of following a script (cf. Schechner's restored behaviour) and performance as a fluid activity with ample room for improvisation. Another problem involves the discrepancy between performance as a human activity that constructs culture (e.g. Butler and Derrida) on the one hand and performance as a representation of culture on the other (e.g. Bourdieu and Schechner). Another issue, important to pioneers such as Austin but now deemed irrelevant by postmodernism, concerns the sincerity of the actor: can performance be authentic, or is it a product of pretence? See also Dramaturgy (sociology) Erving Goffman Frame analysis John Searle Performance Performance studies Performative text Performative utterances Speech act References Bibliography and further reading Austin, J. L. 1962. How to Do Things with Words. Oxford: Clarendon Press. Austin, J. L. 1970. "Performative Utterances." In Austin, Philosophical Papers, 233–52. London: Oxford University Press. Bakhtin, Mikhail. 1981. "Discourse in the Novel." In The Dialogic Imagination: Four Essays, edited by Michael Holquist, translated by Caryl Emerson and Michael Holquist. Austin: University of Texas Press. Bamberg, M. 2007. Narrative: State of the Art. Barad, Karen. 2003. "Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter." Signs: Journal of Women in Culture and Society 28.3: 801–831. Boldyrev, Ivan and Svetlova, Ekaterina. 2016. Enacting Dismal Science: New Perspectives on the Performativity of Economics. Basingstoke: Palgrave Macmillan. Bourdieu, P. 1972. Outline of a Theory of Practice. Cambridge. Burke, Peter. 2005. "Performing History: The Importance of Occasions." Rethinking History 9.1: 35–52. Brickell, Chris. 2005. "Masculinities, Performativity, and Subversion: A Sociological Reappraisal." Men and Masculinities 8.1: 24–43. Brisset, Nicolas. 2017. "On Performativity: Option Theory and the Resistance of Financial Phenomena." Journal of the History of Economic Thought 39.4: 549–569. https://doi.org/10.1017/S1053837217000128 Brisset, Nicolas. 2019. Economics and Performativity: Exploring Limits, Theories and Cases. Routledge INEM Advances in Economic Methodology. Butler, Judith. 1993. Bodies That Matter: On the Discursive Limits of "Sex". London and New York: Routledge. Butler, Judith. 1997. Excitable Speech: A Politics of the Performative. London and New York: Routledge. Butler, Judith. 2000. "Critically Queer." In Identity: A Reader. London: Sage Publications. Butler, Judith. 2010. "Performative Agency." Journal of Cultural Economy 3.2: 147–161. Callon, Michel. 1998. "Introduction: The Embeddedness of Economic Markets in Economics." In M. Callon (ed.), The Laws of the Markets. Oxford: Blackwell.
Carlson, M. 1996. Performance: A Critical Introduction. London. Chaney, D. 1993. Fictions of Collective Life. London. Crane, M. T. 2001. "What Was Performance?" Criticism 43.2: 169–187. Davidson, M. 1997. Ghostlier Demarcations: Modern Poetry and the Material Word. Berkeley. Davis, T. C. 2008. The Cambridge Companion to Performance Studies. Illinois. Derrida, Jacques. 1971. "Signature, Event, Context." In Limited Inc. Evanston: Northwestern University Press, 1988. Dirksmeier, P. and I. Helbrecht. 2008. "Time, Non-representational Theory and the 'Performative Turn'—Towards a New Methodology in Qualitative Social Research." Forum: Qualitative Social Research 9: 1–24. Dunn, R. G. 1997. "Self, Identity and Difference: Mead and the Poststructuralists." Sociological Quarterly 38.4: 687–705. Farnell, B. 1999. "Moving Bodies, Acting Selves." Annual Review of Anthropology 28: 341–373. Felluga, Dino. "Modules on Butler II: Performativity." Retrieved 30 October 2006. Felman, Shoshana. 1980/2003. The Scandal of the Speaking Body: Don Juan with J. L. Austin, or Seduction in Two Languages. Translated by Catherine Porter. Stanford: Stanford University Press. Garfinkel, Harold. 1967. Studies in Ethnomethodology. Englewood Cliffs, NJ: Prentice Hall. Geertz, C. 1980. Negara: The Theatre State in Nineteenth-Century Bali. Princeton. Goffman, Erving. 1959. The Presentation of Self in Everyday Life. Garden City, NY: Anchor. Glass, Michael and Rose-Redwood, Reuben. 2014. Performativity, Politics, and the Production of Social Space. New York: Routledge. Goffman, Erving. 1976. "Gender Display" and "Gender Commercials." In Gender Advertisements. New York: Harper and Row. Goffman, Erving. 1983. "Frame Analysis of Talk." In The Goffman Reader, Lemert and Branaman, eds. Blackwell, 1997. Green, Adam Isaiah. 2007. "Queer Theory and Sociology: Locating the Subject and the Self in Sexuality Studies." Sociological Theory 25.1: 26–45. Green, B. 1997. Spectacular Confession: Autobiography, Performative Activism and the Sites of Suffrage, 1905–1938. London. Hall, Stuart. 2000. "Who Needs Identity?" In Identity: A Reader. London: Sage Publications. Hawkes, David. 2020. The Reign of Anti-logos: Performance in Postmodernity (Palgrave Insights into Apocalypse Economics). London and New York: Palgrave Macmillan. Hymes, D. 1975. "Breakthrough into Performance." In D. Ben-Amos and K. S. Goldstein (eds.), Folklore: Performance and Communication. The Hague. Ingold, T. 1993. "The Temporality of the Landscape." World Archaeology 25: 152–174. Kapchan, D. "Performance." Journal of American Folklore 108: 479–508. Kessler, Suzanne, and Wendy McKenna. 1978. Gender: An Ethnomethodological Approach. Chicago: University of Chicago Press. Lloyd, Moya. 1999. "Performativity, Parody, Politics." Theory, Culture & Society 16.2: 195–213. McKenzie, J. 2005. "Performance Studies." The Johns Hopkins Guide to Literary Theory and Criticism. Matynia, Elzbieta. 2009. Performative Democracy. Boulder: Paradigm. Membretti, Andrea. 2009. "Per un uso performativo delle immagini nella ricerca-azione sociale" [On a performative use of images in social action research]. Lo Squaderno 12 (http://www.losquaderno.professionaldreamers.net/?p=1101). McKenzie, Jon. 2001. Perform or Else: From Discipline to Performance. London: Routledge. McKenzie, Jon, Heike Roms, and C. J. Wan-ling Wee. 2010. Contesting Performance: Global Sites of Research. Basingstoke, UK: Palgrave Macmillan. Muñoz, José Esteban. 1999. "Performing Disidentifications." In
Disidentifications: Queers of Color and the Performance of Politics. Oliver, Kelly. 2003. "What Is Transformative about the Performative? From Repetition to Working Through." In Ann Cahill and Jennifer Hansen, eds., Continental Feminism Reader. Parker, Andrew and Eve Kosofsky Sedgwick. 1995. "Introduction: Performativity and Performance." In Performativity and Performance. Pickering, Andrew. 1995. The Mangle of Practice: Time, Agency and Science. Chicago: University of Chicago Press. Porter, J. N. 1990. "Review: Postmodernism by Mike Featherstone." Contemporary Sociology 19: 323. Robinson, Douglas. 2003. Performative Linguistics: Speaking and Translating as Doing Things with Words. London and New York: Routledge. Robinson, Douglas. 2006. Introducing Performative Pragmatics. London and New York: Routledge. Roudavski, Stanislav. 2008. Staging Places as Performances: Creative Strategies for Architecture. PhD thesis, University of Cambridge. Rosaldo, Michelle. 1982. "The Things We Do with Words: Ilongot Speech Acts and Speech Act Theory in Philosophy." Language in Society 11: 203–237. Schechner, Richard. 2006. Performance Studies: An Introduction. New York. Schieffelin, E. 1998. "Problematising Performance." In F. Hughes-Freeland (ed.), Ritual, Performance, Media. London: 194–207. Searle, John. 1969. Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press. Sedgwick, Eve Kosofsky. 2003. Touching Feeling: Affect, Pedagogy, Performativity. Durham, NC: Duke University Press. Stern and Henderson. 1993. Performance: Texts and Contexts. London. Thrift, N. and J. Dewsbury. 2000. "Dead Geographies – and How to Make Them Live." Environment and Planning D: Society and Space 18: 411–432. Thrift, N. J. 1997. "The Still Point: Resistance, Expressive Embodiment and Dance." In S. Pile (ed.), Geographies of Resistance. London: 125–151. Thrift, N. J. 1996. Spatial Formations. London. Weiss, B. 1996. The Making and Unmaking of the Haya Lived World: Consumption, Commoditization, and Everyday Practice. Durham. Wells, P. 1998. Understanding Animation. London. West, Candace and Don Zimmerman. 1987. "Doing Gender." Gender and Society 1.2: 121–151. External links Performance and architecture Performance and collective action Feminist philosophy Feminist terminology Pragmatics Semiotics Science studies Science and technology studies Sociological theories
Performativity
[ "Technology" ]
7,899
[ "Science and technology studies" ]
1,538,121
https://en.wikipedia.org/wiki/Tourbillon
In horology, a tourbillion or tourbillon (French for "whirlwind") is an addition to the mechanics of a watch escapement intended to increase accuracy. Conceived by the British watchmaker and inventor John Arnold, it was developed by his friend the Swiss-French watchmaker Abraham-Louis Breguet and patented by Breguet on 26 June 1801. In a tourbillon, the escapement and balance wheel are mounted in a rotating cage, with the goal of cancelling errors of poise in the balance by averaging them across all positions as the cage turns. Tourbillons are still included in some modern wristwatches, where the mechanism is usually exposed on the watch's face to showcase it. Types of tourbillon Single axis tourbillon Patented by Breguet in 1801, the single-axis tourbillon minimizes the difference in rate between positions caused by poise errors. The tourbillon was invented to complement the split bi-metallic balance, which was inherently difficult to poise. In the most common implementation, the tourbillon carriage is carried by the fourth pinion, within a stationary fourth wheel. The escape pinion is engaged with this stationary fourth wheel, so that when the carriage is turned by the fourth pinion the escape wheel also rotates. The carriage is released and locked with each vibration of the balance. Double-axis tourbillon Anthony Randall invented the double-axis tourbillon in January 1977 and subsequently patented it. The first working example was constructed by Richard Good in 1978. In 1980 Anthony Randall made a double-axis tourbillon in a carriage clock, which was located in the (now closed) Time Museum in Rockford, Illinois, US, and was included in its Catalogue of Chronometers. A characteristic of this tourbillon is that it turns around two axes, both of which rotate once per minute. The whole tourbillon is powered by a special constant-force mechanism called a remontoire. Prescher devised his constant-force mechanism to equalize the effects of a wound and unwound mainspring, friction, and gravitation, so that an even force is always supplied to the oscillating system of the double-axis tourbillon. The device incorporates a modified system after a design by Henri Jeanneret. Double and quadruple tourbillons Robert Greubel and Stephen Forsey launched the brand Greubel Forsey in 2004 with the introduction of their Double Tourbillon 30° (DT30). Both men had worked together since 1992 at Renaud & Papi, where they developed complicated watch movements. The Double Tourbillon 30° features one tourbillon carriage rotating once per minute and inclined at 30°, inside another carriage which rotates once every four minutes. In 2005, Greubel Forsey presented their Quadruple Tourbillon à Différentiel (QDT), using two double tourbillons working independently. A spherical differential connects the four rotating carriages, distributing torque between two wheels rotating at different speeds. Triple-axis tourbillon In 2004, Thomas Prescher of Thomas Prescher Haute Horlogerie developed the first triple-axis tourbillon with constant force in the carriage in a wristwatch. It was presented at Baselworld 2004 in Basel, Switzerland, in a set of three watches comprising a single-axis, a double-axis and a triple-axis tourbillon. A tri-axial tourbillon wristwatch movement using traditional jewel bearings only was created by the independent watchmaker Aaron Becsei, of Bexei Watches, in 2007. The Primus wristwatch was presented at Baselworld 2008 in Basel, Switzerland.
In this three-axis tourbillon movement, the third (external) cage has a distinctive form that makes it possible to use jewel bearings everywhere instead of ball bearings, a unique solution at this size and level of complication. A few wrist and pocket watches include triple-axis or tri-axial tourbillon escapements. Examples of companies and watchmakers that include this mechanism are Vianney Halter in his "Deep Space" watch, Thomas Prescher, Aaron Becsei, Girard-Perregaux with the "Tri-Axial Tourbillon", Purnell with the "Spherion", and Jaeger-LeCoultre with the "Heliotourbillon", released in 2024. Flying tourbillon Rather than being supported by a bridge, or cock, at both the top and bottom, the flying tourbillon is cantilevered, being supported from one side only. The first flying tourbillon was designed by Alfred Helwig, instructor at the German School of Watchmaking, in 1920. In 1993, Kiu Tai-Yu, a Chinese watchmaker residing in Hong Kong, created a semi-flying tourbillon with only an abbreviated carriage for the escape wheel and pallet fork, the upper pivot of the balance wheel being supported in a sapphire bridge. Gyro tourbillon Jaeger-LeCoultre's first wristwatch tourbillon was introduced in 1993 (though JLC had produced tourbillons prior to that, including the famous observatory competition caliber 170), and in 2004 the company introduced the Gyrotourbillon I, a double-axis tourbillon with a perpetual calendar and equation of time. Since then, Jaeger-LeCoultre has gone on to produce several variations on the multi-axis tourbillon theme. In general, these have been fairly thick watches (Gyrotourbillon I is 16 mm thick), but with the Reverso Tribute Gyrotourbillon, at 51.1 mm × 31 mm × 12.4 mm, JLC has produced a thinner and much more wearable version of its multi-axis tourbillon. Modern tourbillon watches In modern mechanical watch designs, production of a highly accurate watch does not require a tourbillon. There is even debate among horologists as to whether tourbillons ever improved the accuracy of mechanical watches, even when first introduced, or whether the watches of the day were inherently inaccurate due to design and manufacturing techniques. A tourbillon is a valued feature of collectors' and premium-priced watches, possibly for the same reason that mechanical watches fetch a much higher price than similar quartz watches that are much more accurate. High-quality tourbillon wristwatches, usually made by the Swiss luxury watch industry, are very expensive and typically retail for tens of thousands of dollars or euros, with much higher prices in the hundreds of thousands of dollars or euros being common. A recent renaissance of interest in tourbillons has been met by the industry with increased availability of timepieces bearing the feature, with the result that prices for basic tourbillon models have fallen somewhat in recent years. Previously such models were very rare, either antique or new. Any watch with a tourbillon will cost a great deal more than an equivalent piece without the feature. The prices of Swiss models typically start at $40,000, and the prices of more expensive tourbillon watches can reach six figures. The prices of some Chinese models can range from hundreds of dollars to nearly $5,000. The Donald Trump-branded "Victory Tourbillion", however, which is made in China with a production run of 147, costs $100,000.
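Whatever the verdict of that debate, the principle the mechanism relies on, averaging a position-dependent poise error over a full rotation of the cage, can be illustrated with a small numerical sketch in Python; the sinusoidal error model and the 8 s/day amplitude are illustrative assumptions, not measured data:

```python
import math

# Assume a poise error: a heavy spot on the balance makes the daily rate
# deviate roughly sinusoidally with the balance's vertical position angle.
AMPLITUDE = 8.0  # seconds per day (assumed figure)

def rate_error(angle_rad: float) -> float:
    return AMPLITUDE * math.sin(angle_rad)

# Ordinary watch: the balance sits at one fixed vertical position,
# so the full error at that position applies.
fixed = rate_error(math.radians(45))

# Tourbillon: the cage carries the balance through every position once
# per minute, so the observed rate is the error averaged over a turn.
steps = 360
averaged = sum(rate_error(math.radians(a)) for a in range(steps)) / steps

print(f"fixed position : {fixed:+.2f} s/day")
print(f"rotating cage  : {averaged:+.2f} s/day")  # ~0: the error cancels
```

The cancellation holds only for orientations the cage actually sweeps through, which is why a single-axis tourbillon helps mainly in the vertical positions of a pocket watch, and why the multi-axis designs described above extend the averaging to more orientations.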
Modern implementations typically allow the tourbillon to be seen through a window in the watch face. In addition to the decorative effect, a tourbillon can act as a second hand for some watches, if the tourbillon rotates exactly once per minute. Some tourbillons rotate faster than this (Greubel Forsey's 24-second tourbillon, for example). Many everyday watches also expose their oscillating balance wheel on the dial. Sometimes termed, appropriately enough, "open heart" watches, these are sometimes misrepresented by unscrupulous dealers as tourbillons (and as "tourbillon-style" by ethical ones). Improved affordability Several Chinese manufacturers, such as Tianjin Seagull, now produce a variety of tourbillon movements. These movements are bought as ébauches by some manufacturers and are sometimes incorporated into watches that meet the requirements of the Federation of the Swiss Watch Industry to be sold as Swiss Made, which requires 60% of the value to have been made in Switzerland. The availability of less expensive tourbillons has led industry observers to worry that another quartz crisis may occur, in which the Swiss watch industry would not be able to adapt quickly to less expensive complicated mechanical watches produced in other countries. In 2016, TAG Heuer began offering the Carrera Heuer-02T tourbillon at a suggested retail price of 14,900 CHF (~US$15,000), significantly lower than the 100,000 CHF or more charged by some other established Swiss watch brands. See also List of most expensive watches sold at auction Bugatti Tourbillon - car named after the tourbillon mechanism References Further reading External links Types of tourbillons at work Articles containing video clips Clocks Horology Swiss inventions French inventions Timekeeping components
Tourbillon
[ "Physics", "Technology", "Engineering" ]
1,928
[ "Machines", "Physical quantities", "Time", "Horology", "Clocks", "Measuring instruments", "Physical systems", "Timekeeping components", "Spacetime", "Components" ]
1,538,135
https://en.wikipedia.org/wiki/Automated%20Message%20Handling%20System
The Automated Message Handling System (AMHS) is a system used to process, store, and disseminate legacy AUTODIN messages as well as Defense Message System (DMS) messages. The term "Automated Message Handling System" or "AMHS" has not been trademarked by a vendor, but is instead a product category that includes several systems and products created by government agencies, integrators and software companies. Examples include: Telos Corporation has an AMHS product named Automated Message Handling System (AMHS) that was developed for the Defense Information Systems Agency. The National Security Agency terms its own in-house developed message handling system AMHS. The Defense Intelligence Agency and National Geospatial-Intelligence Agency both called their internal message traffic systems AMHS, and they are referred to as AMHS by Jane's Military Communications; DIA and NGA both use a blend of the Northrop Grumman MISTIC and the Boeing Multimedia Message Manager. Boldon James classifies its SAFEMail product as both a Military Message Handling System and an AMHS; it is built to work alongside Microsoft Exchange Server. Isode has a set of X.400 products that it classifies as both a Military Message Handling System (MMHS) and an Aviation Message Handling System (AMHS). Telos Corporation's AMHS product was selected by all services as the message handling system to be used for organizational messaging throughout the United States Department of Defense. The Air Force completed its transition to AMHS in November 2006; the Army has transitioned most CONUS organizations; the Coast Guard completed its transition in 2008; the Navy began transitioning in May 2007; and the USMC selected AMHS and began transitioning in November 2007. In addition to the DoD organizations, other federal agencies (including the DEA and FAA) also use the AMHS. The benefit of using AMHS over older versions of DMS is that it consolidates and reduces the number of Fortezza cards that contain X.509 certificates for each recipient. AMHS can also use Virtual Fortezza Cards, or VFCs, stored on a Type 2 Cryptographic Support Server board, or T2CSS. The T2CSS is located within the AMHS server itself, sparing the user the inconvenience of keeping track of a physical Fortezza card. References External links Telos Boldon James Isode Military communications 2000s establishments in the United States
Automated Message Handling System
[ "Engineering" ]
502
[ "Military communications", "Telecommunications engineering" ]
1,538,333
https://en.wikipedia.org/wiki/Business%20object
A business object is an entity within a multi-tiered software application that works in conjunction with the data access and business logic layers to transport data. Business objects separate state from behaviour because they are communicated across the tiers in a multi-tiered system, while the real work of the application is done in the business tier and does not move across the tiers. Function Whereas a program may implement classes that typically result in objects that manage or execute behaviours, a business object usually does nothing itself; instead it holds a set of instance variables or properties, also known as attributes, and associations with other business objects, weaving a map of objects representing the business relationships. A domain model in which business objects do not have behaviour is called an anemic domain model. Examples For example, a "Manager" would be a business object whose attributes might be "Name", "Second name", "Age", "Area" and "Country", and which could hold a 1-n association with its employees (a collection of "Employee" instances). Another example would be a concept like "Process", having "Identifier", "Name", "Start date", "End date" and "Kind" attributes and holding an association with the "Employee" (the responsible person) who started it. A minimal code sketch of the "Manager" example appears below. See also Active record pattern, design pattern that stores object data in memory in relational databases, with functions to insert, update, and delete records Business intelligence, a field within information technology that provides decision support and business-critical information based on data Data access object, design pattern that provides an interface to a type of database or other persistent mechanism, and offers data operations to application calls without exposing database details Data transfer object, design pattern where an object carries aggregated data between processes to reduce the number of calls References Rockford Lhotka, Visual Basic 6.0 Business Objects. Rockford Lhotka, Expert C# Business Objects. Rockford Lhotka, Expert One-on-One Visual Basic .NET Business Objects. External links A definition of domain model by Martin Fowler Anemic Domain Model by Martin Fowler Programming constructs
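To make the "Manager"/"Employee" example concrete, here is a minimal sketch in Python; the dataclass layout is an illustrative assumption rather than code from any particular framework, and the objects deliberately carry state only, as in an anemic domain model:

```python
# Business objects as pure state: attributes plus associations, no behaviour.
from dataclasses import dataclass, field

@dataclass
class Employee:
    name: str
    second_name: str

@dataclass
class Manager:
    name: str
    second_name: str
    age: int
    area: str
    country: str
    # 1-n association: a collection of Employee instances
    employees: list[Employee] = field(default_factory=list)

# Usage: build the object graph; any real work happens elsewhere.
boss = Manager("Ada", "Lovelace", 36, "Engineering", "UK")
boss.employees.append(Employee("Charles", "Babbage"))
```

Because such objects only transport data between tiers, behaviour such as validation, workflow and persistence would live in the business logic and data access layers rather than on these classes.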
Business object
[ "Technology" ]
433
[ "Computing stubs" ]
1,538,339
https://en.wikipedia.org/wiki/Citro%C3%ABn%20M%C3%A9hari
The Citroën Méhari is a lightweight recreational and utility vehicle, manufactured and marketed by French carmaker Citroën over 18 years in a single generation. Built in front-wheel drive (1968–1988) and four-wheel drive (1980–1983) variants, it features ABS plastic bodywork with optional/removable doors and a foldable, stowable fabric convertible top. The Méhari was very light, and featured the fully independent suspension and chassis of all Citroën 'A-Series' vehicles, using the 602 cc (36.7 cu in) variant of the flat twin petrol engine shared with the 2CV6, Dyane, and Citroën Ami. The car also uses the Dyane's headlights and bezels, and 4WD units differ externally by having the spare wheel on the hood, in a molded recess. The car is named after the fast-running dromedary camel, the méhari, which can be used for racing or transport. Citroën manufactured 144,953 Méharis between the car's French launch in May 1968 and the end of production in 1988. The Méhari was also built, under license or not, in a host of other countries and in many additional variants, including versions with a fiberglass instead of ABS body, and 2WD versions with the spare wheel on the hood. Production history Origin The Méhari was designed by French World War II fighter ace Count Roland de la Poype, who headed the French company SEAB (Société d'Etudes et d'Applications des Brevets). He developed the idea of using a plastic rather than fiberglass body. De la Poype evaluated the fashionable Mini Moke and was determined to improve on its low ground clearance, hard suspension, and rust-prone body. SEAB was already a supplier to Citroën, and developed a working concept of the car before presenting it to its client. In 1978, the Méhari was facelifted, with a revised front and grille. French military The French Army purchased 7,064 Méharis, some of which were modified to have 24 V electric power to operate the two-way radio. Méhari 4x4 In 1979, Citroën launched the Méhari 4x4 with drive to all four wheels. Unlike the Citroën 2CV Sahara 4x4, this car had only one engine, rather than one engine per axle. The body is distinguished by its spare wheel mounted on the specially designed bonnet, its additional bumpers front and rear, its flared wheel arches (for 1982), big optional tyres (for 1982) and tail lights similar to those of the Citroën Acadiane van. The 4x4 version has a gearbox with four normal speeds and a three-speed transfer gearbox for climbing slopes of up to 60 percent. At the time, the Méhari 4x4 was one of the few 4x4s with four-wheel independent suspension. The car had disc brakes on all four wheels. Méhari 4x4 production stopped in 1983; it sold only in small numbers, as it cost twice as much as the standard two-wheel drive car. Limited editions Two limited edition versions of the Méhari were sold: 'Azur': initially planned as a limited edition of 700 copies, the Méhari Azur was then integrated into the regular range given its great success. The Azur was distinguished from other Méharis by its white body with blue doors, grille and soft top. The seats were upholstered in blue and white striped fabric. 'Plage': introduced at the same time as the Azur, the Plage series was reserved for the markets of the Iberian Peninsula. The car was produced in Mangualde, in Portugal (where a new production site for the Méhari had been opened), and was characterized by a yellow body with white rims.
International production and sales Irish military The Citroën Méhari was also in service with the Irish Defence Forces, which bought a total of 12 vehicles in the late 1970s; most were sold at auction about 1985, but one is retained at the Defence Forces Training Centre in the Curragh Camp, County Kildare, Ireland. Portugal The Méhari was produced at the Mangualde factory, which built 17,500 examples. Spain The Méhari was produced at the Vigo factory from late 1969 to 1980, with 12,480 examples built. Imported models continued to be marketed until 1987. UK The Méhari was never type-approved for sale in the UK. The 2CV on which it was based also had a gap in UK sales, from 1961 to 1974. United States Citroën marketed the Méhari in the United States for model years 1969–1970, where the vehicle was classified as a truck. As trucks had far more lenient National Highway Traffic Safety Administration safety standards than passenger cars in the US, the Méhari could be sold without seat belts. Budget Rent-A-Car offered them as rentals in Hawaii. Hearst Castle, in San Simeon, California, used them as groundskeeper cars. Elvis Presley featured a US-model Méhari prominently in his 1973 broadcast Aloha from Hawaii Via Satellite. Revisions for the US market included: Altered front panel with larger 7" sealed-beam headlamps. Lateral side marker lights. Special boot lid with room for a US registration plate and a lamp (Lucas) on either side of it. Straight rear bumper. Two-speed wiper motor. Reversing lights. Hexagonal yellow "cats eyes" on front and rear sides. Argentina and Uruguay The Méhari was manufactured there in two different periods. From 1971 to 1980 it was built by Citroën Argentina SA, with 3,997 units produced; Citroën left Argentina following the collapse of the economy in the late 1970s. The IES company (Industrias Eduardo Sal-Lari) resurrected the model in 1984, this time under the name Safari or Gringa, until 1986, maintaining practically all the technical characteristics of the original model but with flared wheel arches and big tires. The spare wheel was mounted on the hood, thus gaining luggage space. Unlike French units with the spare on the hood, these were front-wheel drive only. The Argentine Méhari used the "3CV" (Citroën Ami) platform, with all its mechanicals; consequently it had drum brakes rather than the discs of its French predecessor. The bodywork also differed, being fiberglass, since there was no machinery locally to mold plastics of this size. The body of the Argentine Méhari was manufactured in Uruguay by Dasur, and the chassis were sent from Argentina so that the Nordex company could carry out assembly. At its presentation in 1971 the only color available was red, although later some were made blue for the police of Tucumán. Coinciding with the launch of the 3CV M-28 in 1978, the Méhari II was launched, distinguished by its widened rims and its orange color. This Uruguayan version of the Méhari was manufactured under license by the firm Nordex, and had a fiberglass body instead of the French original's ABS plastic (a material also used for refrigerator interiors). Equipment to heat ABS sheet material and then cut it with a refrigerated die did not yet exist in Uruguay, so it was decided to make the same vehicle using fiberglass-reinforced polyester. Otherwise it was mostly similar to its French sister, but the rear wheel arches have a different shape and are noticeably larger; it also featured a removable hardtop. 14,000 units were built.
Of the 14,000 units, 5,000 remained in Uruguay and 9,000 went to Argentina under the CAUCE agreement. Some Méharis built in Uruguay were sold in Argentina under the name Naranja Mecanica ("Clockwork Orange"). Baby Brousse & FAF Citroën built metal-bodied variants of the A-Series, in many ways steel-bodied Méharis, in many countries around the world, including the Baby Brousse and the Citroën FAF. Developed in Chile under the order of Salvador Allende in 1971 and produced between 1972 and 1974, the FAF Yagán version was inspired by the French Méhari. At first, the possibility of importing the Méhari bodywork from Uruguay was considered, but its high price discouraged those responsible for the project. Despite being an artisanal vehicle – the Yagán was made entirely by hand and no dies or molds of any kind were used – some 1,500 units were produced at its factory in Arica, where other Citroën vehicles were also assembled, such as the Ami 8 and the 2CV. Distinctive features of the Yagán were that the base chassis was that of the Citroën 2CV rather than the Méhari, and that the goal of 50% Chilean componentry was reached. Its failure was due to the high unit cost compared to higher-quality models, in addition to its failed adoption by the Chilean Army. Post-production, imitations The Méhari ended production in 1988 with no replacement. This left a gap in the market that others have tried to address. VanClee The VanClee company made a number of fiberglass kit-cars. Their models 'Emmet' and 'Mungo' were based on the Citroën A-series platform and mechanicals, and were clearly inspired by the Méhari. Fiberfab Sherpa The Méhari was never type-approved for sale in Germany, because the ABS body is flammable at 400 degrees C. In 1975, German fiberglass kit car specialist Fiberfab developed the Sherpa, using Citroën-delivered platforms, and sold 250 units. Teilhol The Teilhol company, which had been building the recently defunct Renault Rodeo, created the Tangara using 2CV mechanicals, with bolt-on pre-dyed GRP panels. It also created a Citroën AX-based model. The company ceased operations in 1990. Chassis restorations Due to its mechanical simplicity, the Méhari can easily be restored to "as new" condition; all parts, including the chassis, are easily available, creating a thriving restoration market. Cassis electric Méhari Méhari Club Cassis, a specialist based in the South of France, has been rebuilding the cars for many years, and as of 2019 sells brand-new Méhari cars with an electric powertrain. These qualify for exemption from French new car regulations (for the vintage 1968 design) as long as the car is not driven on the motorway (voitures sans permis). Factory electric Méhari The factory began selling a new electric car, the Citroën E-Méhari, in 2016. Colours The car's colour was integrated into the ABS plastic during production, with limited colour choices. One colour, Vert Montana, remained a choice throughout the car's entire production span. Except for the limited edition Azur, the official names of the colours all refer to desert regions. As ultraviolet sunlight degrades the colourfastness of ABS plastic, unrestored cars have sometimes faded. New bodies for restorations are available in various original colours. Criminal activity In 1973–1974, 63 Citroën Méharis were burned by an arsonist in Paris for unknown reasons.
In 1985, the Neapolitan journalist Giancarlo Siani was murdered in his Méhari, a green example with a black canvas top, shot 10 times in the head by two hitmen sent by the Camorra. Between October and December 2013, Siani's Méhari made a trip from Naples to Brussels, passing through Rome, in order to commemorate the life of this journalist and of all the other journalists killed by the mafia. See also BMC Mini Moke Volkswagen Type 181 Fiat Ghia Jolly Renault Rodeo Meyers Manx References External links Méhari at Citroenet Méhari links at Citroën World Restored Mehari in France Méhari modelcars IMCDB.org Mehari Cars powered by boxer engines Off-road vehicles Mini sport utility vehicles Convertibles Copolymers Plastics Thermoplastics Engineering plastic
Citroën Méhari
[ "Physics" ]
2,499
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
1,538,943
https://en.wikipedia.org/wiki/Carnobacterium%20pleistocenium
Carnobacterium pleistocenium is a recently discovered bacterium from the Arctic part of Alaska. It was found in permafrost, where it had seemingly been frozen for 32,000 years. Melting the ice, however, brought these extremophiles back to life. This is the first known case of an organism "coming back to life" from ancient ice. The bacterial cells were discovered in a tunnel dug by the Army Corps of Engineers in the 1960s to allow scientists to study the permafrost in preparation for the construction of the Trans-Alaska pipeline system. The discovery of this bacterium is of particular interest to NASA, as it suggests that such life could exist in the permafrost of Mars or on the surface of Europa. It is also of interest to scientists investigating the potential for cryogenically freezing life forms to reduce the transportation costs (in terms of life support systems) that would be associated with long-duration space travel. References External links Type strain of Carnobacterium pleistocenium at BacDive - the Bacterial Diversity Metadatabase Cryobiology Bacteria described in 2005
Carnobacterium pleistocenium
[ "Physics", "Chemistry", "Biology" ]
229
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
1,539,042
https://en.wikipedia.org/wiki/Syntactic%20foam
Syntactic foams are composite materials synthesized by filling a metal, polymer, cementitious or ceramic matrix with hollow spheres called microballoons or cenospheres, or with non-hollow spheres (e.g. perlite), as aggregates. In this context, "syntactic" means "put together." The presence of hollow particles results in lower density, higher specific strength (strength divided by density), a lower coefficient of thermal expansion, and, in some cases, radar or sonar transparency. History The term was originally coined by the Bakelite Company, in 1955, for their lightweight composites made of hollow phenolic microspheres bonded to a matrix of phenolic, epoxy, or polyester. These materials were developed in the early 1960s as improved buoyancy materials for marine applications. Other characteristics led these materials to aerospace and ground transportation vehicle applications. Research on syntactic foams has more recently been advanced by Nikhil Gupta. Characteristics Tailorability is one of the biggest advantages of these materials. The matrix material can be selected from almost any metal, polymer, or ceramic. Microballoons are available in a variety of sizes and materials, including glass microspheres, cenospheres, carbon, and polymers. The most widely used and studied foams are glass microspheres (in epoxy or polymers) and cenospheres or ceramics (in aluminium). One can change the volume fraction of microballoons or use microballoons of different effective density, the latter depending on the average ratio between the inner and outer radii of the microballoons; a worked example of this density arithmetic is sketched below. A manufacturing method for low-density syntactic foams is based on the principle of buoyancy. Strength The compressive properties of syntactic foams, in most cases, depend strongly on the properties of the filler particle material. In general, the compressive strength of the material is proportional to its density. Cementitious syntactic foams have been reported to achieve high compressive strength while maintaining low density. The matrix material has more influence on the tensile properties. Tensile strength may be greatly improved by a chemical surface treatment of the particles, such as silanization, which allows the formation of strong bonds between glass particles and the epoxy matrix. The addition of fibrous materials can also increase the tensile strength. Applications Current applications for syntactic foam include buoyancy modules for marine riser tensioners, remotely operated underwater vehicles (ROVs), autonomous underwater vehicles (AUVs), deep-sea exploration, boat hulls, and helicopter and airplane components. Cementitious syntactic foams have also been investigated as a potential lightweight structural composite material. These materials include glass microspheres dispersed in a cement paste matrix to achieve a closed-cell foam structure, instead of a metallic or a polymeric matrix. Cementitious syntactic foams have also been tested for their mechanical performance under high strain rate loading conditions to evaluate their energy dissipation capacity in crash cushions, blast walls, etc. Under these loading conditions, the glass microspheres of the cementitious syntactic foams did not show progressive crushing. Ultimately, unlike the polymeric and metallic syntactic foams, they did not emerge as suitable materials for energy dissipation applications. Structural applications of syntactic foams include use as the intermediate layer (that is, the core) of sandwich panels.
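As an illustration of how microballoon geometry sets foam density, here is a small sketch in Python; the wall and matrix densities and the 60% filler loading are assumed round numbers for illustration, not values from the article:

```python
# Effective density of a hollow microballoon and of the resulting foam.

def balloon_density(wall_density: float, radius_ratio: float) -> float:
    """Effective density of a hollow sphere whose inner/outer radius
    ratio is radius_ratio: wall density times the solid fraction of
    the sphere volume, 1 - (r_inner / r_outer)**3."""
    return wall_density * (1.0 - radius_ratio ** 3)

def foam_density(matrix_density: float, balloon_dens: float,
                 balloon_volume_fraction: float) -> float:
    """Simple rule of mixtures over filler and matrix volume fractions."""
    return (balloon_volume_fraction * balloon_dens
            + (1.0 - balloon_volume_fraction) * matrix_density)

GLASS_WALL = 2500.0  # kg/m^3, solid glass (assumed)
EPOXY = 1200.0       # kg/m^3, epoxy matrix (assumed)

balloon = balloon_density(GLASS_WALL, radius_ratio=0.95)  # thin-walled
print(f"microballoon: {balloon:.0f} kg/m^3")              # ~357 kg/m^3
print(f"foam at 60% filler: {foam_density(EPOXY, balloon, 0.60):.0f} kg/m^3")
```

The same arithmetic shows why increasing either the volume fraction or the hollowness (radius ratio) of the microballoons lowers the foam's density, consistent with the tailorability described above.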
Though cementitious syntactic foams demonstrate superior specific strength values in comparison to most conventional cementitious materials, they are challenging to manufacture. Generally, the hollow inclusions tend to float and segregate in the low-shear-strength, high-density fresh cement paste. Therefore, a uniform microstructure across the material must be achieved through strict control of the composite's rheology. In addition, certain types of glass microspheres may lead to an alkali-silica reaction, so the adverse effects of this reaction must be considered and addressed to ensure the long-term durability of these composites. Other applications include: Deep-sea buoyancy foams (a method of creating submarine hulls by 3D printing was developed in 2018). Thermoforming plug assist. Radar-transparent materials. Acoustically attenuating materials. Cores for sandwich composites. Blast-mitigating materials. Sporting goods such as bowling balls, tennis rackets, and soccer balls. References External links Composite materials Foams Materials science
Syntactic foam
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
918
[ "Applied and interdisciplinary physics", "Foams", "Composite materials", "Materials science", "Materials", "nan", "Matter" ]
1,539,049
https://en.wikipedia.org/wiki/Glass%20microsphere
Glass microspheres are microscopic spheres of glass manufactured for a wide variety of uses in research, medicine, consumer goods and various industries. Glass microspheres are usually between 1 and 1000 micrometers in diameter, although the sizes can range from 100 nanometers to 5 millimeters in diameter. Hollow glass microspheres, sometimes termed microballoons or glass bubbles, have diameters ranging from 10 to 300 micrometers. Hollow spheres are used as a lightweight filler in composite materials such as syntactic foam and lightweight concrete. Microballoons give syntactic foam its light weight, low thermal conductivity, and a resistance to compressive stress that far exceeds that of other foams. These properties are exploited in the hulls of submersibles and deep-sea oil drilling equipment, where other types of foam would implode. Hollow spheres of other materials create syntactic foams with different properties: ceramic balloons, for example, can make a light syntactic aluminium foam. Hollow spheres also have uses ranging from storage and slow release of pharmaceuticals and radioactive tracers to research in controlled storage and release of hydrogen. Microspheres are also used in composites to fill polymer resins for specific characteristics such as weight, sandability and sealing surfaces. When making surfboards, for example, shapers seal the EPS foam blanks with epoxy and microballoons to create an impermeable and easily sanded surface upon which fiberglass laminates are applied. Glass microspheres can be made by heating tiny droplets of dissolved water glass in a process known as ultrasonic spray pyrolysis (USP), and their properties can be improved somewhat by using a chemical treatment to remove some of the sodium. Sodium depletion has also allowed hollow glass microspheres to be used in chemically sensitive resin systems, such as long-pot-life epoxies or non-blown polyurethane composites. Additional functionalities, such as silane coatings, are commonly added to the surface of hollow glass microspheres to increase the matrix/microsphere interfacial strength (the common failure point when stressed in a tensile manner). Microspheres made of high-quality optical glass can be produced for research in the field of optical resonators or cavities. Glass microspheres are also produced as a waste product in coal-fired power stations. In this case the product is generally termed a "cenosphere" and carries an aluminosilicate chemistry (as opposed to the sodium-silica chemistry of engineered spheres). Small amounts of silica in the coal melt and, as they rise up the chimney stack, expand and form small hollow spheres. These spheres are collected together with the ash, which is pumped in a water mixture to the resident ash dam. Some of the particles do not become hollow and sink in the ash dams, while the hollow ones float on the surface of the dams. They become a nuisance, especially when they dry, as they become airborne and blow over into surrounding areas. Application Microspheres have been used to produce focal regions known as photonic nanojets, whose sizes are large enough to support internal resonances but at the same time small enough that geometrical optics cannot be applied to study their properties. Previous research has demonstrated, experimentally and with simulations, the use of microspheres to increase the signal intensity obtained in different experiments.
The photonic jet was confirmed at the microwave scale by observing the backscattering enhancement that occurred when metallic particles were introduced into the focus area. A measurable enhancement of the backscattered light in the visible range was obtained when a gold nanoparticle was placed inside the photonic nanojet region produced by a dielectric microsphere with a 4.4 μm diameter. The use of nanojets produced by transparent microspheres to excite optically active materials, under upconversion processes with different numbers of excitation photons, has been analyzed as well. Monodisperse glass microspheres have high sphericity and a very tight particle size distribution, often with a coefficient of variation (CV) below 10% and a specification of more than 95% of particles within the stated size range (see the sketch below for how these figures are computed). Monodisperse glass particles are often used as spacers in adhesives and coatings, such as bond line spacers in epoxies. Just a small amount of spacer-grade monodisperse microspheres can create a controlled gap, as well as define and maintain a specified bond line thickness. Spacer-grade particles can also be used as calibration standards and tracer particles for qualifying medical devices. High-quality spherical glass microspheres are often used in gas plasma displays, automotive mirrors, electronic displays, flip chip technology, filters, microscopy, and electronic equipment. Other applications include syntactic foams, particulate composites and reflective paints. Dispensing of microspheres Dispensing of microspheres can be a difficult task. When utilizing microspheres as a filler for standard mixing and dispensing machines, a breakage rate of up to 80% can occur, depending upon factors such as pump choice, material viscosity, material agitation, and temperature. Customized dispensers for microsphere-filled materials may reduce the microsphere breakage rate to a minimal amount. A progressive cavity pump is the pump of choice for dispensing materials with microspheres, and can reduce microsphere breakage by as much as 80%. See also Hydrogen storage References Materials Glass chemistry Glass types Hydrogen storage
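To make the monodispersity figures concrete, the following Python sketch computes the coefficient of variation and the in-range fraction from a measured sample of diameters; the sample values and the 45–55 μm spec window are invented for illustration:

```python
import statistics

# Invented sample of measured particle diameters, in micrometers.
diameters_um = [48.2, 50.1, 49.5, 51.0, 50.4, 49.0, 50.8, 49.7]
spec_low, spec_high = 45.0, 55.0  # assumed size-range specification

mean = statistics.mean(diameters_um)
cv_percent = 100.0 * statistics.stdev(diameters_um) / mean
in_range = sum(spec_low <= d <= spec_high for d in diameters_um)
fraction_in_range = 100.0 * in_range / len(diameters_um)

print(f"CV = {cv_percent:.1f}%        (monodisperse spec: < 10%)")
print(f"in range = {fraction_in_range:.0f}%  (spec: > 95%)")
```

A real qualification would of course use a far larger sample measured by an instrument such as a laser diffraction particle sizer, but the arithmetic is the same.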
Glass microsphere
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,141
[ "Glass engineering and science", "Materials", "Glass chemistry", "Matter" ]
1,539,324
https://en.wikipedia.org/wiki/FIPS%20140
The 140 series of Federal Information Processing Standards (FIPS) are U.S. government computer security standards that specify requirements for cryptographic modules. FIPS 140-2 and FIPS 140-3 are both accepted as current and active. FIPS 140-3 was approved on March 22, 2019 as the successor to FIPS 140-2 and became effective on September 22, 2019. FIPS 140-3 testing began on September 22, 2020, and a small number of validation certificates have been issued. FIPS 140-2 testing remained available until September 21, 2021 (later changed to April 1, 2022 for applications already in progress), creating an overlapping transition period of more than one year. FIPS 140-2 test reports that remain in the CMVP queue will still be granted validations after that date, but all FIPS 140-2 validations will be moved to the Historical List on September 21, 2026, regardless of their actual final validation date. Purpose of FIPS 140 The National Institute of Standards and Technology (NIST) issues the 140 Publication Series to coordinate the requirements and standards for cryptographic modules, which include both hardware and software components, for use by departments and agencies of the United States federal government. FIPS 140 does not purport to provide sufficient conditions to guarantee that a module conforming to its requirements is secure, still less that a system built using such modules is secure. The requirements cover not only the cryptographic modules themselves but also their documentation and (at the highest security level) some aspects of the comments contained in the source code. User agencies desiring to implement cryptographic modules should confirm that the module they are using is covered by an existing validation certificate. FIPS 140-1 and FIPS 140-2 validation certificates specify the exact module name, hardware, software, firmware, and/or applet version numbers. For Levels 2 and higher, the operating platform upon which the validation is applicable is also listed. Vendors do not always maintain their baseline validations. The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of validated cryptographic modules is required by the United States Government for all unclassified uses of cryptography. The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments. Security levels FIPS 140-2 defines four levels of security, simply named "Level 1" to "Level 4". It does not specify in detail what level of security is required by any particular application. FIPS 140-2 Level 1, the lowest, imposes very limited requirements; loosely, all components must be "production-grade" and various egregious kinds of insecurity must be absent. FIPS 140-2 Level 2 adds requirements for physical tamper-evidence and role-based authentication. FIPS 140-2 Level 3 adds requirements for physical tamper-resistance (making it difficult for attackers to gain access to sensitive information contained in the module) and identity-based authentication, and for a physical or logical separation between the interfaces by which "critical security parameters" enter and leave the module and its other interfaces. FIPS 140-2 Level 4 makes the physical security requirements more stringent, and requires robustness against environmental attacks.
In addition to the specified levels, Section 4.1.1 of the specification describes additional attacks that may require mitigation, such as differential power analysis. If a product contains countermeasures against these attacks, they must be documented and tested, but protections are not required to achieve a given level. Thus, a criticism of FIPS 140-2 is that the standard gives a false sense of security at Levels 2 and above because the standard implies that modules will be tamper-evident and/or tamper-resistant, yet modules are permitted to have side channel vulnerabilities that allow simple extraction of keys. Scope of requirements FIPS 140 imposes requirements in eleven different areas: Cryptographic module specification (what must be documented) Cryptographic module ports and interfaces (what information flows in and out, and how it must be segregated) Roles, services and authentication (who can do what with the module, and how this is checked) Finite state model (documentation of the high-level states the module can be in, and how transitions occur) Physical security (tamper evidence and resistance, and robustness against extreme environmental conditions) Operational environment (what sort of operating system the module uses and is used by) Cryptographic key management (generation, entry, output, storage and destruction of keys) EMI/EMC Self-tests (what must be tested and when, and what must be done if a test fails) Design assurance (what documentation must be provided to demonstrate that the module has been well designed and implemented) Mitigation of other attacks (if a module is designed to mitigate against, say, TEMPEST attacks then its documentation must say how) Brief history FIPS 140-1, issued on 11 January 1994 and withdrawn on May 25, 2002, was developed by a government and industry working group, composed of vendors and users of cryptographic equipment. The group identified the four "security levels" and eleven "requirement areas" listed above, and specified requirements for each area at each level. FIPS 140-2, issued on 25 May 2001, takes account of changes in available technology and official standards since 1994, and of comments received from the vendor, tester, and user communities. It was the main input document to the international standard ISO/IEC 19790:2006 Security requirements for cryptographic modules issued on 1 March 2006. NIST issued Special Publication 800-29 outlining the significant changes from FIPS 140-1 to FIPS 140-2. FIPS 140-3, issued on 22 March 2019 and announced in May 2019, is currently in the overlapping transition period to supersede FIPS 140-2 and aligns the NIST guidance around two international standards documents: ISO/IEC 19790:2012(E) Information technology — Security techniques — Security requirements for cryptographic modules and ISO/IEC 24759:2017(E) Information technology — Security techniques — Test requirements for cryptographic modules. In the first draft version of the FIPS 140-3 standard, NIST introduced a new software security section, one additional level of assurance (Level 5) and new Simple Power Analysis (SPA) and Differential Power Analysis (DPA) requirements. The draft issued on 11 Sep 2009, however, reverted to four security levels and limited the security levels of software to levels 1 and 2.
Criticism Due to the way in which the validation process is set up, a software vendor is required to re-validate their FIPS-validated module for every change, no matter how small, to the software; this re-validation is required even for obvious bug or security fixes. Since validation is an expensive process, this gives software vendors an incentive to postpone changes to their software and can result in software that does not receive security updates until the next validation. The result may be that validated software is less safe than a non-validated equivalent. This criticism has been countered more recently by some industry experts who instead put the responsibility on the vendor to narrow their validation boundary. As most of the re-validation efforts are triggered by bugs and security fixes outside the core cryptographic operations, a properly scoped validation is not subject to the common re-validation as described. See also Common Criteria FIPS 140-2 FIPS 140-3 ISO/IEC 19790 References External links Computer security standards Cryptography standards Standards of the United States
FIPS 140
[ "Technology", "Engineering" ]
1,596
[ "Computer security standards", "Computer standards", "Cybersecurity engineering" ]
1,539,548
https://en.wikipedia.org/wiki/Reversible%20computing
Reversible computing is any model of computation where the computational process, to some extent, is time-reversible. In a model of computation that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the mapping from states to their successors be one-to-one. Reversible computing is a form of unconventional computing. Due to the unitarity of quantum mechanics, quantum circuits are reversible, as long as they do not "collapse" the quantum states on which they operate. Reversibility There are two major, closely related types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility. A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic. There is a style of circuit design ideally exhibiting this property that is referred to as charge recovery logic, adiabatic circuits, or adiabatic computing (see Adiabatic process). Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility, in systems that are sufficiently well isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known. A motivation for the study of technologies aimed at implementing reversible computing is that they offer what is predicted to be the only potential way to improve the computational energy efficiency (i.e., useful operations performed per unit energy dissipated) of computers beyond the fundamental von Neumann–Landauer limit of energy dissipated per irreversible bit operation. Although the Landauer limit was millions of times below the energy consumption of computers in the 2000s and thousands of times less in the 2010s, proponents of reversible computing argue that this can be attributed largely to architectural overheads which effectively magnify the impact of Landauer's limit in practical circuit designs, so that it may prove difficult for practical technology to progress very far beyond current levels of energy efficiency if reversible computing principles are not used. Relation to thermodynamics As was first argued by Rolf Landauer while working at IBM, in order for a computational process to be physically reversible, it must also be logically reversible. Landauer's principle is the observation that the oblivious erasure of n bits of known information must always incur a cost of nk ln 2 in thermodynamic entropy, where k is the Boltzmann constant. A discrete, deterministic computational process is said to be logically reversible if the transition function that maps old computational states to new ones is a one-to-one function; i.e. the output logical states uniquely determine the input logical states of the computational operation. For computational processes that are nondeterministic (in the sense of being probabilistic or random), the relation between old and new states is not a single-valued function, and the requirement needed to obtain physical reversibility becomes a slightly weaker condition, namely that the size of a given ensemble of possible initial computational states does not decrease, on average, as the computation proceeds forwards.
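The one-to-one condition on the transition function can be checked mechanically for small gates. The following is a minimal Python sketch (the helper function and its truth-table encoding are illustrative, not from any standard library): a gate is logically reversible exactly when no two input states map to the same output state.

from itertools import product

def is_logically_reversible(gate, n_inputs):
    # A gate is logically reversible iff its truth table is one-to-one:
    # distinct input states must map to distinct output states.
    outputs = [gate(*bits) for bits in product((0, 1), repeat=n_inputs)]
    return len(outputs) == len(set(outputs))

# NOT maps 0 -> 1 and 1 -> 0: injective, hence logically reversible.
print(is_logically_reversible(lambda a: (1 - a,), 1))     # True
# XOR maps two bits to one: (0, 1) and (1, 0) both give 1, so not injective.
print(is_logically_reversible(lambda a, b: (a ^ b,), 2))  # False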
Physical reversibility Landauer's principle (and indeed, the second law of thermodynamics) can also be understood to be a direct logical consequence of the underlying reversibility of physics, as is reflected in the general Hamiltonian formulation of mechanics, and in the unitary time-evolution operator of quantum mechanics more specifically. The implementation of reversible computing thus amounts to learning how to characterize and control the physical dynamics of mechanisms to carry out desired computational operations so precisely that the experiment accumulates a negligible total amount of uncertainty regarding the complete physical state of the mechanism, per each logic operation that is performed. In other words, one must precisely track the state of the active energy involved in carrying out computational operations within the machine, and design the machine so that the majority of this energy is recovered in an organized form that can be reused for subsequent operations, rather than being permitted to dissipate into the form of heat. Although achieving this goal presents a significant challenge for the design, manufacturing, and characterization of ultra-precise new physical mechanisms for computing, there is at present no fundamental reason to think that this goal cannot eventually be accomplished, making it possible someday to build computers that generate much less than 1 bit's worth of physical entropy (and dissipate much less than kT ln 2 energy to heat) for each useful logical operation that they carry out internally. Today, the field has a substantial body of academic literature. A wide variety of reversible device concepts, logic gates, electronic circuits, processor architectures, programming languages, and application algorithms have been designed and analyzed by physicists, electrical engineers, and computer scientists. This field of research awaits the detailed development of a high-quality, cost-effective, nearly reversible logic device technology, one that includes highly energy-efficient clocking and synchronization mechanisms, or avoids the need for these through asynchronous design. This sort of solid engineering progress will be needed before the large body of theoretical research on reversible computing can find practical application in enabling real computer technology to circumvent the various near-term barriers to its energy efficiency, including the von Neumann–Landauer bound. This may only be circumvented by the use of logically reversible computing, due to the second law of thermodynamics. Logical reversibility For a computational operation to be logically reversible means that the output (or final state) of the operation can be computed from the input (or initial state), and vice versa. Reversible functions are bijective. This means that reversible gates (and circuits, i.e. compositions of multiple gates) generally have the same number of input bits as output bits (assuming that all input bits are consumed by the operation, and that all input/output states are possible). An inverter (NOT) gate is logically reversible because it can be undone. The NOT gate may however not be physically reversible, depending on its implementation. The exclusive or (XOR) gate is irreversible because its two inputs cannot be unambiguously reconstructed from its single output, or alternatively, because information erasure is not reversible. However, a reversible version of the XOR gate—the controlled NOT gate (CNOT)—can be defined by preserving one of the inputs as a second output.
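As a concrete sketch of the construction just described (a small illustrative Python model, not a physical implementation), preserving one input of XOR yields the reversible CNOT gate, which is its own inverse:

def cnot(a, b):
    # Controlled-NOT: preserves input a and outputs (a, a XOR b),
    # so, unlike plain XOR, no information is erased.
    return a, a ^ b

for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)  # applying CNOT twice is the identity
print("CNOT is reversible: every input state is recovered.")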
The three-input variant of the CNOT gate is called the Toffoli gate. It preserves two of its inputs a, b and replaces the third c by c ⊕ (a ∧ b). With c = 0, this gives the AND function, and with a = b = 1, this gives the NOT function. Because AND and NOT together form a functionally complete set, the Toffoli gate is universal and can implement any Boolean function (if given enough initialized ancilla bits). Similarly, in the Turing machine model of computation, a reversible Turing machine is one whose transition function is invertible, so that each machine state has at most one predecessor. Yves Lecerf proposed a reversible Turing machine in a 1963 paper, but, apparently unaware of Landauer's principle, did not pursue the subject further, devoting most of the rest of his career to ethnolinguistics. In 1973 Charles H. Bennett, at IBM Research, showed that a universal Turing machine could be made both logically and thermodynamically reversible, and therefore able in principle to perform an arbitrarily large number of computation steps per unit of physical energy dissipated, if operated sufficiently slowly. Thermodynamically reversible computers could perform useful computations at useful speed, while dissipating considerably less than kT of energy per logical step. In 1982 Edward Fredkin and Tommaso Toffoli proposed the Billiard ball computer, a mechanism using classical hard spheres to do reversible computations at finite speed with zero dissipation, but requiring perfect initial alignment of the balls' trajectories, and Bennett's review compared these "Brownian" and "ballistic" paradigms for reversible computation. Aside from the motivation of energy-efficient computation, reversible logic gates offered practical improvements of bit-manipulation transforms in cryptography and computer graphics. Since the 1980s, reversible circuits have attracted interest as components of quantum algorithms, and more recently in photonic and nano-computing technologies where some switching devices offer no signal gain. Surveys of reversible circuits, their construction and optimization, as well as recent research challenges, are available. Commercialization London-based Vaire Computing is prototyping a chip in 2025, for release in 2027. See also , on the uncertainty interpretation of the second law of thermodynamics , a variant of reversible cellular automata References Further reading Frank, Michael P. (2017). "The Future of Computing Depends on Making It Reversible" (web) / "Throwing Computing Into Reverse" (print). IEEE Spectrum. 54 (9): 32–37. doi:10.1109/MSPEC.2017.8012237. Perumalla K. S. (2014), Introduction to Reversible Computing, CRC Press. External links Introductory article on reversible computing First International Workshop on reversible computing Publications of Michael P. Frank: Sandia (2015-), FSU (2004-'15), UF (1999-2004), MIT (1996-'99). Internet Archive backup of the "Reversible computing community Wiki" that was administered by Frank Reversible Computation workshop/conference series CCC Workshop on Physics & Engineering Issues in Adiabatic/Reversible Classical Computing Open-source toolkit for reversible circuit design Digital electronics Models of computation Thermodynamics
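The Toffoli construction described above can be sketched in a few lines of Python (an illustrative logical model only); fixing inputs recovers AND and NOT, which is the content of the universality claim:

def toffoli(a, b, c):
    # Toffoli (CCNOT): preserves a and b, replaces c with c XOR (a AND b).
    return a, b, c ^ (a & b)

# With c = 0, the third output is a AND b.
assert all(toffoli(a, b, 0)[2] == (a & b) for a in (0, 1) for b in (0, 1))
# With a = b = 1, the third output is NOT c.
assert all(toffoli(1, 1, c)[2] == 1 - c for c in (0, 1))
print("Toffoli yields AND (c = 0) and NOT (a = b = 1).")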
Reversible computing
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
2,067
[ "Physical quantities", "Time", "Digital electronics", "Reversible computing", "Electronic engineering", "Thermodynamics", "Spacetime", "Dynamical systems" ]
1,539,563
https://en.wikipedia.org/wiki/Nonsymmetric%20gravitational%20theory
In theoretical physics, the nonsymmetric gravitational theory (NGT) of John Moffat is a classical theory of gravitation that tries to explain the observation of the flat rotation curves of galaxies. In general relativity, the gravitational field is characterized by a symmetric rank-2 tensor, the metric tensor. The possibility of generalizing the metric tensor has been considered by many, including Albert Einstein and others. A general (nonsymmetric) tensor can always be decomposed into a symmetric and an antisymmetric part. As the electromagnetic field is characterized by an antisymmetric rank-2 tensor, there is an obvious possibility for a unified theory: a nonsymmetric tensor composed of a symmetric part representing gravity, and an antisymmetric part that represents electromagnetism. Research in this direction ultimately proved fruitless; the desired classical unified field theory was not found. In 1979, Moffat made the observation that the antisymmetric part of the generalized metric tensor need not necessarily represent electromagnetism; it may represent a new, hypothetical force. Later, in 1995, Moffat noted that the field corresponding to the antisymmetric part need not be massless, like the electromagnetic (or gravitational) fields. In its original form, the theory may be unstable, although this has only been shown in the case of the linearized version. In the weak field approximation where interaction between fields is not taken into account, NGT is characterized by a symmetric rank-2 tensor field (gravity), an antisymmetric tensor field, and a constant characterizing the mass of the antisymmetric tensor field. The antisymmetric tensor field is found to satisfy the equations of a Maxwell–Proca massive antisymmetric tensor field. This led Moffat to propose metric-skew-tensor-gravity (MSTG), in which a skew symmetric tensor field is postulated as part of the gravitational action. A newer version of MSTG, in which the skew symmetric tensor field was replaced by a vector field, is scalar–tensor–vector gravity (STVG). STVG, like Milgrom's Modified Newtonian Dynamics (MOND), can provide an explanation for flat rotation curves of galaxies. In 2013, Hammond showed that the nonsymmetric part of the metric tensor is equal to the torsion potential, a result following from the metricity condition that the length of a vector is invariant under parallel transport. In addition, the energy–momentum tensor is not symmetric, and both the symmetric and nonsymmetric parts are those of a string. See also Reinventing Gravity References Theories of gravity
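The symmetric/antisymmetric decomposition mentioned above is elementary linear algebra and can be verified numerically. A minimal Python sketch (the array size and random values are illustrative only) showing that the split is exact and that each part has the claimed symmetry:

import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal((4, 4))   # a general (nonsymmetric) rank-2 tensor
g_sym = (g + g.T) / 2             # symmetric part (the gravity-like part in NGT)
g_antisym = (g - g.T) / 2         # antisymmetric part (Moffat's extra field)

assert np.allclose(g, g_sym + g_antisym)     # decomposition is exact
assert np.allclose(g_sym, g_sym.T)           # symmetric
assert np.allclose(g_antisym, -g_antisym.T)  # antisymmetric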
Nonsymmetric gravitational theory
[ "Physics" ]
546
[ "Theoretical physics", "Theories of gravity" ]
1,539,597
https://en.wikipedia.org/wiki/XHTML%20Friends%20Network
XHTML Friends Network (XFN) is an HTML microformat developed by Global Multimedia Protocols Group that provides a simple way to represent human relationships using links. XFN enables web authors to indicate relationships to the people in their blogrolls by adding one or more keywords as the rel attribute to their links. XFN was the first microformat, introduced in December 2003. Example A friend of Jimmy Example could indicate that relationship by publishing a link on their site like this: <a href="http://jimmy.example.com/" rel="friend">Jimmy Example</a> Multiple values may be used, so if that friend has met Jimmy: <a href="http://jimmy.example.com/" rel="friend met">Jimmy Example</a> See also FOAF hCard References External links XFN at the Global Multimedia Protocols Group Microformats Social networking services XML-based standards Semantic HTML
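Because XFN values are ordinary space-separated keywords in the rel attribute, they can be extracted with a few lines of code. A minimal sketch using Python's standard html.parser module (the class name is illustrative):

from html.parser import HTMLParser

class XFNParser(HTMLParser):
    # Collects (href, [rel keywords]) pairs from <a> tags.
    def __init__(self):
        super().__init__()
        self.relationships = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs and attrs.get("rel"):
            self.relationships.append((attrs["href"], attrs["rel"].split()))

parser = XFNParser()
parser.feed('<a href="http://jimmy.example.com/" rel="friend met">Jimmy Example</a>')
print(parser.relationships)  # [('http://jimmy.example.com/', ['friend', 'met'])]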
XHTML Friends Network
[ "Technology" ]
210
[ "Computer standards", "XML-based standards" ]
1,539,609
https://en.wikipedia.org/wiki/Non-methane%20volatile%20organic%20compound
Non-methane volatile organic compounds (NMVOCs) are a set of organic compounds that are typically photochemically reactive in the atmosphere—marked by the exclusion of methane. NMVOCs include a large variety of chemically different compounds, such as benzene, ethanol, formaldehyde, cyclohexane, 1,1,1-trichloroethane and acetone. Essentially, NMVOCs are identical to volatile organic compounds (VOCs), but with methane excluded. Methane is excluded in air-pollution contexts because it is not toxic. It is, however, a very potent greenhouse gas, with low reactivity and thus a long lifetime in the atmosphere. An important subset of NMVOCs are the non-methane hydrocarbons (NMHCs). Sometimes NMVOC is also used as a sum parameter for emissions, where all NMVOC emissions are added up per weight into one figure. In the absence of more detailed data, this can be a very coarse parameter for pollution (e.g. for summer smog or indoor air pollution). The major sources of NMVOCs include vegetation, biomass burning, geogenic sources, and human activity. Importance in atmospheric chemistry The study of NMVOCs is important in atmospheric chemistry, where it can be used as a proxy to study the collective properties of reactive atmospheric VOCs. The exclusion of methane is necessary due to its relatively high ambient concentration in comparison to other atmospheric species and its relative inertness. NMVOC is an umbrella term that encompasses all speciated and oxygenated biogenic, anthropogenic, and pyrogenic organic molecules present in the atmosphere, minus the contribution of methane. The necessity of this term is also governed by current estimates which suggest that somewhere between 10,000 and 100,000 NMVOCs are present in the atmosphere, most with concentrations in the realm of parts per billion or parts per trillion. The aggregation of these compounds and their collective properties are easier to study than the individual components. Many NMVOCs carry importance due to their influence on atmospheric ozone. Ground-level ozone is not directly emitted, but is instead formed by the reaction of sunlight with various other emitted compounds, including NMHCs (a type of NMVOC), methane, carbon monoxide, and nitrogen oxides. Biogenic emission In some non-urban areas, biogenic emissions of NMVOCs meet or exceed anthropogenic emissions of NMVOCs. Vegetation emissions There are estimated to be 40 or fewer NMVOC-classified compounds emitted from vegetation that actively influence atmospheric composition, as many NMVOCs are either weakly volatile or are unlikely to be emitted at high volume into the atmosphere. These atmospherically important NMVOCs include compounds such as terpenoids, hexenals, alkenes, aldehydes, organic acids, alcohols, ketones, and alkanes. These NMVOCs, which are emitted by vegetation, can be divided by source as having originated from one of seven processes: Emissions from chloroplast activity Emissions from specialized defense tissues Emissions from defense processes not related to defense specialized tissues Emissions of plant growth hormones Emissions from cut and drying vegetation Emissions of floral scents Other vegetation related emissions Of these processes, chlorophyll-related emissions and emissions from specialized defense tissues are understood to the point of numerical description. This has led to the characterization of all other emissions processes (besides chlorophyll-related emissions) using the model of emissions from specialized defense tissues.
Soil microbe emissions Many NMVOCs are produced by soil microorganisms (such as ethane and isoprene, alongside methane itself). However, due to the ability of many other soil microorganisms to metabolize these compounds, soils sometimes act as a sink for NMVOCs, leading to the belief that NMVOC flux from soil is negligible. Biomass burning Biomass burning, other than for use as fuel, is considered to be a biogenic source. These emissions are modeled based on the area burned, the ratio of above ground biomass to total biomass, the density of the burned organic matter, and combustion efficiency. The chemical composition of emissions from biomass burning varies across different stages of burning, but total NMVOCs emitted from burning is estimated to be 4.5 grams of carbon per kilogram. The main NMVOCs emitted from burning are ethane, propane, propene, and acetylene. Geogenic sources Major geogenic sources of NMVOCs include volcanism and seepage resulting from natural gas. Volcanism results in the emissions of many NMVOCs, but at negligible rates. Natural gas seepage is estimated to result in emissions of approximately 0.06 to 2.6 μg m⁻² h⁻¹. Anthropogenic emissions In the Emissions Database for Global Atmospheric Research (EDGAR), anthropogenic sources of NMVOCs are divided into the following categories: Power generation Combustion for manufacturing Energy for buildings Road transportation Transformation Industry Fugitive emissions from fuel exploitation Emissions from production processes Oil Refineries Agricultural waste burning Shipping Railways, pipelines, and off-road transport Fossil Fuel Fires Solid waste and wastewater Aviation EDGAR reports that in 2015, the amount of NMVOCs from the six most contributing sectors (agriculture, power industry, waste, buildings, transport, and other industrial combustion) was 1.2×10⁸ tons. Global NMVOC emissions from anthropogenic sources have been increasing over time, with the emissions amount rising from 119,000 kt to 169,000 kt between 1970 and 2010. Regionally, trends vary, with America and Europe reducing their emissions in the same time period, while Africa and Asia increased their NMVOC emissions in this period. Reductions in emissions from America and Europe are largely attributed to use of greener fuels for transport and changing emissions standards. References Smog Solvents Indoor air pollution
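The biomass-burning model described above combines burned area, biomass loading, the above-ground fraction, and combustion efficiency into a burned mass, to which an emission factor is applied. A hedged Python sketch of that calculation (all argument values below are illustrative placeholders; only the 4.5 g C/kg emission factor comes from the text):

def nmvoc_from_burning(area_ha, biomass_kg_per_ha, aboveground_fraction,
                       combustion_efficiency, emission_factor_g_per_kg=4.5):
    # Mass burned = area x biomass density x above-ground fraction
    #               x combustion efficiency.
    # NMVOC emissions (grams of carbon) = mass burned x emission factor.
    mass_burned_kg = (area_ha * biomass_kg_per_ha
                      * aboveground_fraction * combustion_efficiency)
    return mass_burned_kg * emission_factor_g_per_kg

# Illustrative: 100 ha burned, 5,000 kg/ha, 80% above ground, 50% combusted.
print(nmvoc_from_burning(100, 5_000, 0.8, 0.5), "g C emitted as NMVOCs")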
Non-methane volatile organic compound
[ "Physics" ]
1,206
[ "Visibility", "Smog", "Physical quantities" ]
1,539,732
https://en.wikipedia.org/wiki/Operating%20ratio
In finance, the operating ratio is a company's operating expenses as a percentage of revenue. This financial ratio is most commonly used for industries that require a large percentage of revenues to maintain operations, such as railroads. In railroading, an operating ratio of 80 or lower is considered desirable. The operating ratio can be used to determine the efficiency of a company's management by comparing operating expenses to net sales. It is calculated by dividing the operating expenses by the net sales. The smaller the ratio, the greater the organization's ability to generate profit. The ratio does not factor in expansion or debt repayment. Alternatively, it may be expressed as a ratio of sales to cost. In such a case, a higher ratio indicates a better ability to generate revenue. See also Farebox recovery ratio References Corporate finance Financial ratios
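A worked example of the calculation described above, as a short Python sketch (the revenue and expense figures are illustrative):

def operating_ratio(operating_expenses, net_sales):
    # Operating expenses as a percentage of net sales;
    # lower values indicate a greater ability to generate profit.
    return 100 * operating_expenses / net_sales

# A hypothetical railroad: $8.1B of operating expenses on $10.5B of revenue.
ratio = operating_ratio(8_100, 10_500)  # figures in millions
print(round(ratio, 1))  # 77.1: below 80, so considered desirable for a railroad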
Operating ratio
[ "Mathematics" ]
165
[ "Financial ratios", "Quantity", "Metrics" ]
1,539,774
https://en.wikipedia.org/wiki/Language%20Weaver
Language Weaver is the machine translation (MT) technology and brand of RWS. The brand name was revived in 2021 following the acquisition of SDL and Iconic Translation Machines Ltd. and the merging of the respective teams and technologies. Language Weaver was formerly a standalone company that was acquired by SDL in 2010. History Language Weaver was a Los Angeles, California–based company founded in 2002 as a spin-out company from the University of Southern California. The company was founded to commercialise a statistical approach to automatic language translation and natural language processing known as statistical machine translation (SMT). The company's name is a reference to one of the pioneers of machine translation — Warren Weaver — who first proposed the idea of using computers to ‘decode’ or ‘decrypt’ language in a memorandum back in 1947. Language Weaver’s statistical approach to machine translation was cutting-edge at the time, and a significant improvement over previous approaches such as Rule-Based MT. Language Weaver grew steadily over an eight-year period, with staff numbers totalling 96 across offices in the US, Europe, and Japan. The company had significant business with Government organisations where its name continues to hold strong recognition to this day. In July 2010, Language Weaver was acquired by SDL plc for $42.5 million and the company was renamed SDL Language Weaver. SDL Language Weaver SDL Language Weaver was the primary machine translation technology at SDL where, over time, it evolved from SMT to syntax-based MT, to Neural Machine Translation. The Language Weaver brand was retired in 2015 in favour of SDL BeGlobal for the cloud-based solution, and SDL Enterprise Translation Server for the on-premise solution. Later, these products were rebranded again as SDL Machine Translation Cloud and SDL Machine Translation Edge respectively. 2021 Relaunch The Language Weaver brand was revived in 2021 following the acquisition of SDL by RWS, and the merger of the SDL MT and Iconic Translation Machines teams and technologies. The combined technologies of both companies, based on state-of-the-art Transformer-based Neural Machine Translation, are now sold as "Language Weaver" for cloud-based MT, and "Language Weaver Edge" for on-premise MT. Supported languages Language Weaver supports the following languages and language varieties: Albanian Arabic Armenian Bengali Bulgarian Burmese Catalan Chinese (Simplified) Chinese (Traditional) Croatian Czech Danish Dari Dutch English Estonian Finnish French French (Canada) Georgian German Greek Hausa Hebrew Hindi Hungarian Indonesian Italian Japanese Javanese Khmer Korean Kurdish (Kurmanji) Latvian Lithuanian Malay Maltese Norwegian Pashto Persian Polish Portuguese Portuguese (Brazil) Romanian Russian Serbian Slovak Slovenian Somali Spanish Swahili Swedish Thai Turkish Ukrainian Urdu Uzbek Vietnamese See also RWS Group SDL Notes and references Machine translation Companies based in Los Angeles Machine translation software
Language Weaver
[ "Technology" ]
569
[ "Machine translation", "Natural language and computing" ]
1,539,785
https://en.wikipedia.org/wiki/Dark%20matter%20halo
In modern models of physical cosmology, a dark matter halo is a basic unit of cosmological structure. It is a hypothetical region that has decoupled from cosmic expansion and contains gravitationally bound matter. A single dark matter halo may contain multiple virialized clumps of dark matter bound together by gravity, known as subhalos. Modern cosmological models, such as ΛCDM, propose that dark matter halos and subhalos may contain galaxies. The dark matter halo of a galaxy envelops the galactic disc and extends well beyond the edge of the visible galaxy. Thought to consist of dark matter, halos have not been observed directly. Their existence is inferred through observations of their effects on the motions of stars and gas in galaxies and gravitational lensing. Dark matter halos play a key role in current models of galaxy formation and evolution. Theories that attempt to explain the nature of dark matter halos with varying degrees of success include cold dark matter (CDM), warm dark matter, and massive compact halo objects (MACHOs). Rotation curves as evidence of a dark matter halo The presence of dark matter (DM) in the halo is inferred from its gravitational effect on a spiral galaxy's rotation curve. Without large amounts of mass throughout the (roughly spherical) halo, the rotational velocity of the galaxy would decrease at large distances from the galactic center, just as the orbital speeds of the outer planets decrease with distance from the Sun. However, observations of spiral galaxies, particularly radio observations of line emission from neutral atomic hydrogen (known, in astronomical parlance, as 21 cm Hydrogen line, H one, and H I line), show that the rotation curve of most spiral galaxies flattens out, meaning that rotational velocities do not decrease with distance from the galactic center. The absence of any visible matter to account for these observations implies either that unobserved (dark) matter, first proposed by Ken Freeman in 1970, exists, or that the theory of motion under gravity (general relativity) is incomplete. Freeman noticed that the expected decline in velocity was not present in NGC 300 or M33, and considered an undetected mass to explain it. The DM hypothesis has been reinforced by several studies. Formation and structure of dark matter halos The formation of dark matter halos is believed to have played a major role in the early formation of galaxies. During initial galactic formation, the temperature of the baryonic matter should have still been much too high for it to form gravitationally self-bound objects, thus requiring the prior formation of dark matter structure to add additional gravitational interactions. The current hypothesis for this is based on cold dark matter (CDM) and its formation into structure early in the universe. The hypothesis for CDM structure formation begins with density perturbations in the Universe that grow linearly until they reach a critical density, after which they would stop expanding and collapse to form gravitationally bound dark matter halos. These halos would continue to grow in mass (and size), either through accretion of material from their immediate neighborhood, or by merging with other halos. Numerical simulations of CDM structure formation have been found to proceed as follows: A small volume with small perturbations initially expands with the expansion of the Universe.
As time proceeds, small-scale perturbations grow and collapse to form small halos. At a later stage, these small halos merge to form a single virialized dark matter halo with an ellipsoidal shape, which reveals some substructure in the form of dark matter sub-halos. The use of CDM overcomes issues associated with the normal baryonic matter because it removes most of the thermal and radiative pressures that were preventing the collapse of the baryonic matter. The fact that the dark matter is cold compared to the baryonic matter allows the DM to form these initial, gravitationally bound clumps. Once these subhalos formed, their gravitational interaction with baryonic matter is enough to overcome the thermal energy, and allow it to collapse into the first stars and galaxies. Simulations of this early galaxy formation match the structure observed by galactic surveys as well as observations of the Cosmic Microwave Background. Density profiles A commonly used model for galactic dark matter halos is the pseudo-isothermal halo: ρ(r) = ρ₀[1 + (r/r_c)²]⁻¹, where ρ₀ denotes the finite central density and r_c the core radius. This provides a good fit to most rotation curve data. However, it cannot be a complete description, as the enclosed mass fails to converge to a finite value as the radius tends to infinity. The isothermal model is, at best, an approximation. Many effects may cause deviations from the profile predicted by this simple model. For example, (i) collapse may never reach an equilibrium state in the outer region of a dark matter halo, (ii) non-radial motion may be important, and (iii) mergers associated with the (hierarchical) formation of a halo may render the spherical-collapse model invalid. Numerical simulations of structure formation in an expanding universe lead to the empirical NFW (Navarro–Frenk–White) profile: ρ(r) = ρ_crit δ_c / [(r/r_s)(1 + r/r_s)²], where r_s is a scale radius, δ_c is a characteristic (dimensionless) density, and ρ_crit = 3H²/(8πG) is the critical density for closure. The NFW profile is called 'universal' because it works for a large variety of halo masses, spanning four orders of magnitude, from individual galaxies to the halos of galaxy clusters. This profile has a finite gravitational potential even though the integrated mass still diverges logarithmically. It has become conventional to refer to the mass of a halo at a fiducial point that encloses an overdensity 200 times greater than the critical density of the universe, though mathematically the profile extends beyond this notational point. It was later deduced that the density profile depends on the environment, with the NFW appropriate only for isolated halos. NFW halos generally provide a worse description of galaxy data than does the pseudo-isothermal profile, leading to the cuspy halo problem. Higher resolution computer simulations are better described by the Einasto profile: ρ(r) = ρ_e exp(−d_n[(r/r_e)^(1/n) − 1]), where r is the spatial (i.e., not projected) radius. The term d_n is a function of n such that ρ_e is the density at the radius r_e that defines a volume containing half of the total mass. While the addition of a third parameter provides a slightly improved description of the results from numerical simulations, it is not observationally distinguishable from the two-parameter NFW halo, and does nothing to alleviate the cuspy halo problem. Shape The collapse of overdensities in the cosmic density field is generally aspherical. So, there is no reason to expect the resulting halos to be spherical. Even the earliest simulations of structure formation in a CDM universe emphasized that the halos are substantially flattened.
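A short numerical sketch of the three density profiles discussed above, written in Python (all parameter values are illustrative, not fits to data):

import numpy as np

def pseudo_isothermal(r, rho0, rc):
    # rho(r) = rho0 / (1 + (r/rc)^2): finite central density rho0, core radius rc.
    return rho0 / (1 + (r / rc) ** 2)

def nfw(r, delta_c, rho_crit, rs):
    # rho(r) = rho_crit * delta_c / ((r/rs) * (1 + r/rs)^2): diverges as r -> 0.
    x = r / rs
    return rho_crit * delta_c / (x * (1 + x) ** 2)

def einasto(r, rho_e, r_e, n, d_n):
    # rho(r) = rho_e * exp(-d_n * ((r/r_e)**(1/n) - 1)).
    return rho_e * np.exp(-d_n * ((r / r_e) ** (1.0 / n) - 1))

r = np.logspace(-1, 2, 5)                 # radii in illustrative units
print(pseudo_isothermal(r, 1.0, 1.0))
print(nfw(r, 2.0e4, 1.0, 10.0))           # note the central 'cusp'
print(einasto(r, 1.0, 10.0, 6.0, 17.7))   # d_n is roughly 3n - 1/3 for large n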
Subsequent work has shown that halo equidensity surfaces can be described by ellipsoids characterized by the lengths of their axes. Because of uncertainties in both the data and the model predictions, it is still unclear whether the halo shapes inferred from observations are consistent with the predictions of ΛCDM cosmology. Halo substructure Up until the end of the 1990s, numerical simulations of halo formation revealed little substructure. With increasing computing power and better algorithms, it became possible to use greater numbers of particles and obtain better resolution. Substantial amounts of substructure are now expected. When a small halo merges with a significantly larger halo, it becomes a subhalo orbiting within the potential well of its host. As it orbits, it is subjected to strong tidal forces from the host, which cause it to lose mass. In addition, the orbit itself evolves as the subhalo is subjected to dynamical friction which causes it to lose energy and angular momentum to the dark matter particles of its host. Whether a subhalo survives as a self-bound entity depends on its mass, density profile, and its orbit. Angular momentum As originally pointed out by Hoyle and first demonstrated using numerical simulations by Efstathiou & Jones, asymmetric collapse in an expanding universe produces objects with significant angular momentum. Numerical simulations have shown that the spin parameter distribution for halos formed by dissipation-less hierarchical clustering is well fit by a log-normal distribution, the median and width of which depend only weakly on halo mass, redshift, and cosmology: with and . At all halo masses, there is a marked tendency for halos with higher spin to be in denser regions and thus to be more strongly clustered. Milky Way dark matter halo The visible disk of the Milky Way Galaxy is thought to be embedded in a much larger, roughly spherical halo of dark matter. The dark matter density drops off with distance from the galactic center. It is now believed that about 95% of the galaxy is composed of dark matter, a type of matter that does not seem to interact with the rest of the galaxy's matter and energy in any way except through gravity. The luminous matter makes up approximately solar masses. The dark matter halo is likely to include around to solar masses of dark matter. A 2014 Jeans analysis of stellar motions calculated the dark matter density (at the Sun's distance from the galactic centre) to be 0.0088 (+0.0024 −0.0018) solar masses per cubic parsec. See also Press–Schechter formalism – A mathematical model used to predict the number of dark matter halos of a certain mass. References Further reading External links Rare Blob Unveiled: Evidence For Hydrogen Gas Falling Onto A Dark Matter Clump? European Southern Observatory (ScienceDaily) July 3, 2006 Dark Matter Search Experiment, PICASSO Experiment Black Holes and Dark matter Galaxies Dark matter
Dark matter halo
[ "Physics", "Astronomy" ]
2,015
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Galaxies", "Unsolved problems in physics", "Exotic matter", "Astronomical objects", "Physics beyond the Standard Model", "Matter" ]
1,539,804
https://en.wikipedia.org/wiki/Sheet%20resistance
Sheet resistance is the resistance of a square piece of a thin material with contacts made to two opposite sides of the square. It is usually a measurement of electrical resistance of thin films that are uniform in thickness. It is commonly used to characterize materials made by semiconductor doping, metal deposition, resistive paste printing, and glass coating. Examples of these processes are: doped semiconductor regions (e.g., silicon or polysilicon), and the resistors that are screen printed onto the substrates of thick-film hybrid microcircuits. The utility of sheet resistance as opposed to resistance or resistivity is that it is directly measured using a four-terminal sensing measurement (also known as a four-point probe measurement) or indirectly by using a non-contact eddy-current-based testing device. Sheet resistance is invariable under scaling of the film contact and therefore can be used to compare the electrical properties of devices that are significantly different in size. Calculations Sheet resistance is applicable to two-dimensional systems in which thin films are considered two-dimensional entities. When the term sheet resistance is used, it is implied that the current is along the plane of the sheet, not perpendicular to it. In a regular three-dimensional conductor, the resistance can be written as R = ρL/A, where ρ is the material resistivity, L is the length, and A is the cross-sectional area, which can be split into width W and thickness t, so that A = Wt. Upon combining the resistivity with the thickness, the resistance can then be written as R = (ρ/t)(L/W) = Rs(L/W), where Rs = ρ/t is the sheet resistance. If the film thickness is known, the bulk resistivity ρ (in Ω·m) can be calculated by multiplying the sheet resistance by the film thickness in m: ρ = Rs·t. Units Sheet resistance is a special case of resistivity for a uniform sheet thickness. Commonly, resistivity (also known as bulk resistivity, specific electrical resistivity, or volume resistivity) is in units of Ω·m, which is more completely stated in units of Ω·m²/m (Ω·area/length). When divided by the sheet thickness (m), the units are Ω·m·(m/m)/m = Ω. An alternative, common unit is "ohms square" or "ohms per square" (denoted "Ω/sq" or "Ω/◻"), which is dimensionally equal to an ohm, but is exclusively used for sheet resistance. This is an advantage, because sheet resistance of 1 Ω could be taken out of context and misinterpreted as bulk resistance of 1 ohm, whereas sheet resistance of 1 Ω/sq cannot thus be misinterpreted. The reason for the name "ohms per square" is that a square sheet with sheet resistance 10 ohm/square has an actual resistance of 10 ohm, regardless of the size of the square. (For a square, L = W, so R = Rs.) The unit can be thought of as, loosely, "ohms · aspect ratio". Example: A 3-unit long by 1-unit wide (aspect ratio = 3) sheet made of material having a sheet resistance of 21 Ω/sq would measure 63 Ω (since it is composed of three 1-unit by 1-unit squares), if the 1-unit edges were attached to an ohmmeter that made contact entirely over each edge. For semiconductors For semiconductors doped through diffusion or surface peaked ion implantation we define the sheet resistance using the average resistivity ρ̄ of the material: Rs = ρ̄/xj, which in materials with majority-carrier properties can be approximated by (neglecting intrinsic charge carriers) Rs ≈ 1/(q ∫₀^xj μ N(x) dx), where xj is the junction depth, μ is the majority-carrier mobility, q is the carrier charge, and N(x) is the net impurity concentration in terms of depth.
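A minimal Python sketch of the relations above (the film parameters are illustrative): sheet resistance from bulk resistivity and thickness, and the resistance of a rectangular film from its number of squares.

def sheet_resistance(resistivity_ohm_m, thickness_m):
    # R_s = rho / t, in ohms per square.
    return resistivity_ohm_m / thickness_m

def film_resistance(r_sheet, length_m, width_m):
    # R = R_s * (L / W): resistance scales with the number of squares.
    return r_sheet * length_m / width_m

# Illustrative: a 100 nm copper film with rho of about 1.7e-8 ohm-m.
rs = sheet_resistance(1.7e-8, 100e-9)
print(rs)                             # 0.17 ohm/sq
print(film_resistance(rs, 3.0, 1.0))  # 0.51 ohm: three squares in series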
Knowing the background carrier concentration and the surface impurity concentration, the sheet resistance-junction depth product can be found using Irvin's curves, which are numerical solutions to the above equation. Measurement A four-point probe is used to avoid contact resistance, which can often have the same magnitude as the sheet resistance. Typically a constant current is applied to two probes, and the potential on the other two probes is measured with a high-impedance voltmeter. A geometry factor needs to be applied according to the shape of the four-point array. Two common arrays are square and in-line. For more details see Van der Pauw method. Measurement may also be made by applying high-conductivity bus bars to opposite edges of a square (or rectangular) sample. Resistance across a square area will be measured in Ω/sq (often written as Ω/◻). For a rectangle, an appropriate geometric factor is added. Bus bars must make ohmic contact. Inductive measurement is used as well. This method measures the shielding effect created by eddy currents. In one version of this technique a conductive sheet under test is placed between two coils. This non-contact sheet resistance measurement method also makes it possible to characterize encapsulated thin films or films with rough surfaces. A very crude two-point probe method is to measure resistance with the probes close together and the resistance with the probes far apart. The difference between these two resistances will be of the order of magnitude of the sheet resistance. Typical applications Sheet resistance measurements are commonly used to characterize the uniformity of conductive or semiconductive coatings and materials, e.g. for quality assurance. Typical applications include the inline process control of metal, TCO, conductive nanomaterials, or other coatings on architectural glass, wafers, flat panel displays, polymer foils, OLED, ceramics, etc. The contacting four-point probe is often applied for single-point measurements of hard or coarse materials. Non-contact eddy current systems are applied for sensitive or encapsulated coatings, for inline measurements and for high-resolution mapping. See also ESD materials References General references Measuring Sheet Resistance Semiconductors Electrical resistance and conductance
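For the in-line four-point probe described above, on a thin film whose lateral extent is much larger than the equal probe spacing, the standard geometry factor is π/ln 2 (about 4.532). A sketch assuming that ideal geometry (the meter readings are illustrative):

import math

def four_point_sheet_resistance(voltage_v, current_a):
    # R_s = (pi / ln 2) * (V / I) for a collinear four-point probe
    # on a thin, laterally infinite film.
    return (math.pi / math.log(2)) * voltage_v / current_a

# Illustrative reading: 1 mA forced through the outer probes,
# 2.2 mV measured across the inner probes.
print(four_point_sheet_resistance(2.2e-3, 1.0e-3))  # about 9.97 ohm/sq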
Sheet resistance
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,250
[ "Matter", "Physical quantities", "Semiconductors", "Quantity", "Materials", "Electronic engineering", "Condensed matter physics", "Wikipedia categories named after physical quantities", "Solid state engineering", "Electrical resistance and conductance" ]
9,335,905
https://en.wikipedia.org/wiki/Multidelay%20block%20frequency%20domain%20adaptive%20filter
The multidelay block frequency domain adaptive filter (MDF) algorithm is a block-based frequency domain implementation of the (normalised) Least mean squares filter (LMS) algorithm. Introduction The MDF algorithm is based on the fact that convolutions may be efficiently computed in the frequency domain (thanks to the fast Fourier transform). However, the algorithm differs from the fast LMS algorithm in that the block size it uses may be smaller than the filter length. If both are equal, then MDF reduces to the FLMS algorithm. The advantages of MDF over the (N)LMS algorithm are: Lower algorithmic complexity Partial de-correlation of the input (which 'may' lead to faster convergence) Variable definitions Let N be the length of the processing blocks, K be the number of blocks, and F denote the 2N×2N Fourier transform matrix. The variables are defined as: With normalisation matrices and : In practice, when multiplying a column vector by , we take the inverse FFT of , set the first N values in the result to zero and then take the FFT. This is meant to remove the effects of the circular convolution. Algorithm description For each block, the MDF algorithm is computed as: It is worth noting that, while the algorithm is more easily expressed in matrix form, the actual implementation requires no matrix multiplications. For instance, the normalisation matrix computation reduces to an element-wise vector multiplication because is block-diagonal. The same goes for other multiplications. References J.-S. Soo and K. Pang, “Multidelay block frequency domain adaptive filter,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 2, pp. 373–376, 1990. H. Buchner, J. Benesty, W. Kellermann, "An Extended Multidelay Filter: Fast Low-Delay Algorithms for Very High-Order Adaptive Systems". Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2003. A free implementation of the MDF algorithm is available in Speex (main source file) See also Adaptive filter Recursive least squares For statistical techniques relevant to LMS filter see Least squares. Digital signal processing Filter theory
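A compact Python sketch in the spirit of the algorithm described above (overlap-save convention; the variable names, step size, and per-bin normalisation are illustrative simplifications, not the exact published formulation). The filter is partitioned into n_blocks frequency-domain blocks of length block_len, so the FFT size stays at twice the block length even for long filters:

import numpy as np

def mdf_filter(x, d, block_len, n_blocks, mu=0.5, eps=1e-8):
    # Simplified multidelay block frequency-domain adaptive filter.
    N, K = block_len, n_blocks
    W = np.zeros((K, 2 * N), dtype=complex)       # filter partitions (freq. domain)
    X_hist = np.zeros((K, 2 * N), dtype=complex)  # spectra of delayed input blocks
    x_prev = np.zeros(N)
    y_out = np.zeros(len(x))
    for b in range(len(x) // N):
        x_new = x[b * N:(b + 1) * N]
        X_hist = np.roll(X_hist, 1, axis=0)       # age the stored spectra
        X_hist[0] = np.fft.fft(np.concatenate([x_prev, x_new]))
        x_prev = x_new
        y = np.real(np.fft.ifft((X_hist * W).sum(axis=0)))[N:]  # overlap-save output
        e = d[b * N:(b + 1) * N] - y
        E = np.fft.fft(np.concatenate([np.zeros(N), e]))
        power = (np.abs(X_hist) ** 2).sum(axis=0) + eps         # per-bin normalisation
        for k in range(K):
            g = np.real(np.fft.ifft(np.conj(X_hist[k]) * E / power))
            g[N:] = 0                              # gradient constraint (no wrap-around)
            W[k] += mu * np.fft.fft(g)
        y_out[b * N:(b + 1) * N] = y
    return y_out

# Demo: identify a 64-tap echo path (K*N = 4*16 = 64) from white noise.
rng = np.random.default_rng(0)
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 16.0)
x = rng.standard_normal(8192)
d = np.convolve(x, h)[:len(x)]
y = mdf_filter(x, d, block_len=16, n_blocks=4)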
Multidelay block frequency domain adaptive filter
[ "Engineering" ]
466
[ "Telecommunications engineering", "Filter theory" ]
9,335,972
https://en.wikipedia.org/wiki/Lipman%20Bers
Lipman Bers (Latvian: Lipmans Berss; May 22, 1914 – October 29, 1993) was a Latvian-American mathematician, born in Riga, who created the theory of pseudoanalytic functions and worked on Riemann surfaces and Kleinian groups. He was also known for his work in human rights activism. Biography Bers was born in Riga, then under the rule of the Russian Czars, and spent several years as a child in Saint Petersburg; his family returned to Riga in approximately 1919, by which time it was part of independent Latvia. In Riga, his mother was the principal of a Jewish elementary school, and his father became the principal of a Jewish high school, both of which Bers attended, with an interlude in Berlin while his mother, by then separated from his father, attended the Berlin Psychoanalytic Institute. After high school, Bers studied at the University of Zurich for a year, but had to return to Riga again because of the difficulty of transferring money from Latvia in the international financial crisis of the time. He continued his studies at the University of Riga, where he became active in socialist politics, including giving political speeches and working for an underground newspaper. In the aftermath of the Latvian coup in 1934 by right-wing leader Kārlis Ulmanis, Bers was targeted for arrest but fled the country, first to Estonia and then to Czechoslovakia. Bers received his Ph.D. in 1938 from the University of Prague. He had begun his studies in Prague with Rudolf Carnap, but when Carnap moved to the US he switched to Charles Loewner, who would eventually become his thesis advisor. In Prague, he lived with an aunt, and married his wife Mary (née Kagan) whom he had met in elementary school and who had followed him from Riga. Having applied for postdoctoral studies in Paris, he was given a visa to go to France soon after the Munich Agreement, by which Nazi Germany annexed the Sudetenland. He and his wife Mary had a daughter in Paris. They were unable to obtain a visa there to emigrate to the US, as the Latvian quota had filled, so they escaped to the south of France ten days before the fall of Paris, and eventually obtained an emergency US visa in Marseilles, one of a group of 10,000 visas set aside for political refugees by Eleanor Roosevelt. The Bers family rejoined Bers' mother, who had by then moved to New York City and become a psychoanalyst, married to thespian Beno Tumarin. At this time, Bers worked for the YIVO Yiddish research agency. Bers spent World War II teaching mathematics as a research associate at Brown University, where he was joined by Loewner. After the war, Bers found an assistant professorship at Syracuse University (1945–1951), before moving to New York University (1951–1964) and then Columbia University (1964–1982), where he became the Davies Professor of Mathematics, and where he chaired the mathematics department from 1972 to 1975. His move to NYU coincided with a move of his family to New Rochelle, New York, where he joined a small community of émigré mathematicians. He was a visiting scholar at the Institute for Advanced Study in 1949–51. He was a Vice-President (1963–65) and a President (1975–77) of the American Mathematical Society, chaired the Division of Mathematical Sciences of the United States National Research Council from 1969 to 1971, chaired the U.S. National Committee on Mathematics from 1977 to 1981, and chaired the Mathematics Section of the National Academy of Sciences from 1967 to 1970. Late in his life, Bers suffered from Parkinson's disease and strokes. 
He died on October 29, 1993. Mathematical research Bers' doctoral work was on the subject of potential theory. While in Paris, he worked on Green's function and on integral representations. After first moving to the US, while working for YIVO, he researched Yiddish mathematics textbooks rather than pure mathematics. At Brown, he began working on problems of fluid dynamics, and in particular on the two-dimensional subsonic flows associated with cross-sections of airfoils. At this time, he began his work with Abe Gelbart on what would eventually develop into the theory of pseudoanalytic functions. Through the 1940s and 1950s he continued to develop this theory, and to use it to study the planar elliptic partial differential equations associated with subsonic flows. Another of his major results in this time concerned the singularities of the partial differential equations defining minimal surfaces. Bers proved an extension of Riemann's theorem on removable singularities, showing that any isolated singularity of a pencil of minimal surfaces can be removed; he spoke on this result at the 1950 International Congress of Mathematicians and published it in Annals of Mathematics. Later, beginning with his visit to the Institute for Advanced Study, Bers "began a ten-year odyssey that took him from pseudoanalytic functions and elliptic equations to quasiconformal mappings, Teichmüller theory, and Kleinian groups". With Lars Ahlfors, he solved the "moduli problem", of finding a holomorphic parameterization of the Teichmüller space, each point of which represents a compact Riemann surface of a given genus. During this period he also coined the popular phrasing of a question on eigenvalues of planar domains, "Can one hear the shape of a drum?", used as an article title by Mark Kac in 1966 and finally answered negatively in 1992 by an academic descendant of Bers. In the late 1950s, by way of adding a coda to his earlier work, Bers wrote several major retrospectives of flows, pseudoanalytic functions, fixed point methods, Riemann surface theory prior to his work on moduli, and the theory of several complex variables. In 1958, he presented his work on Riemann surfaces in a second talk at the International Congress of Mathematicians. Bers' work on the parameterization of Teichmüller space led him in the 1960s to consider the boundary of the parameterized space, whose points corresponded to new types of Kleinian groups, eventually to be called singly-degenerate Kleinian groups. He applied Eichler cohomology, previously developed for applications in number theory and the theory of Lie groups, to Kleinian groups. He proved the Bers area inequality, an area bound for hyperbolic surfaces that became a two-dimensional precursor to William Thurston's work on geometrization of 3-manifolds and 3-manifold volume, and in this period Bers himself also studied the continuous symmetries of hyperbolic 3-space. Quasi-Fuchsian groups may be mapped to a pair of Riemann surfaces by taking the quotient by the group of one of the two connected components of the complement of the group's limit set; fixing the image of one of these two maps leads to a subset of the space of Kleinian groups called a Bers slice. In 1970, Bers conjectured that the singly degenerate Kleinian surface groups can be found on the boundary of a Bers slice; this statement, known as the Bers density conjecture, was finally proven by Namazi, Souto, and Ohshika in 2010 and 2011. The Bers compactification of Teichmüller space also dates to this period. 
Advising Over the course of his career, Bers advised approximately 50 doctoral students, among them Enrico Arbarello, Irwin Kra, Linda Keen, Murray H. Protter, and Lesley Sibner. Approximately a third of Bers' doctoral students were women, a high proportion for mathematics. Having felt neglected by his own advisor, Bers met regularly for meals with his students and former students, maintained a keen interest in their personal lives as well as their professional accomplishments, and kept up a friendly competition with Lars Ahlfors over who could bring the larger number of academic descendants to mathematical gatherings. Human rights activism As a small child with his mother in Saint Petersburg, Bers had cheered the Russian Revolution and the rise of the Soviet Union, but by the late 1930s he had become disillusioned with communism after the assassination of Sergey Kirov and Stalin's ensuing purges. His son, Victor Bers, later said that "His experiences in Europe motivated his activism in the human rights movement," and Bers himself attributed his interest in human rights to the legacy of Menshevik leader Julius Martov. He founded the Committee on Human Rights of the National Academy of Sciences, and beginning in the 1970s worked to allow the emigration of dissident Soviet mathematicians including Yuri Shikhanovich, Leonid Plyushch, Valentin Turchin, and David and Gregory Chudnovsky. Within the U.S., he also opposed the American involvement in the Vietnam War and southeast Asia, and the maintenance of the U.S. nuclear arsenal during the Cold War. Awards and honors In 1961, Bers was elected a Fellow of the American Academy of Arts and Sciences, and in 1965 he became a Fellow of the American Association for the Advancement of Science. He joined the National Academy of Sciences in 1964. He was a member of the Finnish Academy of Sciences, and the American Philosophical Society. He received the AMS Leroy P. Steele Prize for mathematical exposition in 1975 for his paper "Uniformization, moduli, and Kleinian groups". In 1986, the New York Academy of Sciences gave him their Human Rights Award. In the early 1980s, the Association for Women in Mathematics held a symposium to honor Bers' accomplishments in mentoring women mathematicians. Publications Books Bers, Lipman (1976), Calculus, Holt, Rinehart and Winston, (in collaboration with Frank Karal) Selected articles with Abe Gelbart: with Shmuel Agmon: with Leon Ehrenpreis: References External links 20th-century American mathematicians 20th-century Latvian mathematicians Latvian emigrants to the United States Scientists from Riga Latvian Jews New York University faculty Columbia University faculty Syracuse University faculty Fellows of the American Academy of Arts and Sciences Fellows of the American Association for the Advancement of Science Institute for Advanced Study visiting scholars Members of the United States National Academy of Sciences Complex analysts 1914 births 1993 deaths Presidents of the American Mathematical Society People from New Rochelle, New York Mathematical analysts Mathematicians from New York (state)
Lipman Bers
[ "Mathematics" ]
2,120
[ "Mathematical analysis", "Mathematical analysts" ]
9,336,144
https://en.wikipedia.org/wiki/Marinisation
Marinisation (also marinization) is the design, redesign, or testing of products for use in a marine environment. Most commonly, it refers to use and long-term survival in harsh, highly corrosive salt water conditions. Marinisation is done by manufacturing industries worldwide, including many military organisations, especially navies. In some instances, cost is not a guiding force, and items may be designed from scratch with entirely non-corrosive components engineered and assembled to resist the effects of vibration and constantly changing attitude. In others, particularly in "marinising" an existing product that was not designed specifically for a marine environment for sale in the public marketplace, a balance must be found between the competing criteria. There are three main factors that need to be considered for a product to be truly marinised. Resistance to corrosion. Resistance to vibration. Ability to function properly in conditions of constantly changing attitude (an object's orientation about its center of gravity). Examples Metals Marinised metals include some of the following: Non-corrosive alloys that resist or are impervious to salt-water corrosion, e.g. 316 marine grade stainless steel; brass (an alloy of copper and zinc), or bronze (which contains copper with tin in place of zinc). The adjectival phrase "marine grade" is used when the above alloys have all impurities removed and are suitable for exposure to a marine environment. Metals electroplated or dipped in a corrosion-resistant material, e.g. galvanised steel Metals painted with special anti-rust or anti-corrosion coatings Plastic-coated metals Electronics Marinised electronics use one or more of the following protection methods. In most cases more than one method is used: Coating by a spray or dipping to protect from salt air and water. Full encapsulation in some form of resin or gel. Specialised mounting of internal parts for vibration protection. Use of specialised corrosion-resistant solder and corrosion-resistant metals. Batteries Marinised batteries are usually gel batteries or sealed maintenance-free batteries. Not using marinised batteries in salt water can be deadly in an enclosed environment for many reasons: Sulfuric acid and salt water react to generate dangerous hydrogen chloride gas, necessitating the use of valve-regulated maintenance-free sealed batteries. The battery must have stronger plates and separators to withstand constant vibrations and impacts caused by large waves striking the hull. Plate collapse can cause short-circuits and electrical fires or explosions. A marine battery must function at any angle due to the changing attitude of the vessel it is mounted in. Gel VRLA batteries are best for this purpose. See also Marine electronics Material protection References
Marinisation
[ "Physics" ]
534
[ "Material protection", "Materials", "Matter" ]
9,336,762
https://en.wikipedia.org/wiki/Rhynchosporium%20secalis
Rhynchosporium secalis is an ascomycete fungus that is the causal agent of barley and rye scald. Morphology No sexual stage is known. The mycelium is hyaline to light gray and develops sparsely as a compact stroma under the cuticle of the host plant. Conidia (2-4 x 12-20 μm) are borne sessilely on cells of the fertile stroma. They are hyaline, 1-septate, and cylindric to ovate, mostly with a short apical beak. Microconidia have been reported, but their function is unknown. They are exuded from flasklike mycelial branches. Host species Agropyron dasystachyum, A. desertorum, A. elmeri, A. intermedium, A. riparium, A. scabriglume, A. semicostatum, A. subsecundum, A. trachycaulum, A. trachycaulum var. trachycaulum, A. trachycaulum var. unilaterale Agrostis gigantea, A. stolonifera, A. tenuis Alopecurus geniculatus, A. pratensis Bouteloua gracilis, B. hirsuta Bromus aleutensis, B. carinatus, B. ciliatus, B. frondosus, B. inermis, B. pumpellianus, B. secalinus, B. vulgaris Calamagrostis arundinacea, C. epigejos Chrysopogon gryllus Critesion murinum Cynodon dactylon Dactylis glomerata Danthonia sp. Deschampsia cespitosa Elymus angustus, E. canadensis, E. chinensis, E. glaucus, E. junceus, E. repens, E. virginicus Festuca pratensis, F. rubra Hordeum aegiceras, H. brachyantherum, H. distichon, H. hexastichon, H. jubatum, H. leporinum, H. murinum, H. vulgare, H. vulgare var. nudum, H. vulgare var. trifurcatum Leymus condensatus, L. innovatus, L. triticoides Lolium multiflorum, L. perenne, L. rigidum Microlaena stipoides Panicum sp. Phalaris arundinacea Phleum pratense Poa annua, P. eminens, P. pratensis Quercus chrysolepis Roegneria sp. Secale cereale, S. montanum × Triticosecale sp. Sources Index Fungorum USDA ARS Fungal Database References Fungal plant pathogens and diseases Rye diseases Barley diseases Enigmatic Ascomycota taxa Fungus species
Rhynchosporium secalis
[ "Biology" ]
630
[ "Fungi", "Fungus species" ]
9,336,927
https://en.wikipedia.org/wiki/SV%20Tenacious
The SV Tenacious is a modern British wooden sail training ship, specially designed in the 1990s. When completed in 2000, it was the largest wooden ship to be built in the UK for over 100 years. Design and construction The ship was built by the Jubilee Sailing Trust (JST) and designed by Tony Castro. Together with STS Lord Nelson, the pair are the only tall ships in the world that were built so that both disabled and non-disabled people can sail as crew, not passengers. Features that cannot be found on other ships include wheelchair lifts throughout, a unique ascender system that allows wheelchair users to go aloft (either assisted or by their own efforts), a speaking compass for those with visual impairments, hearing loops, adjustable furniture for those with mobility difficulties, and a joystick to help individuals with dexterity limitations to steer the ship. Everyone plays a full and active role in the voyage. The JST is a UN-accredited charity offering sailing adventures to people of all abilities and backgrounds. The ship is owned and operated by Jubilee Sailing Trust (Tenacious) Ltd. Launched in 2000, the sailing vessel Tenacious is the largest wooden tall ship built in the United Kingdom in the last 100 years. It is 65 metres (213.25 feet) long including bowsprit, and it is rigged as a three-masted barque with two mizzen gaffs. Its deck is 49.85 metres long, its hull is 54.02 metres long, and it has a beam of 10.6 metres at its widest point. Tenacious displaces about 714 tons (summer draft). A press release from the Belfast Maritime Festival on 22 June 2006 announced that the Tenacious was "the largest wooden ship still afloat". History The ship's maiden voyage was on 1 September 2000 from Southampton to Sark, St Helier and Weymouth before returning to Southampton. The ship is owned by a UK-based charity, the Jubilee Sailing Trust, which also owns the 42-metre-long tall ship STS Lord Nelson (length including bowsprit is 55 metres and waterline length is 37 metres). Tenacious featured in the first series of Channel 5's Sea Patrol UK, when one of the crew members fell ill and needed to be winched into an RAF Westland Sea King and taken to hospital. Due to the height of the masts and rigging, this posed a challenge to the helicopter's pilot and winch crew, but the rescue attempt was successful and the crew member survived a potentially fatal condition. In December 2023 the JST announced that Tenacious's owning company was insolvent and had been put into liquidation and that all future cruises had been cancelled. In 2024, a campaign group called 'Save Tall Ship Tenacious' was formed to save the ship. References External links Official website 2000 ships Accessible transportation Disabled boating Tall ships of the United Kingdom Individual sailing vessels Barques Sail training ships Ships built in Southampton
SV Tenacious
[ "Physics" ]
607
[ "Physical systems", "Transport", "Accessible transportation" ]
9,336,966
https://en.wikipedia.org/wiki/Happy%20path
In the context of software or information modeling, a happy path (sometimes called happy flow) is a default scenario featuring no exceptional or error conditions. For example, the happy path for a function validating credit card numbers would be where none of the validation rules raise an error, thus letting execution continue successfully to the end, generating a positive response. Process steps for a happy path are also used in the context of a use case. In contrast to the happy path, process steps for alternate flow and exception flow may also be documented. A happy path test is a well-defined test case using known input, which executes without exception and produces an expected output. Happy path testing can show that a system meets its functional requirements, but it neither guarantees graceful handling of error conditions nor aids in finding hidden bugs. Happy day (or sunny day) scenario and golden path are slang synonyms for happy path. In use case analysis, there is only one happy path, but there may be any number of additional alternate path scenarios which are all valid optional outcomes. If valid alternatives exist, the happy path is then identified as the default or most likely positive alternative. The analysis may also show one or more exception paths. An exception path is taken as the result of a fault condition. Use cases and the resulting interactions are commonly modeled in graphical languages such as the Unified Modeling Language (UML) or SysML. Unhappy path There is no agreed name for the opposite of happy paths: they may be known as sad paths, bad paths, or exception paths. The term 'unhappy path' is gaining popularity, as it suggests a complete opposite to 'happy path' and retains the same context. Strictly speaking, however, there is no single 'unhappy path': whereas the happy path runs to the very end of a scenario, an unhappy path is shorter and ends prematurely, before the desired end is reached (not even, say, the last page of a wizard), and because there are many different ways in which things can go wrong, no single criterion determines 'the' unhappy path. See also Edge case Corner case Use case References Computer programming Software testing
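To make the distinction above concrete, here is a minimal sketch in Python; the validator, its rules, and the test names are hypothetical, invented for illustration rather than taken from any particular codebase.

# Hypothetical credit card validator used to illustrate happy vs. unhappy paths.

def validate_card(number: str) -> bool:
    """Return True if `number` passes basic length and Luhn checks."""
    digits = number.replace(" ", "")
    if not digits.isdigit() or not 12 <= len(digits) <= 19:
        raise ValueError("card number must be 12-19 digits")
    # Luhn checksum: double every second digit from the right.
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def test_happy_path():
    # Known-good input, no exceptions raised, expected positive output.
    assert validate_card("4539 1488 0343 6467") is True

def test_unhappy_path():
    # Execution ends prematurely: the length rule raises before the checksum runs.
    try:
        validate_card("1234")
    except ValueError:
        pass  # the exception path, not the happy path
    else:
        raise AssertionError("expected ValueError for a too-short number")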
Happy path
[ "Technology", "Engineering" ]
457
[ "Software testing", "Computer programming", "Software engineering", "Computing stubs", "Computers" ]
9,338,319
https://en.wikipedia.org/wiki/Henryk%20Iwaniec
Henryk Iwaniec (born October 9, 1947) is a Polish-American mathematician, and since 1987 a professor at Rutgers University. He is a member of the American Academy of Arts and Sciences and Polish Academy of Sciences. He has made important contributions to analytic and algebraic number theory as well as harmonic analysis. He is the recipient of the Cole Prize (2002), Steele Prize (2011), and Shaw Prize (2015). Background and education Iwaniec studied at the University of Warsaw, where he received his PhD in 1972 under Andrzej Schinzel. He then held positions at the Institute of Mathematics of the Polish Academy of Sciences until 1983, when he left Poland. He held visiting positions at the Institute for Advanced Study, University of Michigan, and University of Colorado Boulder before being appointed Professor of Mathematics at Rutgers University. He is a citizen of both Poland and the United States. He and mathematician Tadeusz Iwaniec are twin brothers. Work Iwaniec studies both sieve methods and deep complex-analytic techniques, with an emphasis on the theory of automorphic forms and harmonic analysis. In 1997, Iwaniec and John Friedlander proved that there are infinitely many prime numbers of the form a^2 + b^4. Results of this strength had previously been seen as completely out of reach: sieve theory—used by Iwaniec and Friedlander in combination with other techniques—cannot usually distinguish between primes and products of two primes, say. He also showed that there are infinitely many numbers of the form n^2 + 1 with at most two prime factors. In 2001, Iwaniec was awarded the seventh Ostrowski Prize. The prize citation read, in part, "Iwaniec's work is characterized by depth, profound understanding of the difficulties of a problem, and unsurpassed technique. He has made deep contributions to the field of analytic number theory, mainly in modular forms on GL(2) and sieve methods." Awards and honors He became a fellow of the American Academy of Arts and Sciences in 1995. He was awarded the fourteenth Frank Nelson Cole Prize in Number Theory in 2002. In 2006, he became a member of the National Academy of Sciences. He received the Leroy P. Steele Prize for Mathematical Exposition in 2011. In 2012, he became a fellow of the American Mathematical Society. In 2015 he was awarded the Shaw Prize in Mathematics. In 2017, he was awarded the AMS Doob Prize (jointly with John Friedlander) for their book Opera de Cribro, which is about sieve theory. Publications See also List of Polish mathematicians References Further reading . External links People from Elbląg 20th-century Polish mathematicians 21st-century Polish mathematicians Number theorists Institute for Advanced Study visiting scholars Rutgers University faculty Living people 1947 births Members of the United States National Academy of Sciences Fellows of the American Mathematical Society International Mathematical Olympiad participants University of Michigan people Recipients of the State Award Badge (Poland) Polish twins
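As a concrete illustration of the Friedlander–Iwaniec statement, the brute-force Python sketch below simply enumerates small primes of the form a^2 + b^4; it illustrates the statement only, not the sieve-theoretic proof, and the function names are my own.

# Enumerate primes of the form a^2 + b^4 (Friedlander-Iwaniec primes) up to
# a limit. Exhibits small examples only; the theorem asserts infinitude.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def primes_a2_b4(limit: int) -> list[int]:
    """Primes p <= limit expressible as a^2 + b^4 with a, b >= 1."""
    found = set()
    b = 1
    while b ** 4 < limit:
        a = 1
        while a * a + b ** 4 <= limit:
            n = a * a + b ** 4
            if is_prime(n):
                found.add(n)
            a += 1
        b += 1
    return sorted(found)

print(primes_a2_b4(200))  # [2, 5, 17, 37, 41, 97, 101, 137, 181, 197]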
Henryk Iwaniec
[ "Mathematics" ]
584
[ "Number theorists", "Number theory" ]
9,339,120
https://en.wikipedia.org/wiki/Magnesium%20citrate
Magnesium citrates are metal-organic compounds formed from citrate and magnesium ions. They are salts. One form is the 1:1 salt, prepared from magnesium and citric acid in a 1:1 ratio (one magnesium atom per citrate molecule). It contains 11.33% magnesium by weight. Magnesium citrate (sensu lato) is used medicinally as a saline laxative and to empty the bowel before major surgery or a colonoscopy. It is available without a prescription, both as a generic and under various brand names. It is also used in pill form as a magnesium dietary supplement. As a food additive, magnesium citrate is used to regulate acidity and is known as E number E345. Structures The structures of solid magnesium citrates have been characterized by X-ray crystallography. In the 1:1 salt, only one carboxylate group of the citrate is deprotonated. The other form of magnesium citrate consists of the citrate dianion, in which two of the carboxylic acid groups are deprotonated. Thus the name "magnesium citrate" is ambiguous and sometimes may refer to other salts such as trimagnesium dicitrate, which has a magnesium:citrate ratio of 3:2, or monomagnesium dicitrate, with a ratio of 1:2, or a mix of two or three of the salts of magnesium and citric acid. Mechanism of action Magnesium citrate works by attracting water through the tissues by a process known as osmosis. Once in the intestine, it can attract enough water into the intestine to induce defecation. The additional water stimulates bowel motility. This means it can also be used to treat rectal and colon problems. Magnesium citrate functions best on an empty stomach, and should always be followed with a full (eight-ounce or 250 ml) glass of water or juice to help counteract water loss and aid in absorption. Magnesium citrate solutions generally produce bowel movement in one-half to three hours. Use and dosage The maximum upper tolerance limit (UTL) for magnesium in supplement form for adults is 350 mg of elemental magnesium per day, according to the National Institutes of Health (NIH). In addition, according to the NIH, the total dietary requirement for magnesium from all sources (in other words, food and supplements) is 320–420 mg of elemental magnesium per day, though there is no UTL for dietary magnesium. Laxative Magnesium citrate is used as a laxative agent. It is not recommended for use in children and infants two years of age or less. Magnesium deficiency treatment Although less common, as a magnesium supplement the citrate form is sometimes used because it is believed to be more bioavailable than other common pill forms, such as magnesium oxide. However, according to one study, magnesium gluconate was found to be marginally more bioavailable than magnesium citrate. Potassium-magnesium citrate, as a supplement in pill form, is useful for the prevention of kidney stones. Side effects Magnesium citrate is generally not a harmful substance, but care should be taken by consulting a healthcare professional if any adverse health problems are suspected or experienced. Extreme magnesium overdose can result in serious complications such as slow heartbeat, low blood pressure, nausea, and drowsiness. If severe enough, an overdose can even result in coma or death. However, a moderate overdose will be excreted through the kidneys, unless one has serious kidney problems. Rectal bleeding or failure to have a bowel movement after use could be signs of a serious condition. 
See also ATC code A12 Magnesium aspartate References External links Saline laxatives. MedicineNet. Magnesium citrate Patient Advice. Drugs.com. Citrates Laxatives Magnesium compounds Antiarrhythmic agents Antidepressants Diuretics Sedatives Food additives Hypnotics Antispasmodics Tocolytics Psycholeptics
Magnesium citrate
[ "Biology" ]
856
[ "Hypnotics", "Behavior", "Sleep" ]
9,339,360
https://en.wikipedia.org/wiki/Overnight%20cost
Overnight cost is the cost of a construction project if no interest were incurred during construction, as if the project were completed "overnight." This concept is used to provide a simplified cost comparison between power plant projects or technologies, through a ratio with the maximum power the plant can deliver. Power generation The overnight capital cost is a term used in the power generation industry. It is usually computed by dividing the overnight cost of building the plant by the maximum instantaneous power the plant can deliver. This overnight capital cost does not take into account: the life span of a plant or its key components; the capacity factor, i.e. the ratio between the effective mean power (actually delivered through the year) and the maximum power (perhaps reached only a few hours per year), which typically varies from 10% (e.g. solar plants in Germany) to 90% (e.g. nuclear plants in the USA) due to various causes: natural, such as sun, clouds, wind or waterfall; technological, such as maintenance constraints; financial, such as fuel cost vs. electricity wholesale price; or legal, such as pollution reduction; the financing costs or escalation, notably the discounting due to some interest rate or the comparative return of capital in other industries. Hence the overnight capital cost is not an actual estimate of construction cost, and investors in the energy industry typically look instead to the levelized cost of energy (LCOE) for comparing generation projects or technologies (e.g. solar power, natural gas) in the long term, as it includes ongoing fuel, maintenance, operation and financial costs. The U.S. Department of Energy tracks and makes publicly available levelized cost of energy figures for competing technologies. These figures will vary substantially in other countries due to different energy policies and domestic energy sources. See also Economics of new nuclear power plants References Costs Energy production Energy economics Energy infrastructure
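A minimal worked example of the ratio described above, sketched in Python; the plant cost and capacity figures are made up for illustration, not data from any source.

# Overnight capital cost: construction cost divided by maximum (nameplate)
# power, ignoring construction-period interest. All figures are hypothetical.

construction_cost_usd = 6_000_000_000   # assumed total build cost, no financing
nameplate_capacity_kw = 1_100_000       # assumed 1,100 MW maximum output

overnight_cost = construction_cost_usd / nameplate_capacity_kw
print(f"Overnight capital cost: ${overnight_cost:,.0f}/kW")  # ~$5,455/kW

# Note what this deliberately omits: plant lifetime, capacity factor,
# fuel/O&M, and discounting -- the reasons investors compare LCOE instead.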
Overnight cost
[ "Environmental_science" ]
377
[ "Environmental social science stubs", "Energy economics", "Environmental social science" ]
9,339,520
https://en.wikipedia.org/wiki/Ren%C3%A9%2041
René 41 is a nickel-based high temperature alloy developed by General Electric. It retains high strength over a wide range of elevated temperatures. It is used in jet engine and missile components, and in other applications that require high strength at extreme temperatures. René 41 is classified as a nickel alloy based upon its chemical composition. René 41 was used to create the outer shell of the Mercury space capsule, due to its ability to retain high strength at very high temperatures. References External links Technical data Nickel alloys Superalloys Aerospace materials Refractory metals
René 41
[ "Chemistry", "Engineering" ]
104
[ "Nickel alloys", "Alloy stubs", "Aerospace materials", "Refractory metals", "Superalloys", "Alloys", "Aerospace engineering" ]
9,339,579
https://en.wikipedia.org/wiki/TOMLAB
The TOMLAB Optimization Environment is a modeling platform for solving applied optimization problems in MATLAB. Description TOMLAB is a general purpose development and modeling environment in MATLAB for research, teaching and practical solution of optimization problems. It enables a wider range of problems to be solved in MATLAB and provides many additional solvers. Optimization problems supported TOMLAB handles a wide range of problem types, among them: Linear programming Quadratic programming Nonlinear programming Mixed-integer programming Mixed-integer quadratic programming with or without convex quadratic constraints Mixed-integer nonlinear programming Linear and nonlinear least squares with L1, L2 and infinity norm Exponential data fitting Global optimization Semi-definite programming problems with bilinear matrix inequalities Constrained goal attainment Geometric programming Genetic programming Costly or expensive black-box global optimization Nonlinear complementarity problems Additional features TOMLAB supports more areas than general optimization, for example: Optimal control with PROPT using Gauss and Chebyshev collocation. Automatic differentiation with MAD Interface to AMPL Further details TOMLAB supports solvers such as CPLEX, SNOPT, KNITRO and MIDACO. Each such solver can be called to solve a single model formulation. The supported solvers are appropriate for many problems, including linear programming, integer programming, and global optimization. An interface to AMPL makes it possible to formulate the problem in an algebraic format. The MATLAB Compiler enables the user to build stand-alone solutions. Sister products are available for LabVIEW and Microsoft .NET. Modeling is mainly facilitated by the TomSym class. References External links TOMLAB MAD (MATLAB Automatic Differentiation) PROPT - MATLAB Optimal Control Software Numerical software Mathematical optimization software
TOMLAB
[ "Mathematics" ]
340
[ "Numerical software", "Mathematical software" ]
9,339,769
https://en.wikipedia.org/wiki/Beijing%20Aerospace%20Flight%20Control%20Center
Beijing Aerospace Flight Control Center (), formerly known as Beijing Aerospace Command and Control Center (; BACCC or BACC), is a command center for the Chinese space program, which includes the Shenzhou missions, and is located in a suburb northwest of Beijing under the administration of Haidian District. The space center's main entrance is located at the intersection of Beiqing Road and You Yi Road. The BACC is subordinated to the People's Liberation Army's Aerospace Force, controlling both military and civilian launches and satellites. BACC's primary functions include supervision, telemetry, tracking and command of spacecraft. The building is inside a complex nicknamed Aerospace City. It was initially created for China's crewed space missions, a.k.a. "Project 921", hence also the name "921" among some insiders. It has evolved to be responsible for the Chang'e 1 mission and the Sino-Russian interplanetary space mission Fobos-Grunt. The BACC also oversees Shenzhou missions with the help of four Yuan Wang-class tracking ships. It has dedicated subsidiaries for SINOSAT and Inmarsat. It was renamed to 北京航天飞行控制中心 (literally: Beijing Aerospace Flight Control Center) in 2006. As of March 2009, no official announcement had been made to revise its formal name in English in accordance with its new Chinese name. See also Xi'an Satellite Control Center Chinese space program References Chinese space program facilities
Beijing Aerospace Flight Control Center
[ "Astronomy" ]
312
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
9,342,843
https://en.wikipedia.org/wiki/Fujiki%20class%20C
In algebraic geometry, a complex manifold is called Fujiki class C if it is bimeromorphic to a compact Kähler manifold. This notion was defined by Akira Fujiki. Properties Let M be a compact manifold of Fujiki class C, and X its complex subvariety. Then X is also in Fujiki class C (Lemma 4.6). Moreover, the Douady space of X (that is, the moduli of deformations of a subvariety of M, with M fixed) is compact and in Fujiki class C. Fujiki class C manifolds are examples of compact complex manifolds which are not necessarily Kähler, but for which the ∂∂̄-lemma holds. Conjectures J.-P. Demailly and M. Păun have shown that a manifold is in Fujiki class C if and only if it supports a Kähler current. They also conjectured that a manifold M is in Fujiki class C if it admits a nef current which is big; a standard formulation of this bigness condition is sketched below. For a cohomology class which is rational, this statement is known: by the Grauert–Riemenschneider conjecture, a holomorphic line bundle L with first Chern class nef and big has maximal Kodaira dimension, hence the corresponding rational map to projective space is generically finite onto its image, which is algebraic, and therefore Kähler. Fujiki and Ueno asked whether the Fujiki class C property is stable under deformations. This conjecture was disproven in 1992 by Y.-S. Poon and Claude LeBrun. References Algebraic geometry Complex manifolds
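For reference, the bigness condition for a nef class referred to above is usually written as follows; this display is supplied from the standard convention in the literature and should be read as an assumption rather than as Demailly and Păun's precise wording:

\[
\int_M \omega^n > 0, \qquad n = \dim_{\mathbb{C}} M,
\]

where ω is a representative of the nef class on the compact complex manifold M.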
Fujiki class C
[ "Mathematics" ]
322
[ "Fields of abstract algebra", "Algebraic geometry" ]
9,343,886
https://en.wikipedia.org/wiki/Civilian%20Space%20eXploration%20Team
The Civilian Space eXploration Team, also known as the CSXT, is a team of around 30 civilians interested in private spaceflight. The team was created by Ky Michaelson. Having conducted multiple rocket launches in an attempt to establish altitude records, CSXT became the first entity to officially launch an amateur rocket into space on May 17, 2004, with the successful launch of its GoFast rocket to an altitude of 116 km (72 miles) above the surface, which was verified by FAA analysis of the team's flight data. Prior privately funded space launches were achieved by the Orbital Sciences Pegasus, and many other amateur teams have made unverified claims of rocket flights exceeding the boundary of space. Team composition Currently, Ky Michaelson is the CSXT program director. The CSXT's program is subdivided into three teams: Rocket Design and Ground Support Equipment Avionics and Ground System Design Wind Weighting System Development History The team was established in 1995 by a group of model rocket hobbyists interested in spaceflight. The team is supported by corporate sponsorship. D.R. Hero The D.R. Hero rocket was launched in August 1995. It was dedicated to stuntman Dar Robinson, a late friend of Ky Michaelson. The rocket was tall and in diameter. It was anticipated to reach . This rocket was destroyed just above the ground in a large CATO (catastrophe at take-off) when its motor failed. Joe Boxer launches Launched on August 18, 1996, this rocket was also tall and 6 inches in diameter. The name is attributed to the largest contributing sponsor, Joe Boxer. It was anticipated to reach ; however, the actual height obtained was only . The entire rocket was recovered after what was considered a successful flight. All of the rocket's systems functioned as intended, and this flight was claimed to be the first amateur rocket to be recovered intact after reaching more than . Space shots 1997 Launched on July 21, 1997, this slightly smaller rocket was tall and 6 inches in diameter, with an upper stage dart, which was only 3 or 4 inches across. It was the first two-stage rocket launched by CSXT and was expected to reach . During the launch, an electronics failure prevented the ignition of the second stage, though the first stage successfully detached and was recovered with a parachute. 2000 This rocket was launched on September 29, 2000, and was tall and 8.625 inches in diameter. It was expected to reach with a maximum speed of . After launch, the rocket encountered problems at , where the wind sheared off a fin, causing the rocket to break apart. Although the launch was fairly unsuccessful, it did set a record for amateur rocket speed of . 2002 This rocket was launched on September 19, 2002. It was launched at the Black Rock Desert in Nevada. The rocket was equipped with a solid propellant motor. The motor was to accelerate the rocket to Mach 5. The rocket was equipped with GPS receivers and antennas, video recording devices, and a series of flight monitoring devices. Three seconds after the rocket launched, the motor burned through the casing, causing the rocket to fail. 2004 "GoFast" The rocket was launched on Monday, May 17, 2004. This rocket was the first amateur rocket to exceed , the official boundary of outer space. It was launched at the Black Rock Desert. The rocket reached top speed of in 10 seconds, and reached an estimated altitude of . The avionics were recovered by deployment of a parachute. The final verified altitude of the rocket was released as . 
The rocket was tall and in diameter, and used an ammonium perchlorate-based solid propellant. 2014 "GoFast" On July 14, 2014, the team repeated their accomplishment with a second successful space launch. Analysis of the data from the recovered military-grade Inertial Measurement Unit (IMU) that flew onboard shows that the GoFast rocket reached an altitude of above mean sea level and hit a top speed of . See also Amateur rocketry Private spaceflight Copenhagen Suborbitals Sounding Rocket References External links High Altitude Amateur Rocket Records, HobbySpace.com (2004) Mystery Solved: Stratofox Recovers CSXT Booster, Stratofox Aerospace Tracking Team (2004) CSXT SpaceShot 2004 - First Amateur Launch to Space, Stratofox Aerospace Tracking Team (2004) Recollections of the CSXT Space Shot 2004 (5th anniversary page), Stratofox Aerospace Tracking Team (2009) , posted by Ky Michaelson , posted by Wayne Vaughan , posted by Derek Deville , posted by Ian Kluft The Rocketman, Ky Michaelson's website. , posted by Ky Michaelson Rocketry Private spaceflight Amateur radio organizations 1995 establishments in the United States
Civilian Space eXploration Team
[ "Engineering" ]
948
[ "Rocketry", "Aerospace engineering" ]
9,344,115
https://en.wikipedia.org/wiki/113P/Spitaler
Comet Spitaler is a periodic comet in the Solar System discovered by Rudolf Ferdinand Spitaler (Vienna, Austria) on November 17, 1890, while attempting to observe Comet Zona (C/1890 V1). Spitaler, together with G. M. Searle, J. F. Tennant, and J. R. Hind, calculated orbits based on the observations, but despite predictions of a return in 1897, the comet was lost and remained so for more than a century. On October 24, 1993, the comet was rediscovered by J. V. Scotti (Spacewatch, Kitt Peak Observatory, Arizona, United States); it was confirmed as Spitaler's comet when Brian G. Marsden connected the 1890 and 1994 apparitions. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 113P at Kronk's Cometography 113P at Kazuo Kinoshita's Comets 113P at Seiichi Yoshida's Comet Catalog Periodic comets 0113 113P 18901117 Recovered astronomical objects
113P/Spitaler
[ "Astronomy" ]
221
[ "Recovered astronomical objects", "Astronomical objects", "Astronomy stubs", "Comet stubs" ]
9,344,327
https://en.wikipedia.org/wiki/112P/Urata%E2%80%93Niijima
Comet Urata–Niijima is a periodic comet in the Solar System discovered by Japanese astronomers Tsuneo Niijima and Takeshi Urata on October 30, 1986, at Ojima. The first orbit was calculated by Brian G. Marsden on November 5, giving an orbital period of 6.42 years. On October 20, 1993, the comet was recovered by J. V. Scotti (Spacewatch, Kitt Peak Observatory, Arizona, United States), and on the next return, on March 4, 2000, by Philippe L. Lamy and Harold A. Weaver using the Hubble Space Telescope. The nucleus of the comet has a radius of 0.90 ± 0.05 kilometers, assuming a geometric albedo of 0.04. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 112P at Kronk's Cometography 112P at Kazuo Kinoshita's Comets 112P at Seiichi Yoshida's Comet Catalog Periodic comets 0112 112P 112P 19861030
112P/Urata–Niijima
[ "Astronomy" ]
218
[ "Astronomy stubs", "Comet stubs" ]
9,345,044
https://en.wikipedia.org/wiki/Clitocybe%20dealbata
Clitocybe dealbata, also known as the ivory funnel, is a small white funnel-shaped basidiomycete fungus widely found in lawns, meadows and other grassy areas in Europe and North America. Also known as the sweating mushroom, or sweat-producing clitocybe, it derives these names from the symptoms of poisoning. It contains potentially deadly levels of muscarine. Taxonomy and naming Clitocybe dealbata was initially described by British naturalist James Sowerby in 1799 as Agaricus dealbatus, its specific epithet derived from the Late Latin verb dealbare 'to whitewash', calling to mind the Biblical "whited sepulchre": outwardly pleasing but inwardly toxic. It gained its current genus name in 1874 when reclassified by French naturalist Claude Casimir Gillet. However, this species is often considered a synonym of Clitocybe rivulosa, and according to Bon the name C. dealbata may be invalid (a nomen dubium), as James Sowerby's definition conflicts with Elias Magnus Fries's. Description A small mushroom, white or dusted with buff, it has a 2–4 cm diameter cap that is flattened to depressed, with crowded white adnate to decurrent gills. The stipe is 2–4 cm tall and 0.5–1 cm wide. The spore print is white. There is no distinctive taste or smell. It is one of a number of similar poisonous species, such as the false champignon (Clitocybe rivulosa), which can be confused with the edible fairy ring champignon (Marasmius oreades) or the miller (Clitopilus prunulus). Distribution and habitat The ivory funnel is found in grassy habitats in summer and autumn. Often gregarious, it can form fairy rings. It often occurs in grassy areas where it may be encountered by children or pets, which increases the risk of accidental consumption. Toxicity The main toxic component of Clitocybe dealbata is muscarine, and thus the symptoms are like those of nerve agent poisoning, namely greatly increased salivation, sweating (perspiration), and the flow of tears (lacrimation) within 15–30 minutes of ingestion. With large doses, these symptoms may be followed by abdominal pain, severe nausea, diarrhea, blurred vision, and labored breathing. Intoxication generally subsides within two hours. Death is rare, but may result from arrhythmia or respiratory failure in severe cases. The specific antidotes are muscarinic receptor blockers such as atropine and scopolamine. See also List of deadly fungi References External links Toxicity, Mushrooms - Muscarine dealbata Deadly fungi Fungi of Europe Fungi found in fairy rings Poisonous fungi Taxa named by James Sowerby Fungus species
Clitocybe dealbata
[ "Biology", "Environmental_science" ]
598
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
9,345,201
https://en.wikipedia.org/wiki/Siemens%20SL10
The Siemens SL10 is a sliding mobile phone with a four-color screen (red, green, blue, and white). It was the second mobile phone with a multicolor screen after the Siemens S10 and the first sliding mobile phone. References SL10 Mobile phones with infrared transmitter
Siemens SL10
[ "Technology" ]
59
[ "Mobile technology stubs", "Mobile phone stubs" ]
9,345,461
https://en.wikipedia.org/wiki/Refectory
A refectory (also frater, frater house, fratery) is a dining room, especially in monasteries, boarding schools and academic institutions. One of the places the term is most often used today is in graduate seminaries. The name derives from the Latin reficere "to remake or restore," via Late Latin refectorium, which means "a place one goes to be restored" (cf. "restaurant"). Refectories and monastic culture Communal meals are the times when all monks of an institution are together. Diet and eating habits differ somewhat by monastic order, and more widely by schedule. The Benedictine rule is illustrative. The Rule of St Benedict orders two meals. Dinner is provided year-round; supper is also served from late spring to early fall, except for Wednesdays and Fridays. The diet originally consisted of simple fare: two dishes, with fruit as a third course if available. The food was simple, with the meat of mammals forbidden to all but the sick. Moderation in all aspects of diet is the spirit of Benedict's law. Meals are eaten in silence, facilitated sometimes by hand signals. A single monk might read aloud from the Scriptures or writings of the saints during the meals. Size, structure, and placement Refectories vary in size and dimension, based primarily on the wealth and size of the monastery, as well as when the room was built. They share certain design features. Monks eat at long benches; important officials sit at raised benches at one end of the hall. A lavabo, or large basin for hand-washing, usually stands outside the refectory. Tradition also fixes other factors. In England, the refectory is generally built on an undercroft (perhaps in an allusion to the upper room where the Last Supper reportedly took place) on the side of the cloister opposite the church. Benedictine models are traditionally laid out on an east–west axis, while Cistercian models lie north–south. Norman refectories could be as large as long by wide (such as the abbey at Norwich). Even relatively early refectories might have windows, but these became larger and more elaborate in the high medieval period. The refectory at Cluny Abbey was lit through thirty-six large glazed windows. The twelfth-century abbey at Mont Saint-Michel had six windows, five feet wide by twenty feet high. Eastern Orthodox In Eastern Orthodox monasteries, the trapeza (, refectory) is considered a sacred place, and in some cases is even constructed as a full church with an altar and iconostasis. Some services are intended to be performed specifically in the trapeza. There is always at least one icon with a lampada (oil lamp) kept burning in front of it. The service of the Lifting of the Panagia is performed at the end of meals. During Bright Week, this service is replaced with the Lifting of the Artos. In some monasteries, the Ceremony of Forgiveness at the beginning of Great Lent is performed in the trapeza. Modern usage As well as continued use of the historic monastic meaning, the word refectory is often used in a modern context to refer to a café or cafeteria that is open to the public—including non-worshipers such as tourists—attached to a cathedral or abbey. This usage is particularly prevalent in Church of England buildings, which use the takings to supplement their income. Many universities in the UK also call their student cafeteria or dining facilities the refectory. 
The term is rare at American colleges, although Brown University calls its main dining hall the Sharpe Refectory, the main dining hall at Rhodes College is known as the Catherine Burrow Refectory, and in August 2019 Villanova University chose the name 'The Refectory' for a "sophisticated-yet-casual restaurant service" (open to students and the public), purposefully acknowledging the history of the refectory name, which connotes "a dining room for communal meals at academic institutions and monasteries". See also Refectory table References Sources Adams, Henry, Mont Saint-Michel and Chartres. New York: Penguin, 1986. Fernie, E. C. The Architecture of Norman England. Oxford: Oxford University Press, 2000. Harvey, Barbara. Living and Dying in England, 1100-1450. Oxford: Clarendon Press, 1995. Singman, Jeffrey. Daily Life in Medieval Europe. Westport, CT: Greenwood Press, 1999. Webb, Geoffrey. Architecture in Britain: the Middle Ages. Baltimore: Penguin, 1956. External links Refectory in Russian Orthodox Convent, Jerusalem Church architecture Restaurants by type Rooms Eastern Orthodox liturgy Sacral architecture
Refectory
[ "Engineering" ]
985
[ "Rooms", "Sacral architecture", "Architecture" ]
9,346,085
https://en.wikipedia.org/wiki/PHD%20finger
The PHD finger was discovered in 1993 as a Cys4-His-Cys3 motif in the plant homeodomain (hence PHD) proteins HAT3.1 in Arabidopsis and maize ZmHox1a. The PHD zinc finger motif resembles the metal-binding RING domain (Cys3-His-Cys4) and the FYVE domain. It occurs as a single finger, but often in clusters of two or three, and it also occurs together with other domains, such as the chromodomain and the bromodomain. Role in epigenetics The PHD finger, approximately 50–80 amino acids in length, is found in more than 100 human proteins. Several of the proteins it occurs in are found in the nucleus and are involved in chromatin-mediated gene regulation. The PHD finger occurs in proteins such as the transcriptional co-activators p300 and CBP, Polycomb-like protein (Pcl), Trithorax-group proteins like ASH1L, ASH2L and MLL, the autoimmune regulator (AIRE), the Mi-2 complex (part of the histone deacetylase complex), the co-repressor TIF1, the JARID1 family of demethylases, and many more. Structure The NMR structure of the PHD finger from human WSTF (Williams Syndrome Transcription Factor) shows that the conserved cysteines and histidine coordinate two Zn2+ ions. In general, the PHD finger adopts a globular fold, consisting of a two-stranded beta-sheet and an alpha-helix. The region consisting of these secondary structures and the residues involved in coordinating the zinc ions is very conserved among species. The loop regions I and II are variable and could contribute functional specificity to the different PHD fingers. Function The PHD fingers of some proteins, including ING2, YNG1 and NURF, have been reported to bind to histone H3 tri-methylated on lysine 4 (H3K4me3), while other PHD fingers have tested negative in such assays. A protein called KDM5C has a PHD finger, which has been reported to bind histone H3 tri-methylated lysine 9 (H3K9me3). Based on these publications, binding to tri-methylated lysines on histones may therefore be a property widespread among PHD fingers. Domains that bind to modified histones are called epigenetic readers, as they specifically recognize and bind the modified version of the residue. The modification H3K4me3 is associated with the transcription start sites of active genes, while H3K9me3 is associated with inactive genes. The modifications of the histone lysines are dynamic, as there are methylases that add methyl groups to the lysines, and there are demethylases that remove methyl groups. KDM5C is a histone H3 lysine 4 demethylase, which means it is an enzyme that can remove the methyl groups of lysine 4 on histone 3 (making it H3K4me2 or H3K4me1). One can only speculate whether the H3K9me3 binding of the KDM5C PHD domain provides a crosstalk between trimethylation of H3K9 and demethylation of H3K4me3. Such crosstalk has been suggested earlier for other domains involved in chromatin regulation, and may provide strictly coordinated regulation. Another example is the PHD finger of the BHC80/PHF21A protein, which is a component of the LSD1 complex. In this complex, LSD1 specifically demethylates H3K4me2 to H3K4me0, and BHC80 binds H3K4me0 through its PHD finger to stabilize the complex at its target promoters, presumably to prevent further re-methylation. This is the first example of a PHD finger recognizing lysine methyl-zero status. References Further reading Protein domains Protein structural motifs
PHD finger
[ "Biology" ]
859
[ "Protein structural motifs", "Protein domains", "Protein classification" ]
9,346,431
https://en.wikipedia.org/wiki/Death%20by%20Black%20Hole
Death by Black Hole: And Other Cosmic Quandaries is a 2007 popular science book written by Neil deGrasse Tyson. It is an anthology of several of Tyson's most popular articles, all published in Natural History magazine between 1995 and 2005, and was featured in an episode of The Daily Show with Jon Stewart. Summary Death by Black Hole is divided into seven sections: The Nature of Knowledge, The Knowledge of Nature, Ways and Means of Nature, The Meaning of Life, When the Universe Turns Bad, Science and Culture, and Science and God. Section 1 comprises five chapters: Chapter 1, "Coming to Our Senses", discusses how important the augmentation of our five basic senses (sight, hearing, taste, smell, touch) is for expanding scientific knowledge. Tools that convert (seemingly) latent aspects of our environment into quantities we can sense greatly ease scientific discovery. For example, night vision goggles convert the near-infrared spectrum into the visible spectrum, making it easier for biologists to observe nocturnal animal behavior. Chapter 2, "On Earth as in the Heavens", addresses the history of physics and how it came to be known that physical laws observed on Earth are also observed on the sun and the other planets. In short, how physics became a study of the universal rather than just the terrestrial. Chapter 3, "Seeing Isn't Believing", hints at the pitfalls of generalizing from too little evidence. It begins by making the point that although we know the Earth is round, it appears flat when one observes only a small, local portion of it. Chapter 4, "The Information Trap", observes that we can view our surroundings at many different scales, and may find different phenomena at different scales. For instance, on a macroscopic scale classical mechanics describes the physical behaviors we observe, while on a smaller scale, quantum mechanics comes into play. Chapter 5, "Stick-in-the-Mud Science", guides the reader through a series of experiments based primarily on watching how the shadow of a stick, stuck upright in the earth, changes as time passes. For example, one can observe that, in the northern hemisphere, over the course of a day, the shadow of the stick will trace out a semi-circle as it moves clockwise. External links WorldCat.org record for this book Book TV author talk about this book References 2007 non-fiction books Astronomy books Cosmology books Popular physics books Books by Neil deGrasse Tyson W. W. Norton & Company books
Death by Black Hole
[ "Astronomy" ]
509
[ "Astronomy books", "Astronomy book stubs", "Works about astronomy", "Astronomy stubs" ]
9,346,488
https://en.wikipedia.org/wiki/Expression%20language
An expression language is a computer language for creating a machine-readable representation of specific domain knowledge. Examples include: Advanced Boolean Expression Language, an obsolete hardware description language for hardware descriptions Data Analysis Expressions (DAX), an expression language developed by Microsoft and used in Power Pivot, among other places Jakarta Expression Language, a domain-specific language used in Jakarta EE web applications (formerly known as "Unified Expression Language", "Expression Language", or just "the Expression Language") Rights Expression Languages, machine-processable languages used for representing immaterial rights such as copyright and license information Computer languages
Expression language
[ "Technology" ]
122
[ "Computer languages", "Computing stubs", "Computer science", "Computer science stubs" ]
9,346,573
https://en.wikipedia.org/wiki/Multiplex%20ligation-dependent%20probe%20amplification
Multiplex ligation-dependent probe amplification (MLPA) is a variation of the multiplex polymerase chain reaction that permits amplification of multiple targets with only a single primer pair. It detects copy number changes at the molecular level, and software programs are used for analysis. Identification of deletions or duplications can indicate pathogenic mutations; thus MLPA is an important diagnostic tool used in clinical pathology laboratories worldwide. History Multiplex ligation-dependent probe amplification was invented by Jan Schouten, a Dutch scientist. The method was first described in 2002 in the scientific journal Nucleic Acids Research. The first applications included the detection of exon deletions in the human genes BRCA1, MSH2 and MLH1, which are linked to hereditary breast and colon cancer. Now MLPA is used to detect hundreds of hereditary disorders, as well as for tumour profiling. Description MLPA quantifies the presence of particular sequences in a sample of DNA, using a specially designed probe pair for each target sequence of interest. The process consists of multiple steps: The sample DNA is denatured, resulting in single-stranded sample DNA. Pairs of probes are hybridized to the sample DNA, with each probe pair designed to query for the presence of a particular DNA sequence. Ligase is applied to the hybridized DNA, combining probe pairs that are hybridized immediately next to each other into a single strand of DNA that can be amplified by PCR. PCR amplifies all probe pairs that have been successfully ligated, using fluorescently labeled PCR primers. The PCR products are quantified, typically by (capillary) electrophoresis. Each probe pair consists of two oligonucleotides, each containing a sequence that recognizes one of two adjacent sites on the target DNA, a PCR priming site, and optionally a "stuffer" to give the PCR product a unique length when compared to other probe pairs in the MLPA assay. Each complete probe pair must have a unique length, so that its resulting amplicons can be uniquely identified during quantification, avoiding the resolution limitations of multiplex PCR. Because the forward primer used for probe amplification is fluorescently labeled, each amplicon generates a fluorescent peak which can be detected by a capillary sequencer. By comparing the peak pattern obtained from a given sample with those obtained from reference samples, the relative quantity of each amplicon can be determined. This ratio is a measure of the relative amount of the target sequence in the sample DNA. Various techniques including DGGE (Denaturing Gradient Gel Electrophoresis), DHPLC (Denaturing High Performance Liquid Chromatography), and SSCA (Single Strand Conformation Analysis) effectively identify SNPs and small insertions and deletions. MLPA, however, is one of the only accurate, time-efficient techniques to detect genomic deletions and insertions (one or more entire exons), which are frequent causes of cancers such as hereditary non-polyposis colorectal cancer (HNPCC) and breast and ovarian cancer. MLPA can successfully and easily determine the relative copy number of all exons within a gene simultaneously with high sensitivity. Relative ploidy An important use of MLPA is to determine relative ploidy. For example, probes may be designed to target various regions of chromosome 21 of a human cell. The signal strengths of the probes are compared with those obtained from a reference DNA sample known to have two copies of the chromosome. 
If an extra copy is present in the test sample, the signals are expected to be 1.5 times the intensities of the respective probes from the reference. If only one copy is present, the proportion is expected to be 0.5. If the sample has two copies, the relative probe strengths are expected to be equal. Dosage quotient analysis Dosage quotient analysis is the usual method of interpreting MLPA data. If a and b are the signals from two amplicons in the patient sample, and A and B are the corresponding amplicons in the experimental control, then the dosage quotient DQ = (a/b) / (A/B). Although dosage quotients may be calculated for any pair of amplicons, it is usually the case that one of the pair is an internal reference probe. Applications MLPA facilitates the amplification and detection of multiple targets with a single primer pair. In a standard multiplex PCR reaction, each fragment needs a unique amplifying primer pair. These primers, being present in large quantities, result in various problems such as dimerization and false priming. With MLPA, it is the ligated probes, rather than the sample DNA, that are amplified. Thus, many sequences (up to 40) can be amplified and quantified using just a single primer pair. The MLPA reaction is fast, inexpensive and very simple to perform. MLPA has a variety of applications including detection of mutations and single nucleotide polymorphisms, analysis of DNA methylation, relative mRNA quantification, chromosomal characterisation of cell lines and tissue samples, detection of gene copy number, detection of duplications and deletions in human cancer predisposition genes such as BRCA1, BRCA2, hMLH1 and hMSH2, and aneuploidy determination. MLPA has potential application in prenatal diagnosis, both invasive and noninvasive. Recent studies have shown that MLPA (as well as other variants such as iMLPA) is a robust technique for inversion characterisation. Variants iMLPA Giner-Delgado, Carla, et al. described a variant of MLPA combining it with inverse PCR (iPCR), which they call iMLPA. Its procedure is the same as MLPA, but two additional steps are necessary at the beginning: First, the DNA is treated with restriction enzymes that cut on both sides of the region of interest. The fragments obtained from digestion are then recircularized and ligated. The probe design is quite similar. Each probe is formed by two parts, each having at least a target sequence, which contains the sequence complementary to the region of interest so that correct hybridization can occur, and a primer sequence at the end, whose design varies and allows primer design and subsequent fragment amplification. In addition, one of the parts of the probe usually contains a stuffer between the target sequence and the primer sequence. The use of different stuffers allows the identification of probes with the same primer sequences but different target sequences, which is key for the amplification of several different fragments in a single reaction. The procedure then continues with the typical MLPA protocol. References External links Further applications of MLPA Biochemistry detection methods Molecular biology Laboratory techniques Polymerase chain reaction Amplifiers Gene tests
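A minimal sketch in Python of the dosage quotient arithmetic described above; the peak intensities, probe names, and copy-number thresholds are invented for illustration and are not taken from any MLPA kit or validated analysis.

# Dosage quotient (DQ) for MLPA: DQ = (a/b) / (A/B), where a, b are the
# test-sample peak intensities for a target probe and a reference probe,
# and A, B are the same probes in a normal control. Values near 1.0
# suggest two copies, ~0.5 a deletion, ~1.5 a duplication.

def dosage_quotient(a: float, b: float, A: float, B: float) -> float:
    return (a / b) / (A / B)

def call_copy_number(dq: float) -> str:
    # Illustrative thresholds; real analyses use validated, assay-specific cutoffs.
    if dq < 0.7:
        return "deletion (one copy)"
    if dq > 1.3:
        return "duplication (three copies)"
    return "normal (two copies)"

# Hypothetical peaks: target exon probe vs. internal reference probe.
dq = dosage_quotient(a=480.0, b=1000.0, A=960.0, B=1000.0)
print(f"DQ = {dq:.2f}: {call_copy_number(dq)}")  # DQ = 0.50: deletion (one copy)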
Multiplex ligation-dependent probe amplification
[ "Chemistry", "Technology", "Biology" ]
1,441
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "Chemical tests", "Gene tests", "nan", "Biochemistry detection methods", "Biochemistry", "Amplifiers", "Molecular biology" ]
9,346,993
https://en.wikipedia.org/wiki/Emotional%20conflict
Emotional conflict is the presence of different and opposing emotions relating to a situation that has recently taken place or is in the process of unfolding. It may be accompanied at times by physical discomfort, especially when a functional disturbance has become associated with an emotional conflict in childhood, and in particular by tension headaches "expressing a state of inner tension...[or] caused by an unconscious conflict". For C. G. Jung, "emotional conflicts and the intervention of the unconscious are the classical features of...medical psychology". Equally, "Freud's concept of emotional conflict as amplified by Anna Freud...Erikson and others is central in contemporary theories of mental disorder in children, particularly with respect to the development of psychoneurosis". In childhood development "The early stages of emotional development are full of potential conflict and disruption". Infancy and childhood are a time when "everything is polarised into extremes of love and hate" and when "totally opposite, extreme feelings about them must be getting put together too. Which must be pretty confusing and painful. It's very difficult to discover you hate someone you love". Development involves integrating such primitive emotional conflicts, so that "in the process of integration, impulses to attack and destroy, and impulses to give and share are related, the one lessening the effect of the other", until the point is reached at which "the child may have made a satisfactory fusion of the idea of destroying the object with the fact of loving the same object". Once such primitive relations to the mother or motherer have been at least partially resolved, "in the age period two to five or seven, each normal infant is experiencing the most intense conflicts" relating to wider relationships: "ideas of love are followed by ideas of hate, by jealousy and painful emotional conflict and by personal suffering; and where conflict is too great there follows loss of full capacity, inhibitions...symptom formation". Defences Defenses against emotional conflict include "splitting and projection. They deal with intrapsychic conflict not by addressing it, but by sidestepping it". Displacement too can help resolve such conflicts: "If an individual no longer feels threatened by his father but by a horse, he can avoid hating his father; here the distortion was a way out of the conflict of ambivalence. The father, who had been hated and loved simultaneously, is loved only, and the hatred is displaced onto the bad horse". Physical symptoms Inner emotional conflicts can result in physical discomfort or pain, often in the form of tension headaches, which can be episodic or chronic and may last from a few minutes to several days, with associated pain that is mild, moderate, or severe. "The physiology of nervous headaches still presents many unsolved problems", as in general do all such "physical alterations...rooted in unconscious instinctual conflicts". However, physical discomfort or pain without apparent cause may be the way our body tells us of an underlying emotional turmoil and anxiety, triggered by some recent event. Thus for example a woman "may be busy in her office, apparently in good health and spirits. A moment later she develops a blinding headache and shows other signs of distress. Without consciously noticing it, she has heard the foghorn of a distant ship, and this has unconsciously reminded her of an unhappy parting". 
In the workplace With respect to the post-industrial age, "LaBier writes of 'modern madness', the hidden link between work and emotional conflict...feelings of self-betrayal, stress and burnout". His "idea, which gains momentum in the post-yuppie late eighties...concludes that real professional success without regret or emotional conflict requires insanity of one kind or another". Cultural examples Advice on fiction writing emphasises the "necessity of creating powerful, emotional conflicts" in one's characters: "characters create the emotional conflict and the action emerges from the characters". Shakespeare's sonnets have been described as "implying an awareness of the possible range of human feelings, of the existence of complex and even contradictory attitudes to a single emotion". For Picasso, "the presence of death is always coincident with the taste for life...the superb violence of these emotional transports has led some people to call his work expressionist". See also Ambivalence Conflict management Emotional intelligence Honne and tatemae Love-hate relationship Love and hate (psychoanalytic concepts) Neurosis Psychosomatic medicine Splitting (psychology) References Further reading Douglas LaBier, Modern Madness: The Hidden Link Between Work and Emotional Conflict Psychodynamics Emotion Emotional issues Cognitive dissonance
Emotional conflict
[ "Biology" ]
957
[ "Emotion", "Behavior", "Human behavior" ]
9,347,711
https://en.wikipedia.org/wiki/Neurokinin%20A
Neurokinin A (NKA), formerly known as Substance K, is a neurologically active peptide translated from the pre-protachykinin gene. Neurokinin A has many excitatory effects on mammalian nervous systems and is also influential in mammalian inflammatory and pain responses. Introduction Neurokinin A (formerly known as substance K) is a member of the tachykinin family of neuropeptide neurotransmitters. Tachykinins are important contributors to nociceptive processing, satiety, and smooth muscle contraction. Tachykinins are known to be highly excitatory neurotransmitters in major central neural systems. Neurokinin A is ubiquitous in both the central and peripheral mammalian nervous systems, and seems to be involved in pain responses and inflammation. It is produced from the same preprotachykinin A gene as the neuropeptide substance P. Both substance P and neurokinin A are encoded by the same mRNA, which when alternatively spliced can be translated into either compound. It has various roles in the body of humans and other animals, specifically stimulation of extravascular smooth muscle, vasodilation, hypertensive action, immune system activation, and pain management. The deduced amino acid sequence of neurokinin A is as follows: His Lys Thr Asp Ser Phe Val Gly Leu Met (HKTDSFVGLM) with amidation at the C-terminus. Mechanism of action (Figure: signalling pathway; modified from Sun J, Ramnath RD, Tamizhselvi R, Bhatia M, "Neurokinin A engages neurokinin-1 receptor to induce NF-kappaB-dependent gene expression in murine macrophages: implications of ERK1/2 and PI 3-kinase/Akt pathways," Am J Physiol Cell Physiol. 2008 Sep;295(3):C679-91.) Like Substance P [SP], Neurokinin A is present in excitatory neurons and secretory cells of the hypothalamic–pituitary–adrenal axis. Additionally, both SP and neurokinin A are found in the neurosensory system, where they modulate a wide range of inflammatory and tissue-repairing processes. In various tissues, such as the skin, the release of bioactive tachykinins by sensory C fibers, which extend from the dorsal root ganglia into the epidermis, directly influences the activity of keratinocytes. Inflammation, tissue healing and cell proliferation have been linked to both SP and neurokinin A release into surrounding tissues. Nervous system The overstimulation of the hypothalamic–pituitary–adrenal axis and elevated secretion of corticotropin-releasing hormone from the hypothalamus have been studied in many clinical manifestations of pathological depression. Studies have shown that stress-induced activation of the noradrenergic prefrontal lobe system may be under the control of endogenously released corticotropin-releasing hormone as well as SP and neurokinin A. This work directly links the secretion of neurokinin A and SP to certain forms of depression characterized by the corticoid-receptor hypothesis of depression. Inflammatory responses within the central nervous system (CNS) are often the result of traumatic injury or exposure to infectious agents. Inflammation provides a protective immune response to such stresses but may also result in progressive damage to the CNS. There is significant evidence to indicate that tachykinins are a major component of the neural inflammatory response in peripheral tissues as well as the CNS. The ability to regulate tachykinin secretion represents an important mechanism for designing potentially useful drugs to treat inflammation. 
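The deduced sequence above can be checked against the tachykinin family's shared C-terminal consensus, Phe-X-Gly-Leu-Met-NH2 (given in the Structure section below). The following is a minimal illustrative sketch, not a published tool; the substance P sequence used for comparison is taken from the standard literature rather than from this article.

```python
# Minimal sketch: test whether a one-letter peptide sequence ends in the
# tachykinin C-terminal consensus Phe-X-Gly-Leu-Met (F-x-G-L-M).
def is_tachykinin_like(sequence: str) -> bool:
    tail = sequence[-5:]
    # Position 0 must be Phe; position 1 (x) is unconstrained; then Gly-Leu-Met.
    return len(tail) == 5 and tail[0] == "F" and tail[2:] == "GLM"

print(is_tachykinin_like("HKTDSFVGLM"))   # neurokinin A: ...Phe-Val-Gly-Leu-Met -> True
print(is_tachykinin_like("RPKPQQFFGLM"))  # substance P (literature sequence)   -> True
print(is_tachykinin_like("YGGFMTSEKS"))   # an unrelated fragment               -> False
```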
Neurokinin A has been associated with the cytokines interleukin-1 and interleukin-6, both of which are heavily involved in the inflammatory process during infections. Neuronal tissue can be severely damaged either through physical trauma or through intracellular stresses, either chronic or acute. Either of these scenarios can result in calcium overload, protein degradation, the unfolded protein response or an accumulation of DNA damage. Endogenous cellular responses are activated within nerve tissue in response to damage in order to protect cellular, protein, and nucleic acid integrity. A large variety of neuroprotective signaling mechanisms exist, which can be manipulated by drugs to reduce cellular damage in neurons. Tachykinins thus have a number of neuroprotective physiological roles in medical conditions. Immune system The immune system is a highly integrated system which receives input from many sources, such as sites of injury, nociceptors and white blood cells. Chemical signals are therefore an important component of paracrine, autocrine and endocrine signaling. Neurokinin A has been shown to be a potent chemoattractant for T cells, increasing their migration into infected tissues. This migration is necessary for the pathogen-seeking activity of T cells. Some chemokines trigger the intravascular adhesion of T cells whereas others direct the migration of leukocytes into and within the extravascular space. Since lymphocytes must be positioned correctly to interact with other cells, the pattern of chemokine receptors and the type and distribution of chemokines in tissues critically influence immune responses. The molecular mechanism behind neurokinin A's role as a chemoattractant is currently unclear. Neurokinin A has an inhibitory effect on the formation of myeloid cells, and this effect appears to be mediated by one specific receptor, since it can be completely abolished by an NK-2 receptor-selective antagonist. The inhibitory effect of neurokinin A is countered by the excitatory effect of a structurally similar compound: substance P. The opposite effects on myelogenesis by substance P and neurokinin A may represent an important feedback mechanism for maintenance of homeostasis. Respiratory system The binding of neurokinin A to the NK-2 receptor results in bronchoconstriction, mucus production in the lungs, and neurogenic inflammation. This response is propagated through the stimulation of e-NANC nerves in the bronchial epithelium via an axon-reflex mechanism. Cardiovascular system Neurokinin A has been shown to contribute to both bradycardia and myocardial infarctions through the activation of NK-2 receptors. The dual sensory-motor function of neurokinin A-containing afferent neurons is a component of the intracardiac nervous system. Varicose processes of tachykinin-containing nerves are abundant in coronary arteries and in the cardiac ganglia. The diverse responses that are triggered by locally released tachykinins produce beneficial effects such as modulation of ganglion transmission. However, it is also possible that excessive stimulation of cardiac afferents and release of tachykinins, during pathological conditions such as myocardial infarction, could contribute to certain human pathologies. Receptor Tachykinins selectively bind and activate the G-protein-coupled receptors TACR1 (NK1R), TACR2 (NK2R), and TACR3 (NK3R). Neurokinin A binds to its G-protein-coupled receptor, ultimately increasing the release of inositol phosphate and calcium second messengers. 
Each receptor demonstrates a specific affinity for either neurokinin A or substance P. Both peptides, however, can act as full agonists on either receptor, although their potency is decreased when not bound to their specific receptor. NK-2 receptor NK-2 receptors are expressed predominantly in the CNS. Networks involved in emotional processing, such as the prefrontal cortex, cingulate cortex, and amygdala, show the highest concentration of NK-2 receptors. NK-2 receptor antagonists have been theorized to have antidepressant benefits and are presently in clinical trials. As a consequence of its ability to stimulate intestinal smooth muscle, NKA is considered to be specifically active in regulating intestinal motility by its action on NK-2 receptors. Antagonists MEN 11420 has been demonstrated to be a potent, selective and competitive antagonist of tachykinin NK-2 receptors, in both animal and human models. In in vivo animal models, MEN 11420 produces an effective and long-lasting blockade of the NK-2 receptors expressed in the smooth muscle of the intestinal, genito-urinary and respiratory tracts. History Neurokinin A was isolated from porcine spinal cord in 1931 by von Euler and Gaddum. Structure Tachykinins are a structurally related group of neuropeptides sharing the C-terminal sequence Phe-X-Gly-Leu-Met-NH2. The amino acid sequences of substance P and neurokinin A are well conserved across mammalian species. The structure of mammalian neurokinin A was obtained using CD spectropolarimetry and 2D proton NMR. Analysis showed that in water the peptide adopts an extended conformation, while in the presence of micelles (a model cell-membrane system) an alpha-helical conformation is induced in the central core (Asp4–Met10). Genetic overview The pre-protachykinin-1 and pre-protachykinin-2 genes in mice encode four distinct peptides with varying physiological functions. Alternative splicing of the pre-protachykinin-1 gene gives rise to four different peptide precursors (alpha-tac1, beta-tac1, delta-tac1 and gamma-tac1), which are further processed into several related peptides including neurokinin A and substance P. The alpha-tac1 and beta-tac1 precursors encode synthesis of both substance P and neurokinin A. (Figure: gene organization; modified from Nakanishi, Shigetada, "Molecular Mechanisms of Intercellular Communication in the Hormonal and Neural Systems," IUBMB Life 58.5/6 (2006): 349–357.) Mouse models Pre-protachykinin-1 -/- mice show normal fertility and behavioral patterns (litter-mate socialization and pup rearing), but have a reduced sense of anxiety when threatened, compared to both wild-type mice and other mouse models of depression. Applications Cancer The circulating concentration of neurokinin A is an independent indicator of poor prognosis in certain cancers such as carcinoids. Patients presenting with neurokinin A plasma concentrations of >50 pmol/L showed a poorer 3-year survival rate than patients presenting with neurokinin A concentrations of less than 50 pmol/L. These types of studies show that measuring tachykinin levels in human patients may have clinical relevance. Patients with Midgut Carcinoid disease (MGC) commonly receive a neurokinin A test to determine the progression of their disease. Midgut Carcinoid disease is an uncommon disease with occurrence rates of approximately 1.4 per 100,000 of the population per year. MGC has an unpredictable course; depending on the patient, symptoms and progression range from rapid and aggressive to chronic. 
Treatment is difficult because of the varying degrees of severity, so assessing the extent of the disease is extremely important for effective treatment. Asthma The blocking of neuropeptide signaling has become a novel therapeutic target for suppressing bronchoconstriction in asthma patients. Bronchoconstriction is among the most prominent and extensively studied effects caused by tachykinins. Tachykinins have numerous effects in the respiratory system, especially in asthma patients, who are more responsive to tachykinin administration. Through studies with human airways, researchers have examined the role tachykinins play in bronchoconstriction, most notably through the NK-2 receptor, though regulation of NK-2 receptors seems to be mediated by the activity of NK-1 receptors, suggesting a complicated inhibition mechanism. Administration of DNK333 (a dual NK-1/NK-2 tachykinin receptor antagonist) has shown protective activity against neurokinin A-induced bronchoconstriction. Psychiatric disorders Neurokinin A is involved in many stress-induced neurological disorders, such as depression, schizophrenia and epilepsy. Affective disorders Affective disorders are characterized by a frequent, fluctuating alteration in mood, affecting the patient's thoughts, emotions, and behaviors. Affective disorders include depression, anxiety, and bipolar disorder. A number of approaches have been utilized to study the role that neurokinin A plays in the manifestation and continuation of human affective disorders. Measurements of serum peptide levels showed that depressed and highly anxious patients had higher plasma tachykinin levels than their low-anxiety counterparts. In addition to studies of plasma tachykinin levels, cerebrospinal fluid (CSF) levels of neurokinin A have also been directly correlated with depression. Under states of depression, neurokinin immunoreactivity is increased in the frontal cortex and decreased in the striatum. These peptide levels were not found to be normalized by lithium treatment in mice. Elevated levels of tachykinins in CSF have been found in patients with fibromyalgia syndrome, a disorder that is strongly correlated with depression in human patients. Tachykinin ligands have been extensively studied and determined to be functionally linked to the control of affective phenotypes in a complex physiological manner. Epilepsy Epilepsy is a broad category of disorders with varying severity and presenting symptoms. Neurokinins have been experimentally identified as possible contributors to the generation of certain forms of epilepsy. Experimentally, substance P injected into the rat hippocampus significantly lowers the seizure-initiation threshold in a dose-dependent manner. Experimental data thus indicate a pro-convulsant role for the pre-protachykinin-1 gene, and hence for substance P and neurokinin A. References External links Peptides
Neurokinin A
[ "Chemistry" ]
2,941
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
9,347,993
https://en.wikipedia.org/wiki/Specificity%20constant
In the field of biochemistry, the specificity constant (also called kinetic efficiency or $k_{cat}/K_M$) is a measure of how efficiently an enzyme converts substrates into products. A comparison of specificity constants can also be used as a measure of the preference of an enzyme for different substrates (i.e., substrate specificity). The higher the specificity constant, the more the enzyme "prefers" that substrate. The following equation, known as the Michaelis–Menten model, is used to describe the kinetics of enzymes: $\mathrm{E + S \underset{k_r}{\overset{k_f}{\rightleftharpoons}} ES \xrightarrow{k_{cat}} E + P}$, where E, S, ES, and P represent enzyme, substrate, enzyme–substrate complex, and product, respectively. The symbols $k_f$, $k_r$, and $k_{cat}$ denote the rate constants for the "forward" binding and "reverse" unbinding of substrate, and for the "catalytic" conversion of substrate into product, respectively. The Michaelis constant in turn is defined as $K_M = (k_r + k_{cat})/k_f$. The Michaelis constant is equal to the substrate concentration at which the enzyme converts substrates into products at half its maximal rate and hence is related to the affinity of the substrate for the enzyme. The catalytic constant ($k_{cat}$) is the rate of product formation when the enzyme is saturated with substrate and therefore reflects the enzyme's maximum rate. The rate of product formation is dependent on both how well the enzyme binds substrate and how fast the enzyme converts substrate into product once substrate is bound. For a kinetically perfect enzyme, every encounter between enzyme and substrate leads to product, and hence the reaction velocity is limited only by the rate at which the enzyme encounters substrate in solution. Hence the upper limit for $k_{cat}/K_M$ is equal to the rate of substrate diffusion, which is between $10^8$ and $10^9\ \mathrm{M^{-1}\,s^{-1}}$. See also Turnover number References Enzyme kinetics
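As a small worked illustration of the definitions above, the sketch below computes $k_{cat}/K_M$ for two substrates and reports which one the enzyme "prefers"; all kinetic values are invented placeholders, not measurements from any source.

```python
# Illustrative sketch: comparing substrate preference via k_cat / K_M.
def specificity_constant(k_cat: float, K_M: float) -> float:
    """k_cat in s^-1, K_M in M; returns the specificity constant in M^-1 s^-1."""
    return k_cat / K_M

# Hypothetical numbers chosen to show that a higher k_cat alone is not decisive.
substrates = {
    "substrate A": specificity_constant(k_cat=100.0, K_M=1e-4),  # 1.0e6 M^-1 s^-1
    "substrate B": specificity_constant(k_cat=500.0, K_M=5e-3),  # 1.0e5 M^-1 s^-1
}
preferred = max(substrates, key=substrates.get)
print(substrates)               # {'substrate A': 1000000.0, 'substrate B': 100000.0}
print("preferred:", preferred)  # substrate A, despite its lower k_cat
```

Note that substrate B has the higher turnover number, yet substrate A is "preferred" because its specificity constant is tenfold higher, which is exactly the comparison described above.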
Specificity constant
[ "Chemistry" ]
375
[ "Chemical kinetics", "Enzyme kinetics" ]
9,348,093
https://en.wikipedia.org/wiki/Energy%20engineering
Energy engineering is a multidisciplinary field of engineering that focuses on optimizing energy systems, developing renewable energy technologies, and improving energy efficiency to meet the world's growing demand for energy in a sustainable manner. It encompasses areas such as energy harvesting and storage, energy conversion, energy materials, energy systems, energy efficiency, energy services, facility management, plant engineering, energy modelling, and environmental compliance. As one of the most recent engineering disciplines to emerge, energy engineering plays a critical role in addressing global challenges like climate change, carbon reduction, and the transition from fossil fuels to renewable energy sources and sustainable energy. Energy engineering combines knowledge from the fields of physics, mathematics, and chemistry with economic and environmental engineering practices. Energy engineers apply their skills to increase efficiency and further develop renewable sources of energy. The main job of energy engineers is to find the most efficient and sustainable ways to operate buildings and manufacturing processes. Energy engineers audit the use of energy in those processes and suggest ways to improve the systems, for example advanced lighting, better insulation, and more efficient heating and cooling systems for buildings (a short illustrative payback calculation of the kind such an audit produces appears below). Although an energy engineer is concerned with obtaining and using energy in the most environmentally friendly ways, the field is not limited strictly to renewable sources such as hydro, solar, biomass, or geothermal; energy engineers are also employed in oil and natural gas extraction. Purpose The primary purpose of energy engineering is to optimize the production and use of energy resources while minimizing energy waste and reducing environmental impact. This discipline is vital for designing systems that consume less energy, meet carbon-reduction targets, and improve the energy efficiency of processes in the industrial, commercial, and residential sectors. Often applied to building design, heavy consideration is given to HVAC, lighting, and refrigeration, both to reduce energy loads and to increase the efficiency of existing systems. Energy engineering is increasingly seen as a major step forward in meeting carbon-reduction targets. Since buildings and houses consume over 40% of the United States' energy, the services an energy engineer performs are in demand. History Human civilizations have long relied on the conversion of energy for various purposes, from the use of fire to the development of water wheels, windmills, and, eventually, electricity generation. The formalization of energy engineering began during the industrial revolution and accelerated in the mid-20th century with advancements in electrical power systems, nuclear energy, and renewable energy technologies. The oil crisis of 1973 highlighted the need for increased energy efficiency and energy independence, leading to the establishment of new government programs and industry standards. In addition, the energy crisis of 1979 brought to light the need to get more work out of less energy. The United States government passed several laws to promote increased energy efficiency, such as United States public law 94-413, the Federal Clean Car Incentive Program. 
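The payback calculation promised above: a hypothetical sketch of the audit arithmetic an energy engineer might run, reducing a proposed retrofit to a simple payback period. Every figure (energy saved, tariff, retrofit cost) is an invented placeholder, not data from this article.

```python
# Hypothetical energy-audit arithmetic: simple payback period for a retrofit.
annual_kwh_saved = 120_000   # projected savings from e.g. lighting and insulation
price_per_kwh = 0.12         # assumed utility tariff, $/kWh
retrofit_cost = 60_000       # assumed installed cost of the upgrade, $

annual_savings = annual_kwh_saved * price_per_kwh  # $14,400 per year
payback_years = retrofit_cost / annual_savings     # ~4.2 years
print(f"Annual savings: ${annual_savings:,.0f}; simple payback: {payback_years:.1f} years")
```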
Power engineering Power engineering, often viewed as a subset of electrical engineering, focuses on the generation, transmission, distribution, and utilization of electrical power. This subfield covers critical infrastructure such as power plants, electric grids, and energy storage systems, ensuring the efficient and reliable delivery of energy across various sectors. Emerging technologies in power engineering include the development of smart grids, microgrids, and advanced energy storage systems like lithium-ion batteries and hydrogen fuel cells, which are central to the future of renewable energy integration. Leadership in Energy and Environmental Design Leadership in Energy and Environmental Design (LEED) is a program created by the United States Green Building Council (USGBC) in March 2000. LEED encourages green building and promotes sustainability in the construction of buildings and the efficiency of the utilities in those buildings. In 2012 the United States Green Building Council asked the independent firm Booz Allen Hamilton to conduct a study on the effectiveness of the LEED program. "This study confirmed that green buildings generate substantial energy savings. From 2000–2008, green construction and renovation generated $1.3 billion in energy savings. Of that $1.3 billion, LEED-certified buildings accounted for $281 million." The study also found that green construction as a whole supported 2.4 million jobs. Energy efficiency Energy efficiency can be viewed in two ways: either more work is done from the same amount of energy used, or the same amount of work is accomplished with less energy used in the system. One way to get more work out of less energy is to "Reduce, Reuse, and Recycle" the materials used in daily life. The advancement of technology has also led to other uses of waste, such as waste-to-energy facilities, which convert solid waste through gasification or pyrolysis into liquid fuels to be burned. The Environmental Protection Agency stated that the United States produced 250 million tons of municipal waste in 2010. Of that 250 million tons, roughly 54% goes to landfills, 33% is recycled, and 13% goes to energy-recovery plants. European countries that pay more for fuel, such as Denmark, have more fully developed waste-to-energy facilities: in 2010 Denmark sent 7% of waste to landfills, recycled 69%, and sent 24% to waste-to-energy facilities (a worked breakdown of these figures appears below). Several other developed Western European countries have also taken energy engineering into consideration; Germany's "Energiewende" is a policy that set the goal of meeting 80% of electrical needs from renewable energy sources by 2050. Statistics As of 2023, the median annual salary for energy engineers in the U.S. ranges from $75,000 to $95,000, depending on experience and location. Energy engineers with expertise in renewable energy and energy storage tend to receive higher salaries due to the growing demand for sustainable solutions. The gender distribution in the field remains skewed, with around 80% of engineers being male, though efforts to increase diversity are underway through scholarships and mentorship programs. The job market for energy engineers is expected to grow rapidly over the next decade, driven by the shift towards clean energy and sustainable solutions to modern climate issues. 
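The worked breakdown promised above: converting the EPA shares quoted in the Energy efficiency section into absolute tonnage. Only the U.S. total (250 million tons) is given in the text, so Denmark is reported in shares only.

```python
# Worked breakdown of the 2010 municipal-waste figures quoted above.
US_TOTAL_TONS = 250e6  # EPA figure for US municipal waste, 2010
shares = {
    "United States": {"landfill": 0.54, "recycled": 0.33, "energy recovery": 0.13},
    "Denmark":       {"landfill": 0.07, "recycled": 0.69, "waste-to-energy": 0.24},
}

for stream, share in shares["United States"].items():
    tons = share * US_TOTAL_TONS / 1e6
    print(f"US {stream}: {tons:.1f} million tons")
# -> landfill 135.0, recycled 82.5, energy recovery 32.5 (million tons)

for stream, share in shares["Denmark"].items():
    print(f"Denmark {stream}: {share:.0%}")  # absolute total not given in the text
```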
Education To become an energy engineer, a bachelor's degree in energy engineering or related fields such as mechanical, electrical, or environmental engineering is typically required. Many universities now offer specialized energy engineering programs with a focus on renewable energy, energy storage, and grid management. Advanced certifications like the Certified Energy Manager (CEM) credential, offered by the Association of Energy Engineers, and graduate programs in sustainable energy systems further enhance career prospects. Several universities across the world have also established departments or centers offering energy engineering degrees, to better prepare future engineers for their careers. One of those programs is the IEP PEM Certification offered at Virginia Tech. Emerging technologies Emerging technologies in energy engineering are reshaping the way energy is produced, stored, and consumed. Innovations such as next-generation solar panels, modern wind turbine designs, energy storage systems (such as flow batteries and hydrogen fuel cells), and smart grid technologies are paving the way for a more sustainable energy future. These technologies are critical in reducing reliance on fossil fuels and ensuring the stability of renewable energy systems. Other advances include artificial intelligence and machine learning applications for optimizing energy use in real time, and carbon capture and storage (CCS) systems to mitigate emissions from existing power plants. Energy engineering in policy and society Energy engineers play a key role in shaping energy policies and regulations worldwide. Their expertise is essential in setting standards for energy efficiency, renewable energy integration, and reducing carbon footprints. Global initiatives like the Paris Agreement and the European Green Deal are influencing energy engineering practices, pushing the field toward more sustainable and equitable energy solutions. Additionally, energy engineers are increasingly involved in public and private sector collaborations, working with governments and corporations to design and implement large-scale energy infrastructure projects which have both societal and political impacts. Notes References External links Association of Energy Engineers World Energy Engineering Congress Penn State Energy Engineering Energy Managers Association Engineering disciplines
Energy engineering
[ "Engineering" ]
1,624
[ "Energy engineering", "nan" ]
9,348,251
https://en.wikipedia.org/wiki/DuPont%20Central%20Research
In 1957, the research organization of the Chemicals Department of E. I. du Pont de Nemours and Company was renamed the Central Research Department, beginning the history of the premier scientific organization within DuPont and one of the foremost industrial laboratories devoted to basic science. Located primarily at the DuPont Experimental Station and Chestnut Run in Wilmington, Delaware, it expanded to include laboratories in Geneva, Switzerland; Seoul, South Korea; Shanghai, China; and Hyderabad, India. In January 2016, a major layoff marked the end of the organization. History The company established a tradition of basic scientific research starting with the hiring of Wallace Carothers in 1928 and his systemization of polymer science that led to the development of polyamides such as nylon-6,6 and polychloroprene (neoprene) in the early 1930s. This tradition waned during World War II, then underwent a renaissance in the 1950s. The establishment of Central Research in 1957 formalized a corporate commitment to basic research. The execution and publication of high-quality research assisted recruiting and promoted the image of DuPont while raising morale among the CRD staff. The purpose of the research was to discover "the next nylon", because Carothers' success and the resulting commercialization of nylon had driven the Company's profits through the 1950s. (This research objective was never met.) Nonetheless, another important stated goal for CRD was “diversification through research,” and CRD produced a stream of scientific innovations that contributed to many different businesses throughout the corporation. CRD combined industrial and fundamental research, and the mix of the two was often determined by the head of CR&D. The title expanded from Director of Research to Vice President of Technology to Chief Technology Officer, with varying degrees of impact on research throughout the corporation as well as in CRD. The name of CRD also changed to reflect the times, starting with Chemicals Department and moving through Central Research Department (CRD) and Central Research and Development Department (CR&DD) to the present Central Research and Development (CR&D). CRD conducted research in a number of topical areas, often requiring an interdisciplinary approach. DuPont explored chemical reactions in supercritical water in the 1950s to support its production of CrO2 for magnetic recording tapes. Hyperbaric recrystallization of ultra-high-molecular-weight polyethylene led to DuPont's business in Hylamer polyethylene for bearing surfaces in hip and knee replacement arthroplasty. Urea and uracil compounds discovered in CRD were potent and selective herbicides, propelling DuPont into the agricultural chemicals business and culminating in sulfonylurea herbicides. Potassium titanyl phosphate, or KTP, is a versatile nonlinear optical material, originally designed for frequency-doubling red lasers to green for bloodless laser eye surgery; it now finds additional application in urological surgery and hand-held green laser pointers. In the 1950s, CRD housed a broad-based research program aimed largely at the synthesis and study of new classes of compounds. Synthesis of new organic and inorganic compounds accounted for about half of the total research. 
When the National Institutes of Health invited DuPont to submit compounds to its screening efforts, they rated DuPont as submitting by far the most diverse range of compounds – pharmaceutical companies were submitting things that looked like pharmaceuticals, but DuPont submitted compounds that would be classed internally as catalysts, optical materials, monomers, oligomers, ligands, inorganics, and other unusual materials. In addition to chemical synthesis, CRD maintained efforts centered on new physical and analytical techniques, chemical structure and reaction mechanism, and solid-state physics. DuPont continued in polymer research, and biological research increased significantly. Until recent years, a substantial portion of the research was of an academic nature. This academic research was reflected in the general atmosphere of the organization. In the late 1960s, CRD established a program for recruiting postdoctoral fellows. These fellowships were generally for two years, with the expectation that the fellow would leave for an academic institution. Every year one or two DuPont scientists would take one-year leaves of absence for university study and teaching. It was also accepted that every year a number of scientists would leave DuPont for academic positions and that several professors would join the staff permanently. A notable example was Richard Schrock, who left CRD for MIT and won the Nobel Prize in Chemistry. CRD was supported by numerous high-profile consultants who made significant contributions to DuPont. Jack Roberts of Caltech and Speed Marvel each consulted for well over 50 years and provided a steady supply of well-trained chemists. Robert Grubbs, who shared the Nobel Prize with Schrock, consulted for many years. These academic connections were sources of new generations of CRD researchers. The scientific accomplishments of Theodore L. Cairns, William D. Phillips, Earl Muetterties, Howard E. Simmons, Jr., and George Parshall were recognized by their election to the National Academy of Sciences. CRD management fostered an open and collaborative style. At its founding, the division of labor in CRD was “management,” “bench chemists,” and “technicians,” with the management and bench chemists having separate but overlapping promotional tracks. Under the Hay Grade system of pay levels that was employed then and now, there were eight professional or promotional levels for the “bench chemists,” yet there was a single undistinguished title. This approach promoted interaction. The Hay Grades for those in management started higher and ended considerably higher, but there was significant overlap with the bench-chemist levels. Thus it was not unusual for a supervisor or manager to have one or more scientists reporting to him (there were no women in management at this time) who were at higher pay levels than he was. There was one reported instance where the supervisor never got to pass pay raises to the “bench chemist” because management didn't want to make him feel bad; the next-level manager who did pass on the pay notification said, “They didn’t care how I felt.” Titles explicitly tied to salary level were instituted in May 1993, but the openness remains today, as does the situation of managers managing higher-level scientists. At the beginning of CRD, “technicians” were usually high-school educated and often had military service. 
They were clearly just extra hands for the bench chemists, who were all PhDs, and the bench chemists were expected to spend most of their time at the bench. It was virtually impossible for a technician to progress in CRD, but they could at plant sites and would sometimes move for the opportunity. Starting in the early 1990s, mostly as a result of the growth of the pharmaceutical and life-science efforts, technicians with bachelor's degrees and, later, master's degrees became the norm. There are even some technicians holding PhDs from foreign universities. Nonetheless, it remains difficult for a technician to break into the bench-chemist ranks, and they usually transfer to business units in search of more opportunity. Many of the PhDs who came to CRD transferred to business units. From the 1980s to the early 1990s, management tried to move all PhDs to a business unit within their first five years. The PhDs had spent their entire lives in an academic environment, so they knew nothing else, but it was recognized that at some point some of them would realize that working at the bench was not what they wanted to do for their entire career. The issue was that they were too senior and naive to move into entry-level positions in businesses, and their competition was similarly aged BS engineers who had about five years of experience keeping a plant running. Of those who took the opportunity, about half returned to CR&D. Of those who returned, about half left again. The relatively high turnover provided more opportunity for CRD to hire outstanding new PhDs. Transfers to business units became less common in the 1990s, and the average age of CRD personnel rose considerably as a result. With baby-boomers starting to retire, there is more recruiting and a noticeable rejuvenation of the staff. Responsibility for the technical direction of research has shifted to the chemists as they carry out short-term projects in support of the business units. PhDs who get MBAs are now more common. Unlike the early years, all management has had business-unit experience, and many were hired into business units, coming into CRD later in their careers. These managers are often far more administrative in their approach, not having the strong technical backgrounds required to keep up with their technical employees. Some managers have come to rely upon their senior technical staff, but there is no clear guideline on the role that these senior scientists can or should play in managing the programs and careers of the younger scientists. In late 2015, the name of the organization was changed to DuPont Science and Innovation, portending the major layoff on January 4, 2016, which marked the end of the organization as a major force in research. Combined, the Molecular Sciences and Engineering and Materials Science and Engineering portions of CR&D went from 330 employees to 34 in the new Science and Innovation organization. Organofluorine chemistry On April 6, 1938, Roy Plunkett at DuPont's Jackson Laboratory in New Jersey was working with gases related to DuPont's Freon refrigerants when he and his associates discovered that a sample of gaseous tetrafluoroethylene had polymerized spontaneously into a white, waxy solid. The polymer was polytetrafluoroethylene (PTFE), commercialized by DuPont as Teflon in 1945. Because DuPont was a basic producer of a variety of fluorinated materials, it was logical that organofluorine chemistry became important to DuPont. 
The discovery that tetrafluoroethylene would cyclize with a wide variety of compounds opened up routes to a range of organofluorine compounds. The hazards and difficulties of handling highly reactive and corrosive fluorinating reagents could be accommodated by DuPont's emphasis on safety, and DuPont's association with the Manhattan Project provided many chemists and engineers with the background necessary to carry out the work. Availability of the Pressure Research Lab at the Experimental Station provided the necessary protection for most, but not all, of those reactions that went awry. Notable scientists included William Middleton, David England, Carl Krespan, William Sheppard, Owen Webster, Bruce Smart, Malli Rao, Robert Wheland, and Andrew Feiring, all of whom filed many patents for DuPont. Sheppard wrote one of the important early books on the subject. Smart's book followed. Smart's comments in Chemical Reviews in 1996, “Scientific and commercial interests in fluorine chemistry burgeoned after 1980, largely fueled by the need to replace industrial chlorofluorocarbons and the rapidly growing practical opportunities for organofluorine compounds in crop protection, medicine and diverse materials applications. Although fluorine is much less abstruse now than when I entered the field a generation ago, it remains a specialized topic and most chemists are unfamiliar, or at least uncomfortable, with the synthesis and behavior of organofluorine compounds,” remain true today. CRD undertook a program on alternatives to chlorofluorocarbons in refrigerants in the late 1970s, after the first warnings of damage to stratospheric ozone were published. The Catalysis Center of CRD, under the leadership of Leo Manzer, was quick to respond with new technology to produce alternative hydrochlorofluorocarbons (HCFCs) that were commercialized as DuPont's Suva refrigerants. Cyanocarbon chemistry During the 1950s and 1960s, CRD developed a program under the direction of Theodore Cairns to synthesize long-chain cyanocarbons analogous to long-chain fluorocarbons like Teflon. The work culminated in a series of twelve papers in the Journal of the American Chemical Society in 1958. Several authors of those papers rose to prominent positions at DuPont, including Richard E. Benson (Associate Director, CRD), Theodore L. Cairns (Research Director, CRD), Richard E. Heckert (CEO of DuPont), William D. Phillips (Associate Director, CRD), Howard E. Simmons (Research Director and VP, CRD), and Susan A. Vladuchick (Plant Manager). This trend indicates the importance of technical qualification for promotion in the company at that time. The publication stimulated other researchers to investigate these compounds. Prospective applications included dyes, pharmaceuticals, pesticides, organic magnets, and incorporation in new types of polymers. No commercial applications resulted from this extensive research effort. Partly for this work, Cairns was awarded medals for Creative Work in Synthetic Organic Chemistry by the American Chemical Society and the Synthetic Organic Award of the Chemical Manufacturers Association. Another line of chemistry developed around Owen Webster's synthesis of diiminosuccinonitrile (DISN), which could be converted to diaminomaleonitrile (DAMN), leading to another series of patents and papers. Simmons used sodium maleonitriledithiolate for the preparation of many novel substances, including tetracyanothiophene, tetracyanopyrrole, and pentacyanocyclopentadiene. 
Metal oxides Arthur Sleight led a team focused on perovskites, such as the K-Bi-Pb-O system, that laid the groundwork for subsequent breakthroughs in high-temperature superconductors. In the solution-phase chemistry of oxides, the work of Walter Knoth on organic-soluble polyoxoanions led to the development of what is now a large area with numerous applications in oxidation catalysis. Dynamic NMR spectroscopy Indicative of the interplay between applications and fundamental science were the many studies on stereodynamics conducted at CRD by Jesson, Meakin, and Muetterties. One of the early studies focused on the non-rigidity of SF4, a reagent relevant to the preparation of fluorocarbons. Subsequent studies led to the discovery of the first stereochemically non-rigid octahedral complexes, of the type FeH2(PR3)4. Polymer science Owen Webster discovered group-transfer polymerization (GTP), the first new polymerization process developed since living anionic polymerization. The major aspects of the mechanism of the reaction were determined, and the process was quickly converted to commercial application in automotive finishes and ink-jet inks. The basic process of group transfer also has application to general organic synthesis, including natural products. At about the same time, Andrew Janowicz developed a useful version of cobalt-catalyzed chain transfer for controlling the molecular weight of free-radical polymerizations. The technology was further developed by Alexei Gridnev and Steven Ittel. It, too, was quickly commercialized, and a fundamental understanding of the process developed over a longer period of time. Rudolph Pariser was the director of Advanced Materials Science and Engineering at the time of these advances. In 1995, Maurice Brookhart, professor at the University of North Carolina and a DuPont CRD consultant, invented a new generation of post-metallocene catalysts for olefin coordination polymerization based upon late transition metals, with his postdoctoral student Lynda Johnson, who later joined CRD. The technology, DuPont's Versipol olefin polymerization technology, was developed by a substantial team of CRD scientists over the next ten years. Organometallic chemistry CRD developed a major interest in inorganic and organometallic chemistry. Earl Muetterties established a program aimed at fundamental borane chemistry. Walter Knoth discovered the first polyhedral borane anion, B10H10(2−), and also discovered that the borane anions displayed a substitution chemistry similar to that of aromatic hydrocarbons. Norman Miller discovered the B12H12(2−) anion in an effort to find a new route to B10H10(2−). George Parshall joined CRD in 1954. His industrial sabbatical at Imperial College London with Geoffrey Wilkinson in 1960–61 introduced him to organometallic chemistry. Muetterties left DuPont to join the faculty of Cornell in 1973. After Muetterties and Parshall, the organometallic chemistry group was led by Steven Ittel and then Henry Bryndza before it was dispersed throughout a number of groups in CRD. Parshall and Ittel coauthored a book on “Homogeneous Catalysis” that has become the standard reference on the subject. The seminal contributions of Richard Cramer and Fred Tebbe are acknowledged by their named compounds, “Cramer’s dimer,” Rh2Cl2(C2H4)4, and the “Tebbe reagent.” Tebbe had an influence on his lab partner, Richard Schrock, who initiated a program on M=C chemistry at DuPont and continued it when he moved to MIT. 
The chemistry forms the basis for olefin metathesis, and Schrock ultimately shared the Nobel Prize with Robert Grubbs, a CRD consultant, for the metathesis work. Anthony Arduengo’s persistent carbenes opened up a new area of chemistry, and they have proven to be important ligands in the metathesis process. There was a vigorous effort on the activation of C–H bonds, with contributions by Parshall, Thomas Herskovitz, Ittel, and David Thorn. Chad Tolman developed his “ligand cone angle” concept, which grew into the widely accepted treatment of the electronic and steric effects of ligands on inorganic and organometallic complexes. Organometallic chemistry in CRD has further included R. Thomas Baker's heterobinuclear complexes, Patricia L. Watson's organolanthanides, William A. Nugent's metal–ligand multiple bonds, Jeffery Thompson's and Mani Subramanyam's development of technetium complexes for radiopharmaceuticals, and Bob Burch's and Karin Karel's fluoro-organometallic chemistry. The major outlet for organometallic chemistry is homogeneous catalysis. DuPont developed a major technology based upon the nickel-catalyzed addition of two molecules of hydrogen cyanide to butadiene, giving adiponitrile, a nylon intermediate, initially through the work of William C. Drinkard. The mechanistic work to provide an understanding of the technology was done in CRD and led to a large program on next-generation technology before the business was sold to Koch Industries. Other applications of homogeneous catalysis studied in CRD include ethylene polymerization, cyclohexane oxidation to adipic acid, and butadiene carbonylation to nylon intermediates. Approaches to catalyst systems have included homogeneous organometallic catalysts, heterobinuclear catalysts, polyoxometalates, enzymes, catalytic membrane reactors, and supported organometallics. Photochemistry and physics David M. McQueen, one of the early directors of CRD, was a physical chemist from the University of Wisconsin–Madison. His research on photochemistry and photography resulted in thirty-five patents. It was his background that got CRD started in photochemistry and photophysics. David Eaton later headed a strong team involved in photopolymerization color proofing for the printing industry. There was a strong program in inorganic non-linear optical materials that resulted in optical frequency doubling for the “green lasers” mentioned above. This program was extended into organic materials with NLO properties. There was also a strong effort on materials for the display industry and methods for preparing display devices. These included printable electronics, thermal-transfer methods for color filters, carbon nanotubes for field-emission displays, and OLED materials and devices. A substantial effort was made on next-generation photoresists for the semiconductor industry, containing hydrocarbon and fluorocarbon monomers, to replace the 193 nm wavelength with 157 nm for better resolution. Though most of the requirements were achieved, the need for that shorter-wavelength node was eliminated by the introduction of immersion lithography, and new fluids for immersion lithography continue to be of substantial interest. Development of phase-shift masks was commercialized. Biological sciences One area always deemed important for diversification of CRD's programs was the biological sciences. Charles Stine had promoted biochemistry as a field of research for Du Pont, and the Stine Laboratories are named in his honor as a result. 
In the early 1950s, CRD began a program to investigate chemicals for biological applications. Charles Todd prepared substituted ureas as potential antibacterial agents, which, when screened, proved to be effective herbicides. These led to DuPont's very successful and very selective sulfonylurea herbicides. CRD's program included agricultural and veterinary chemicals and bacteriological and microbiological studies. The culmination of this work was DuPont's purchase of Pioneer Hi-Bred Seeds and its integration into DuPont's agrichemical enterprise. In the mid-1950s, CRD began work on the chemistry of nitrogen fixation in plants, a study that would develop into a major effort over the next decade. In 1963, Ralph Hardy joined CRD and brought Du Pont's nitrogen-fixation research to international prominence with more than a hundred papers on the subject. Chemical Week called him "one of the nation's top achievers in the dual role of scientist and scientific manager," though such managers remained common in CRD through the 1960s and 70s. Fermentation microbiology and selective genetic modification became important to the CRD development of a biological route to 1,3-propylene glycol, a new monomer for making polyester. The availability of this new monomer led to the development and commercialization of Sorona, a premium polyester. Substantial success was also achieved in the synthesis of unnatural peptides and proteins to accomplish specific functions and in the prediction of their tertiary structures. Advances in DNA-sequencing technology based on the synthesis of novel fluorescent labels led to Qualicon, a DuPont venture that identifies bacteria by examination of their DNA using PCR. This technology has led to significant improvements in the safety of the food supply chain in the United States and around the world. General references David A. Hounshell and John Kenley Smith. Science and Corporate Strategy: DuPont R&D, 1902–1980. New York: Cambridge University Press, 1988. J. J. Bohning. Howard E. Simmons, Jr., Oral History. Philadelphia: Chemical Heritage Foundation, 1993. R. C. Ferguson. William D. Phillips and nuclear magnetic resonance at DuPont. In Encyclopedia of Nuclear Magnetic Resonance, Vol. 1, eds. D. M. Grant and R. K. Harris, pp. 309–13. John Wiley & Sons, 1996. R. G. Bergman, G. W. Parshall, and K. N. Raymond. Earl L. Muetterties, 1927–1984. In Biographical Memoirs, vol. 63, pp. 383–93. Washington, D.C.: National Academy Press, 1994. B. C. McKusick and Theodore L. Cairns. Cyanocarbons. In Kirk-Othmer Encyclopedia of Chemical Technology, 2nd edition, vol. 6, pp. 625–33 (1965). References Central Research Chemical companies of the United States Chemical research institutes Research institutes in Delaware Organizations based in Delaware 1957 establishments in the United States Education in Geneva Research institutes in Switzerland Research institutes in Hyderabad, India Education in Seoul Research institutes in South Korea Education in Shanghai Research institutes in China Research institutes established in 1957 Scientific organizations established in 1957 American companies established in 1957 1957 establishments in Delaware
DuPont Central Research
[ "Chemistry" ]
4,859
[ "Chemical research institutes" ]
9,348,405
https://en.wikipedia.org/wiki/Technological%20innovation
Technological innovation is an extended concept of innovation. While innovation is a rather well-defined concept, it has a broad meaning to many people and numerous understandings in the academic and business worlds. Innovation refers to the development of new services and products for the marketplace or the public that fulfill unaddressed needs or solve problems that were not solved in the past. Technological innovation, however, focuses on the technological aspects of a product or service rather than covering the entire business model of the organization. It is important to clarify that innovation is not driven only by technology; it can also be driven by various other factors, including market demand, social and environmental factors, and process improvements. Definition Technological innovation is the process whereby an organization (or a group of people working outside a structured organization) embarks on a journey in which the importance of technology as a source of innovation has been identified as a critical success factor for increased market competitiveness. The wording "technological innovation" is preferred to "technology innovation". "Technology innovation" gives a sense of working on technology for the sake of technology. "Technological innovation" better reflects the business consideration of improving business value by working on the technological aspects of a product or service. These advancements yield improvements for the businesses that adopt the new technology. Moreover, in the vast majority of products and services there is not one unique technology at the heart of the system; it is the combination, integration, and interaction of different technologies that make the product or service successful. Process If the process of technological innovation is formalized (typically within an organization: a company, a public body, a think tank, a university, etc.), it can be referred to as technological innovation management (or Technology Innovation Management, TIM). The "management" aspect refers to the inputs, outputs, and constraints a "manager" or team of "managers" is responsible for in governing the process of technological innovation in a way that aligns with company strategy. In a context where technological innovation is not to be guided along known paths within the organization, the wording and concept of technological innovation leadership is preferred. On many occasions, especially in start-ups and new ventures, technological innovation is performed in an unknown context: the boundaries and constraints of the technology at work are not precisely known. Hence it requires leaders, not managers, to give the vision and coach the team to explore the unknown parts of the technology. Innovation in businesses Technological innovation affects the stock prices of companies. This can be due to new inventions in technology which make it easier for work to be done in the market. Investors see bigger returns on investments in companies with new technology due to innovations that have changed the market, although companies that can’t keep up with the pace of change and adapt to disruptive innovation often find themselves floundering. As new innovations add to a company's value, profits increase, and the company's stock price rises in turn. The stock market is a way for companies to raise money for production or operations by selling shares of stock in the company. 
With newly raised money, companies can invest in new advancements which will bring more profits in the future. Although companies often adopt technological innovations, some decide not to, which leads to major gaps between the new "normal" and the "old fashioned". Innovations benefit companies, but those who do not adapt to them become outpaced. Companies that do not respond to market changes arising from innovation tend to miss out on opportunities, which can end up ruining the company. Corollary Technological innovation: is a continuous process, within an internal or external venture, built out to create value with innovation; starts with the ideation process and ends with the commercialization of a viable product or service, in response to a proven market need; is a guide for the venture management to decide what technology directions to take, based on portfolio management and execution monitoring; is driven by entrepreneurial/intrapreneurial spirit, supported by internal/external funding; fosters collaboration, perspectives, and resources to fuel creativity, problem-solving, and breakthrough discoveries; allows businesses to further invest in themselves, which supports growth; is not solely driven by technological advancements but also by the interplay of economic, political, environmental, and ethical considerations; entails changes in social and political life as a result. Pope Francis, in his 2015 encyclical letter Laudato si', poses a question to which his answer is that laws must regulate innovation and social relationships at national and local levels: "individual states can no longer ignore their responsibility for planning, coordination, oversight and enforcement within their respective borders. ... One authoritative source of oversight and coordination is the law, which lays down rules for admissible conduct in the light of the common good." References Innovation Digital technology Technology in society Social information processing Sociology of technology Information Age Information society
Technological innovation
[ "Technology" ]
1,001
[ "Information and communications technology", "Information Age", "Information society", "Digital technology", "Computing and society", "nan" ]
11,821,707
https://en.wikipedia.org/wiki/Timeline%20of%20intelligent%20design
This timeline of intelligent design outlines the major events in the development of intelligent design as presented and promoted by the intelligent design movement. Creationism 1920s: Fundamentalist–Modernist Controversy – in an upsurge of fundamentalist religious fervor, anti-evolutionary sentiment stopped U.S. public schools from teaching evolution, through state laws such as Tennessee's 1925 Butler Act, and by getting evolution removed from biology textbooks nationwide. 1959 The National Defense Education Act, responding to fears of backwardness raised by the 1957 Sputnik launch, promoted science education, and Biological Sciences Curriculum Study textbooks teaching evolution were used in almost half of high schools, though the prohibitions were still in place and a 1961 attempt to repeal the Butler Act failed. 1961 Publication of The Genesis Flood. 1965 The term "scientific creationism" gained currency. 1968 Michael Polanyi published an article in Science titled "Life's Irreducible Structure" on comparisons between living organisms and machines. 1968 Epperson v. Arkansas ruled against state laws prohibiting the teaching of evolution, concluding that they violate the Establishment Clause of the First Amendment to the United States Constitution, which prohibits state aid to religion. States may not alter the curriculum to conform to the beliefs of particular religious sects. 1975 Daniel v. Waters ruled that a state law requiring biology textbooks discussing "origins or creation of man and his world" to give equal treatment to creation as per Genesis was unconstitutional; creationists changed to "creation science", omitting explicit biblical references. 1977 Hendren v. Campbell ruled that use of the 1970 Creation Research Society textbook Biology: A Search For Order In Complexity, though claimed to present a balanced view of evolution and Biblical creation, promotes a specific sectarian religious view and is unconstitutional in public schools. "We may note that with each new decision of the courts religious proponents have attempted to modify or tailor their approach to active lobbying in state legislatures and agencies. Softening positions and amending language, these groups have, time and again, forced the courts to reassert and redefine the prohibitions of the First Amendment. Despite new and continued attempts by such groups, however, the courts are bound to determine, if possible, the purpose of the approach." Creation science school textbooks and the Foundation for Thought and Ethics 1980 The Foundation for Thought and Ethics (FTE) was formed by ordained minister Jon Buell as a "Christian think-tank", its first activity to be the editing of a book "showing the scientific evidence for creation". 1981 FTE filed an IRS declaration that it had been "established to introduce biblical perspective into the mainstream of America's humanistic society, confronting the secular thought of modern man with the truth of God's Word." It said their "first project is a rigorous scientific critique of the theory of prebiotic evolution. Next, we will develop a two-model high school biology textbook that will fairly and impartially view the scientific evidences for creation side by side with evolution. (In this case Scripture or even religious doctrine would violate the separation of church and state.)" The first became The Mystery of Life's Origin (published in 1984); the second eventually became Of Pandas and People. 
1981 the state of Arkansas passed a law, Act 590, mandating that "creation science" be given equal time in public schools with evolution, and defining creation science as positing the "creation of the universe, energy, and life from nothing," as well as explaining the earth's geology "by occurrence of a worldwide flood." The McLean v. Arkansas ruling, issued on January 5, 1982, was that the Act was unconstitutional: the creationists' methods were not scientific but took the literal wording of the Book of Genesis and attempted to find scientific support for it. The clear, specific definition of science used to rule that "creation science" is religion, not science, had a powerful influence on subsequent rulings. 1982 Louisiana's "Balanced Treatment for Creation-Science and Evolution-Science in Public School Instruction" Act (Creationism Act) forbids the teaching of the theory of evolution in public schools unless accompanied by instruction in "creation science." Thus two states had passed these "equal time" laws. 1983 Percival Davis and Dean H. Kenyon produce Creation Biology Textbook Supplements, an early draft of the work later retitled Of Pandas and People. Charles Thaxton was the project chairman and academic editor. The ID movement begins 1984 the book The Mystery of Life's Origin by Charles Thaxton, Walter Bradley and Roger Olsen, with a foreword by Kenyon, argued that "it is fundamentally implausible that unassisted matter and energy organized themselves into living systems". It said the first cell would have been too complex to form through natural unguided processes, so there must have been intervention by an intelligent agency, possibly an intelligent alien. Barbara Forrest describes this as the beginning of the ID movement. 1984 Kenyon's affidavit for what becomes Edwards v. Aguillard gives the definitions "Creation-science means origin through abrupt appearance in complex form, and includes biological creation, biochemical creation (or chemical creation), and cosmic creation." and "Creation-science does not include as essential parts the concepts of catastrophism, a world-wide flood, a recent inception of the earth or life, from nothingness (ex nihilo), the concept of kinds, or any concepts from Genesis or other religious texts." Statements included "The creationist scientific conclusion is that empirical data currently in hand demand the inference that the first living organisms were created." and "The origin of printed texts, manufactured devices, and biomolecular systems require intelligent design and engineering knowhow (Wilder-Smith 1970). In each case the characteristic order of the system must be impressed on matter 'from the outside.'" It claims creation and evolution are the only scientific explanations of life, what Forrest calls "the dual model". This is later characterised by the DI's Witt as "There Kenyon described a science open to intelligent causes but one free of religious presuppositions or assertions about the identity of the designer. He described how he did origins science, how a science open to intelligent causes ought to be done." Witt claims that this is a different creation science from Young Earth Creationism (YEC). 1985 the District Court in Aguillard v. Treen held that there can be no valid secular reason for prohibiting the teaching of evolution, a theory historically opposed by some religious denominations. 
The court further concluded that "the teaching of 'creation-science' and 'creationism,' as contemplated by the statute, involves teaching 'tailored to the principles' of a particular religious sect or group of sects." (citing Epperson v. Arkansas (1968)). The District Court therefore held that the Creationism Act violated the Establishment Clause either because it prohibited the teaching of evolution or because it required the teaching of creation science with the purpose of advancing a particular religious doctrine. The Court of Appeals affirmed. 1985 Michael Denton's book Evolution: A Theory in Crisis. Prominent figures in ID credit his critical examination of Darwinism with their change of view (Behe, Johnson). 1986 FTE copyrighted draft entitled Biology and Creation by Kenyon & Davis (Charles Thaxton was academic editor, though from when is not clear). Autumn 1986 FTE, under the name of "Austin Analytic Consulting", carried out a survey of 300 high-school science teachers to show potential mainline publishers that a market existed for a supplementary textbook to "balance" evolution teaching in class. 1987 FTE copyrighted draft entitled Biology and Origins by Kenyon & Davis. 1987 FTE's founder Jon Buell sought a publisher for the book, telling a Boston firm "A new independent scientific poll (report enclosed) shows almost half of the nation's biology teachers include some creation in their view of biological origins. Many more who don't still believe it should be included in science curriculum. ... The U.S. Fifth Circuit Court of Appeals says that teachers are free to teach scientific information that happens to support creation if they wish. In ruling on the so-called Louisiana "Balanced Treatment Act" this Spring the U.S. Supreme Court may not affirm state-mandated teaching of creation, but they will almost certainly let stand the above academic freedom for teachers." "The enclosed projections showing revenues of Over 6.5 million in five years are based upon modest expectations for the market provided the U.S. Supreme Court does not uphold the Louisiana "Balanced Act". If, by chance it should uphold it, then you can throw out these projections, the nationwide market would be explosive!" "the book will not be subject to the major criticism of creation, that the supernatural lies outside of science, because its central statement is that scientific evidence points to an intelligent cause, but that science is silent as to whether that intelligence is within or beyond the material universe. So the book is not appealing to the supernatural." Edwards v. Aguillard ruling, Pandas August 1986 an Amicus Curiae brief by scientific organisations and 72 Nobel Prize winning scientists set out the argument that the Louisiana Act's definition of "creation-science" was religious dogma, including creation ex nihilo, created kinds of life, worldwide deluge and young earth; the legislation described conventional "creation-science" and not the "abrupt appearance" construct presented to the court, which was ill-defined and "a post hoc invention, created for the purpose of defending this unconstitutional Act." June 19, 1987 the Supreme Court ruled in Edwards v. Aguillard that the Louisiana Creationism Act violated the Establishment Clause of the First Amendment: it lacked a clear secular purpose, did not protect academic freedom as claimed, and instead of encouraging "the teaching of all scientific theories about human origins ... 
[had the] purpose of discrediting evolution by counterbalancing its teaching at every turn with the teaching of creationism. ... endorses religion by advancing the religious belief that a supernatural being created humankind ... [Its] primary purpose was to change the public school science curriculum to provide persuasive advantage to a particular religious doctrine that rejects the factual basis of evolution in its entirety." However, the statement that "teaching a variety of scientific theories about the origins of humankind to school children might be validly done with the clear secular intent of enhancing the effectiveness of science instruction." left a loophole for ID. 1987 FTE copyrighted draft retitled Of Pandas and People: The Central Questions of Biological Origins; a reference to the Edwards decision was added in a footnote, and as in earlier drafts it had the definition "Creation means that the various forms of life began abruptly through the agency of an intelligent creator with their distinctive features already intact. Fish with fins and scales, birds with feathers, beaks, and wings, etc." Creation becomes intelligent design 1987 (according to a 2005 apologia by the DI's Witt) Thaxton's definition of "creation-science" had been overruled at Edwards by being equated to YEC. As the academic editor for FTE, serving as the editor for Pandas, Thaxton needed a new term and found it in a phrase he'd picked up from a NASA scientist – intelligent design. He thought "That's just what I need, it's a good engineering term ... it seemed to jibe. When I would go to meetings, I noticed it was a phrase that would come up from time to time. And I went back through my old copies of Science magazine and found the term used occasionally." Soon the term intelligent design was incorporated into the language of the book. 1987 Shortly after the Supreme Court decision, in a new draft of Pandas, approximately 150 uses of the root word "creation", such as "creationism" and "creationist", were systematically changed to refer to intelligent design, with "creationists" being changed to "design proponents" or, in one instance, "cdesign proponentsists". Accordingly, in the definition "creation" was changed to "intelligent design", so that it now read "Intelligent design means that various forms of life began abruptly through an intelligent agency, with their distinctive features already intact. Fish with fins and scales, birds with feathers, beaks, wings, etc." This wording was essentially unchanged when published in 1989 and in the 1993 second edition. Johnson vs. evolution 1987–1988 academic year, Phillip E. Johnson had a year's sabbatical as a visiting professor at University College London. 1987 He read The Blind Watchmaker by evolutionary biologist Richard Dawkins and Evolution: A Theory in Crisis by the creationist Michael Denton, then Isaac Asimov's Guide to Science, and found purpose in life – he read the amicus briefs in Edwards and concluded that the definition of science was loaded against creationism. Johnson decided that the creationists had lost that case because of their unfair exclusion from science by the scientific community's naturalistic definition of science, and that creationists must consequently redefine science to restore the supernatural. Autumn term 1987 Johnson met Stephen C. Meyer, who was working on a doctorate in philosophy at the University of Cambridge, and writing a thesis that analyzed methodological issues in origins sciences. 
June 23–26, 1988, Charles Thaxton [editor of Of Pandas and People] held a conference titled Sources of Information Content in DNA in Tacoma, Washington, and presented the conference with a paper titled "In Pursuit of Intelligent Causes: Some Historical Background", arguing "that intelligent causes are a viable option today for science". Stephen C. Meyer was at the conference, and later recalled that "The term intelligent design came up in 1988 at a conference in Tacoma, Wash., called Sources of Information Content in DNA ... Charles Thaxton referred to a theory that the presence of DNA in a living cell is evidence of a designing intelligence. We weren't political; we were thinking about molecular biology and information theory. This wasn't stealth creationism." Meyer brought a copy of Johnson's draft book, and Paul A. Nelson remembered "Stephen Meyer, at the time a graduate student at Cambridge University, attended Thaxton's conference, bringing with him a manuscript from (as Meyer put it with a grin) 'this wild lawyer I met in the UK.' I can still recall my excitement at the conference when I read through the manuscript, which later became Darwin on Trial." The conference also gained the attention of Denton and Plantinga. There was now a question of finding a suitable umbrella term for the emerging movement: Thaxton had avoided the word "design" as this aroused opposition in biology; he reviewed historic wording such as "creative intelligence". August 1988 Johnson's draft of "Position Paper on Darwinism" (issued to Campion Center participants a few days before the revised summary of 30 November 1989) reviews the sequence of cases and predicts "We shall hear more about 'abrupt appearance,' whether it is called by that name or another one, as the creationists recover from the collapse of their legislative campaign and turn their energies back to the activities that historically have produced their biggest successes. Those activities have been aimed not at legislatures but at administrative agencies—local ones especially." It notes "Because the term 'creation-science' has been sullied, most recently in Edwards v. Aguillard, the creationists' new pseudoscience will carry a new name, or perhaps several new names. Its content will be fully sterilized: it will avoid explicit supernaturalism, and it will speak not of any god but of a nebulous 'intelligence' or 'intelligent cause.'" It outlines work already done by the Foundation for Thought and Ethics, including The Mystery of Life's Origin: "The Foundation recently has been seeking a publisher for another manuscript, Biology and Origin ... [which it wants] to become a school book and to carry its sterilized fundamentalism directly into public-school science classrooms." It had sponsored an opinion poll in 1986; "Most biology teachers, the Foundation says, think that creationist doctrines should be brought into science classes to countervail evolutionary views, and most would welcome a supplemental text that would help them to present creationist doctrines in their own classrooms!" December 1988 Thaxton decided to use the label "intelligent design" instead of creationism for his new movement (a term edited into Pandas drafts in 1987). December 1988, Thaxton lectured at Princeton and, as an overhead visual, used a July news article clipping headlined "Space Face". It discussed speculation about the 1976 photograph of a sphinxlike "face on Mars" taken by the Viking 1 orbiter, and had a comment from a scientist about deciphering "intelligent design" in nature. 
The phrase worked well in Thaxton's lecture. Buell had a publication deadline of 1989 for Pandas, and Thaxton had to choose a term for its use of design theory: "Finally, the day came when we were going to have to decide". 1988–1990 Meyer introduced Johnson to Denton and Paul Nelson: "I met Steve Meyer, who was in England at the time. Through Steve, I got to know the others, who were developing what became the Intelligent Design movement. Michael Denton stayed in my home for three days while he was in the United States. Meyer introduced me to Paul Nelson, and so on. One by one, these people came together." Of Pandas and People published 1989 a survey found that more than 30% of a national sample of high school biology teachers wanted to teach "creation science". August 1989 Of Pandas and People was published, printed by "Haughton Publishing Co." (Horticultural Printers, Inc. of Dallas, with no other books in print). It included all of the basic arguments of intelligent design in essentially modern form (except for Behe's irreducible complexity argument, which appeared in the 1993 edition). In 2004, Jon Buell of the FTE stated this was "the first place where the phrase 'intelligent design' appeared in its present use." Campaign to get intelligent design into schools 1989 Haughton and the FTE campaigned to get Pandas into schools across the U.S. – mobilizing local Christian conservative groups to push school boards and individual teachers to adopt the book and also to get themselves elected to school boards and local educational committees. They claimed that intelligent design was "accepted science, a view that is held by many highly qualified scientists". September 12, 1989, at the Alabama hearings on approved school textbooks, Pandas was on the list but not in the libraries for public viewing as required. An Eagle Forum chapter director praised Pandas as an exemplary scientific text presenting an alternative to modern evolutionary theory based on "intelligent design". With NCSE assistance, written criticism was sent to committee members and on October 2, a majority of the State Textbook Committee voted against Pandas, partly because of its thinly disguised religious underpinnings. This decision was subject to adoption by the State Board of Education in December. November 1989, Haughton advertised Pandas in the monthly journal of the National Science Teachers Association (NSTA) and other journals, claiming it had been "prepared with academic integrity" and had been "Authored by mainstream, published science educators", and promoted it at teachers' association conventions. November 1989, Pandas was promoted by members of religiously oriented citizen pressure groups like Concerned Women for America and Citizens for Excellence in Education. It was under consideration for state adoption in both Idaho and Alabama, and to be submitted in Texas and other states in the coming months. With grass-roots promotion it also had a good chance of showing up in local districts of non-adoption states. December 1989 a church campaign in Alabama gathered over 11,800 signatures on a petition to add Pandas to the list of approved school textbooks, after weeks of urging from a Christian radio station in Tuscaloosa. December 14, 1989, at the Alabama State Board of Education meeting to consider adoption of the textbook list, Haughton Publishing made an elaborate presentation. 
A Birmingham businessman presented petitions with over 11,800 signatures urging the board to adopt supplementary materials presenting "Intelligent Design" as an alternative to evolution. The attorney for Haughton, Hare, charged that opponents had falsely painted Pandas as a creationist text, and said that "Intelligent Design" does not compel belief in the supernatural. The Board requested legal advice, and a January hearing was set up just to consider Pandas. January 8, 1990, Buell and Thaxton were amongst speakers for Pandas at the hearing, but the publisher Haughton tried to withdraw and end the hearing on procedural grounds. The meeting continued, but Haughton then threatened to sue the committee members if they rejected the book rather than accepting that it had been withdrawn, as rejection would injure future sales prospects. The committee passed a resolution recognizing its withdrawal. Active promotion by creationists of Pandas for public school use continued throughout the 1990s; after 2000, activity largely died down. Discovery Institute founded, Johnson's views November 30, 1989, Johnson wrote to Campion Center participants that "the August 1988 draft of my paper which was distributed to you only a few days ago is a bit lengthy and dense", and so sent them the latest draft of his "Position paper on Darwinism" as an "informal summary of my views" (from the book he was working on), which stated "The important issue is not the relationship of science and creationism, but the relationship of science and materialist philosophy." He wanted school textbooks to acknowledge alleged problems with evolution. "More importantly, the universities should be opened up to genuine intellectual inquiry into the fundamental assumptions of Darwinism and scientific materialism. The possibility that Darwinism is false, and that no replacement theory is currently available." 1990 "At that time there was a little funding to pay for people to come to Seattle occasionally for a conference. So they had me speak at one in 1989 to look me over. I soon became the leader of the group." (Johnson, November 2000) Witham says that in 1990 "the intelligent design fraternity held a meeting to scrutinize the California lawyer"; Johnson is quoted as saying [later] "It's a question of looking someone over ... I very much approve of that." Yerxa writes that in 1990, Meyer invited Johnson to Portland, Oregon, "and introduced him to his associates, the nucleus of the future Discovery Institute." 1990 Haughton admitted sales of Pandas so far had been single-copy. Instead of attempts to get state textbook approval, the FTE was now directing efforts "outside the schools" to the grass-roots level, targeting local school boards, teachers' groups, and parents. May 1990 an FTE letter by Jon Buell announced a new sales campaign as they'd found it best to approach the local school system through the biology teacher. It included an 18-minute video with the endorsements of a number of scientists, educators, and an authority on First Amendment law, and a Suggested Plan of Action for volunteers suggesting: finding a sympathetic biology teacher (perhaps a fellow church member) who then convinces the curriculum committee and/or administration to approve use of Pandas without need for funding, then a local church purchases the books and donates them to the school. 1990 Discovery Institute (DI) is founded by Bruce Chapman, but lacks a defining issue. 
October 1990 Johnson's booklet Evolution as Dogma: The Establishment of Naturalism was published under the auspices of the FTE by Haughton Publishing. In this, Johnson said that "Darwinism" is "a theory of naturalistic evolution, which means that it absolutely rules out any miraculous or supernatural intervention at any point. Everything is conclusively presumed to have happened through purely material mechanisms that are in principle accessible to scientific investigation, whether they have yet been discovered or not." He stated that "Victory in the creation-evolution dispute therefore belongs to the party with the cultural authority to establish the ground rules that govern the discourse. If creation is admitted as a serious possibility, Darwinism cannot win, and if it is excluded a priori Darwinism cannot lose." He cited the logic of the National Academy of Sciences, as accepted by the Supreme Court at Edwards, that "creation-science" is not science because it does not rely upon naturalistic explanations, but holds "that the creation of the universe, the earth, living things, and man was accomplished through supernatural means inaccessible to human understanding". November 1990 First Things published critiques of Johnson's Evolution as Dogma article, and his own response "A Reply to My Critics". 1991 Professor Phillip A. Bishop at the University of Alabama was told to stop proselytizing students in class and teaching "intelligent design theory" in an optional class. In Bishop v. Aronov he sued the university on free speech and academic freedom grounds, and won at the District Court, but the Appeals Court found that the university had a right to set the curriculum. Johnson's first book, Darwin on Trial June 3, 1991 Johnson's first book, Darwin on Trial, was published by Regnery Gateway (Intervarsity edition 1992) and described a creationist in the broadest sense as "simply a person who believes that the world (and especially mankind) was designed, and exists for a purpose." Johnson claimed that Darwinism inherently and explicitly denies such a belief and therefore constitutes a naturalistic philosophy intrinsically opposed to religion. It does not use the term "intelligent design" for Johnson's ideas, though it does mention at one point that "the presence of intelligent design in the cosmos is so obvious that even an atheist like Pagels cannot help noticing it ...", and in the citations list includes Of Pandas and People, saying "This book is 'creationist' only in the sense that it juxtaposes a paradigm of 'intelligent design' with the dominant paradigm of (naturalistic) evolution, and makes the case for the former. It does not rely on the authority of the Bible." 1991: Johnson has said of this period that "By the time Darwin on Trial was published, I had pretty well worked out the strategy I thought would, in time, win this campaign, and I've been able to convince most of the young-earth creationists and the old-earth creationists that this is the right way to proceed." March 1992, as Johnson recalled, "The movement we now call the Wedge made its public debut at a conference of scientists and philosophers held at Southern Methodist University in March 1992, following the publication of my book Darwin on Trial. 
The conference brought together as speakers some key Wedge figures, particularly Michael Behe, Stephen Meyer, William Dembski, and myself" to debate "Darwinists, headed by Michael Ruse", on the proposition that "Darwinism and neo-Darwinism [have] an a priori commitment to metaphysical naturalism". He writes "Once it becomes clear that the Darwinian theory rests upon a dogmatic philosophy rather than the weight of the evidence, the way will be open for dissenting opinions to get a fair hearing. In a nutshell, that is the Wedge strategy." From 1992 onwards, ID proponents engaged in a schedule of conferences, publication, lectures, mostly at universities, websites, radio and TV appearances, and later blogging and podcasting. Mar–Apr 1992, televangelist James Dobson's newsletter directed his supporters to march down to the school board and demand that Of Pandas and People be used when evolution is taught. July 1992 in Scientific American, Gould reviewed Johnson's book Darwin on Trial, making no mention of ID. 1992 Johnson wrote an anti-naturalistic response, which Scientific American refused to print; Dembski, Behe, Meyer and 36 other anti-evolutionists responded by mass-mailing a copy of it to scientists and biology departments all over the U.S., along with a supporting letter in which they called themselves the "Ad Hoc Origins Committee" and "Scientists Who Question Darwinism". January 1993 Johnson wrote claiming that it was wrong for theists to accept evolution (without mentioning ID): "Their position, which I call theistic naturalism, starts from the premise that God refrains from interference with those parts of reality that natural science has staked out as its own territory. ... the fundamental disagreement is not over the age of the earth or the method of creation; it is over whether we owe our existence to a purposeful Creator or a blind materialistic process". June 1993, the ID movement met again at Pajaro Dunes in California, organized by Johnson, with participants including Scott Minnich, Michael Behe, Stephen C. Meyer, Jonathan Wells and Dean Kenyon (Paul Nelson gives the list), "and this meeting is generally acknowledged as the birth of the Intelligent Design movement"; Behe first presented his ideas about "irreducible complexity". Pandas revised, DI meets ID 1993 the second edition of Of Pandas and People was published. References to "evolution" and "evolutionists" were changed to "Darwinism" and "Darwinists" to make the distinction between "evolution", which can mean "change in living things over time", and "Darwinism", referring to mutation and natural selection. Chapter 6, Biochemical Similarities, was extensively revised by Behe, who added sections on the complex mechanism of blood clotting and on the origin of proteins, introducing Behe's irreducible complexity argument in all but name. Charles Thaxton's A Word to the Teacher at the end of the book was supplanted by Notes to teachers written by M. D. Hartwig and S. C. Meyer. December 1993, Johnson's Darwin on Trial was revised, with minor changes to footnotes, a new section on embryology and an epilogue. December 1993 Bruce Chapman, president and founder of the Discovery Institute, noticed an essay in the Wall Street Journal by Meyer about a dispute when biology lecturer Dean H. Kenyon taught intelligent design creationism in introductory classes. 1994 the "Origins Resource Association" began a campaign to force creationist doctrines including ID into science classes in Livingston Parish, Louisiana; this affects Barbara Forrest, who leads resistance. 
1994 Stephen C. Meyer introduces Bruce Chapman to the idea of an intelligent design approach to re-establishing spiritual values and to getting funding. By 1995 Chapman and George Gilder were negotiating with the Howard Ahmanson family for a grant to set up the CRSC. August 1994 "In a pattern that is becoming familiar all over the country, a newly elected school board ..." planned to purchase thirty copies of Pandas to distribute to science teachers, plus as many additional copies as teachers might request. "Also, if local school control comes to pass, as advocated by Texas' new governor George Bush, we can expect creationism to be proposed again in Plano and many other communities in the state." November 14, 1994, the WSJ discusses Pandas – Phillip Johnson is reported as believing that "... a bit more candor about the nature of the designer might be in order. 'You're playing Hamlet without Hamlet if you don't say something about that,' he says." To Eugenie Scott, it disguised religion as science, which is of questionable honesty; Johnson agreed that a more explicit expression of the motivation of belief was in order, but countered: "The fact is they're working against enormous prejudice here, and enormous bigotry. And they're vying to put it in terms that the courts and science will allow to exist." On December 5 he wrote to the WSJ stating that scientific organizations and textbooks use "creationism" to mean literal YEC, so it's not dishonest for Pandas to repudiate the label in order to question the "dogmatic philosophy" of evolution "defined in scientific usage as a completely naturalistic system in which God played no discernible part." 1995 a John Buell FTE fund-raising letter: "Production of supplemental textbook for biology is already complete. The teachers are now using it in all 50 states. This book Of Pandas and People is favorably influencing the way origins is taught in thousands of public school classrooms." "Our commitment is to see the monopoly of naturalistic curriculum in the schools broken." Theistic realism, DI takes up ID and founds CRSC "By the mid-1990s Johnson was collaborating with other critics of naturalistic evolution in forming the intelligent-design (ID) movement." Ahmansons get involved with DI 1995, Johnson released another book, Reason in the Balance: The Case Against Naturalism in Science, Law and Education, opposing the methodological naturalism of science in which "The Creator belongs to the realm of religion, not scientific investigation", and promoting "theistic realism", which "assumes that the universe and all its creatures were brought into existence for a purpose by God", expecting "this 'fact' of creation to have empirical, observable consequences". 1995 Behe's Darwinism, Science or Philosophy? published by the FTE. May 1995 "'The whole point of Darwinism is to explain the world in a way that excludes any role for a Creator,' says Johnson. 'What is being sold in the name of science is a completely naturalistic understanding of reality.'" "If scientists are wrong about Darwinism, are they also wrong about the notion of intelligent design? Might not the notion of design be worthy of a second look? A new breed of young Evangelical scholars thinks the answer to both questions is yes. They are arguing persuasively that design is not only scientific, but is also the most reasonable explanation for the origin of living things. And they're gaining a hearing." [i.e. 
Meyer, Dembski: also Paul Nelson and Behe, describes IC] Summer 1995 a conference titled "The Death of Materialism and the Renewal of Culture" was the source of the CRSC. 1996, Behe released his book, Darwin's Black Box. August 10, 1996 the Center for the Renewal of Science and Culture was announced in a Discovery Institute press release, to examine and confront "materialistic bias in science", "the idea that God is either dead or irrelevant". CRSC "will award research fellowships to scholars, hold conferences, and disseminate research findings among opinionmakers and the general public." Director Stephen Meyer and co-director John G. West were to work with Phillip Johnson and Michael Behe; the 1996–97 full-time Discovery research fellows were to be William Dembski, Paul Nelson and Jonathan Wells. Founded "specifically to address the Darwinian controversy in public education" by Discovery Institute president Bruce Chapman, with help from Stephen C. Meyer. At some stage, Charles B. Thaxton and Walter L. Bradley become DI fellows at the CRSC. (In 2002 the name was changed to the Center for Science and Culture.) August 31, 1996 – in a review of The Battle of Beginnings: Why Neither Side Is Winning the Creation-Evolution Debate by Del Ratzsch, Johnson argues against naturalism in science and its acceptance by theistic evolution, notes Ratzsch's reference to "an 'upper tier' of creationists" who "advance concepts like 'intelligent design' and 'irreducible complexity' as legitimate descriptions of biological reality", and identifies his group as this "upper tier". He states "My colleagues and I speak of 'theistic realism' – or sometimes, 'mere creation' – as the defining concept of our movement. This means that we affirm that God is objectively real as Creator, and that the reality of God is tangibly recorded in evidence accessible to science, particularly in biology. We avoid the tangled arguments about how or whether to reconcile the Biblical account with the present state of scientific knowledge, because we think these issues can be much more constructively engaged when we have a scientific picture that is not distorted by naturalistic prejudice. If life is not simply matter evolving by natural selection, but is something that had to be designed by a creator who is real, then the nature of that creator, and the possibility of revelation, will become a matter of widespread interest among thoughtful people who are currently being taught that evolutionary science has shown God to be a product of the human imagination." 1996 the "Mere Creation" conference at Biola University in California, organized by CRSC to plan strategy – very important – was described as "a major research conference bringing together scientists and scholars who reject naturalism as an adequate framework for doing science and who seek a common vision of creation united under the rubric of intelligent design"; it involved no actual research, but produced strategy. June 24, 1996, Eugenie C. Scott wrote that "phrases like 'intelligent design theory', or 'abrupt appearance theory' are used instead of 'creation science', 'creationism', and related terms. I call this newest stage of antievolutionism 'Neocreationism'." 1997 Johnson's Defeating Darwinism by Opening Minds states "God is our true Creator. ... I speak of a God who acted openly and who left his fingerprints all over the evidence. Does such a God really exist, or is he a fantasy like Santa Claus? That is the subject of this book." 
and "If we understand our own times, we will know that we should affirm the reality of God by challenging the domination of materialism and naturalism in the world of the mind." c. 1998 William A. Dembski's The Design Inference and Mere Creation The wedge strategy c. 1998 DI / CRSC Wedge document leaked February 5, 1999. 1999 Johnson speech (does not use term ID) claimed that science when applied to questions of origins means "applied materialistic philosophy" explaining "the whole world and the cosmos ... without any reference to God as the Creator, without any supernatural acts, and on the basis of invariable natural laws that were the same from the beginning", so Darwinian "evolution contradicts not just the Book of Genesis, but every word in the Bible from beginning to end. I have built an intellectual movement in the universities and churches that we call The Wedge. ... the Darwinian theory isn't true. ... where might you get the truth? When I preach from the Bible, as I often do at churches and on Sundays, I don't start with Genesis. I start with John 1:1. In the beginning was the word. In the beginning was intelligence, purpose, and wisdom. The Bible had that right. And the materialist scientists are deluding themselves". 1999 Johnson's article The Wedge says his "own writing and speaking represents the sharp edge of the Wedge. I make the first penetration, seeking always only to legitimate a line of inquiry rather than to win a debate", with Behe, Dembski and "a lot more" following into the opening. Teach the controversy 1999 strategies: argue that individual teachers have a constitutional right to present creationist material, and that "evidence against evolution" should be taught in the science classroom as a way to improve teaching and learning. Attempts to teach IC and introduce Pandas. Resources for teachers ... abundantly available from both "creation science ministries" and conservative religious groups. 1999 David DeWolf, Stephen C. Meyer and Mark DeForrest coauthored a 40-page booklet, Intelligent Design in Public School Science Curricula: A Legal Guidebook, published by the FTE. It claims Edwards v. Aguillard mandated "teaching a variety of scientific theories about the origins of humankind" subject to a "clear secular intent of enhancing.. science instruction." 1999 Skagit County's Burlington-Edison School District finds that for almost 10 years the high-school science teacher Roger DeHart had been omitting state-approved biology textbook teaching on evolution, and using Pandas. Aug. 17, 1999, Philip Kitcher, professor of the philosophy of science at Columbia University, in online debate in Slate magazine with Johnson, coins neo-creo: "Enter the neo-creos, scavenging the scientific literature, they take claims out of context and pretend that everything about evolution is controversial. ... But it's all a big con." May 10, 2000, DI briefing of Congress, "Scientific Evidence of Intelligent Design and its Implications for Public Policy and Education," also addressed the social, moral, and political consequences of Darwinism. Creation-evolution debate had primarily been active at the state and local level, a new effort to involve Congress, took place as the Senate entered its second week of debate on overhauling federal K-12 education programs. Nancy Pearcey "For Darwinists, religion must give way to a new science-based cosmic myth with the power to bind humans together in a new world order. 
She then asked what this means for morality and argued that people were right to be concerned that all the above would undercut morality. July 2000 Dean Kenyon and David DeWolf of CRSC: Kenyon states "Scientific creationism ... is actually one of the intellectual antecedents of the Intelligent Design movement." June 2001 Rick Santorum introduces the Santorum Amendment to "Teach the Controversy", partially written by Johnson (and based on a law journal article written by DI activist David DeWolf); it was left out of the bill but kept in the conference report. December 2002 DI lobbying to get ID into Ohio science standards via Ohio House Bill 481. The bills all failed; ID was excluded by name in the approved standard, but it included the phrase "critically analyze aspects of evolutionary theory", used as an excuse for the new "teach the controversy" strategy. January 2004 Dembski's The Design Revolution: Answering the Toughest Questions About Intelligent Design, page 22: "Theism, whether Christian, Jewish, or Muslim, holds that God by wisdom created the world. The origin of the world and its subsequent ordering thus result from the designing activity of an intelligent agent, God. Naturalism, on the other hand, allows no place for intelligent agency, except at the end of a blind, purposeless, material process." 2004 FTE copyright draft for a new version of Pandas mentions the 10th anniversary; authors listed as Michael J. Behe, Percival Davis, William A. Dembski, Dean H. Kenyon, Jonathan Wells. Contents list, preface, notes to teachers, notes to students and epilogue, but no main content. March 9, 2004, the Ohio State Board of Education approved by majority vote the model lesson Critical Analysis of Evolution – Grade 10. Ohio Roundtable reported that a motion to remove the "Critical Analysis" lesson had been defeated: its sponsor had "claimed that the lesson was a 'religious effort, cloaked as science,' even though the lesson contains no religious statements whatsoever." The roundtable added that "It is very IMPORTANT to understand that the lesson contains only the scientific challenge to macroevolution. There is NO religious content and NO promotion of any alternative theories, including intelligent design." 2004 Paul Nelson interviewed by a magazine called Touchstone: A Journal of Mere Christianity – "Easily, the biggest challenge facing the ID community is to develop a full-fledged theory of biological design. We don't have such a theory right now, and that's a real problem. Without a theory, it's very hard to know where to direct your research focus. Right now, we've got a bag of powerful intuitions and a handful of notions such as irreducible complexity and specified complexity, but as yet, no general theory of biological design." 2004 the school board of Grantsburg, Wisconsin, voted to have ID taught as an alternative to evolution. By late summer 2005 letters urging reversal had been organized by a department of the University of Wisconsin–Madison and by clergy nationwide, the Clergy Letter Project, resulting in the board largely reversing its decision. April 8, 2004 the first of the Academic Freedom bills promoting intelligent design was passed unanimously by the Alabama Senate. On May 17, 2004, the Alabama House adjourned the 2004 legislative session without voting on the bill, allowing it to lapse. On February 8, 2005, a pair of virtually identical bills were simultaneously introduced in the Alabama Senate and House, again under the description of "The Academic Freedom Act." 
Kitzmiller lawsuit June 7, 2004, at Dover, Pennsylvania, the Dover Area School District School Board considered a new biology textbook. William Buckingham objected, wanting a textbook that gave a balanced view between creationism and evolution. He subsequently proposed Of Pandas and People; after acrimonious debate it was left off the list on August 2. October 4, 2004, Buckingham announced acceptance of 50 donated copies of Pandas. On October 18 the full School Board voted 6–3 to amend the district's curriculum to include intelligent design. Buckingham stated that a law firm had offered pro bono legal representation. December 12, 2004, Phillip Johnson stated in an interview "What the Dover board did is not what I'd recommend. ... Just teach evolution with a recognition that it's controversial ..." December 14, 2004, 11 parents, the ACLU, Americans United and Pepper Hamilton LLP filed the lawsuit Kitzmiller v. Dover Area School District; the lead plaintiff was Tammy Kitzmiller, the mother of a ninth grader in the biology class. On December 20, the District voted for the Thomas More Law Center to represent it pro bono. May 2005 Kansas school board hearings led by John Calvert, director of the Kansas office of the Intelligent Design Network, were boycotted by mainstream scientists as an "anti-science crusade." September 26, 2005 to November 4, 2005, the Kitzmiller trial was held before Judge John E. Jones III. November 2005 the Kansas school board voted 6–4 for new science standards criticising evolution and redefining science; board members were then turned out in elections. December 20, 2005, Kitzmiller decision; Judge Jones issued his findings of fact and decision as his 139-page Memorandum Opinion. After the Kitzmiller lawsuit February 2006 Ohio Governor Bob Taft requests legal review of the state's "teach the controversy" curriculum standards; Ohio State Board of Education members vote 11–4 to drop all of the "teach the controversy" standards. February 2007 the Kansas school board voted 6–4 for new standards supporting evolution. Spring 2006 Phillip Johnson states in an interview "I also don't think that there is really a theory of intelligent design at the present time to propose as a comparable alternative to the Darwinian theory, which is, whatever errors it might contain, a fully worked out scheme. There is no intelligent design theory that's comparable. Working out a positive theory is the job of the scientific people that we have affiliated with the movement. Some of them are quite convinced that it's doable, but that's for them to prove ... No product is ready for competition in the educational world." June 2007 Behe's book The Edge of Evolution: The Search for the Limits of Darwinism claims that the variation supplying the building blocks of evolution is not due to random mutation in DNA but is instead produced by an intelligent designer. It reiterates the argument for irreducible complexity, calculating the improbability of two or more beneficial mutations happening simultaneously, rather than one by one as evolutionary theory holds. 2007, a new biology textbook intended to replace Of Pandas and People, entitled Explore Evolution, is published by Hill House Publishers; it is authored by Stephen C. Meyer, Scott Minnich, Paul A. Nelson, Jonathan Moneymaker and Ralph Seelke. 2007 William A. Dembski and Jonathan Wells rewrote Of Pandas and People as a college textbook, The Design of Life. When asked in a December interview whether his research concluded that God is the Intelligent Designer, Dembski stated "I believe God created the world for a purpose. 
The Designer of intelligent design is, ultimately, the Christian God." April 2008, the pro-intelligent design movie Expelled: No Intelligence Allowed debuted. May 2008 a Wall Street Journal article describes the common goal of Academic Freedom bills as exposing more students to articles and videos that undercut evolution, most of which are produced by advocates of intelligent design or Biblical creationism. December 2008 an article in Scientific American detailed how "Creationists continue to agitate against the teaching of evolution in public schools, adapting their tactics to match the roadblocks they encounter. Past strategies have included portraying creationism as a credible alternative to evolution and disguising it under the name "intelligent design." Other tactics misrepresent evolution as scientifically controversial and pretend that advocates for teaching creationism are defending academic freedom. "Academic freedom" was the creationist catchphrase of choice in 2008 ... the Discovery Institute subsequently retreated to a strategy to undermine the teaching of evolution, introducing a flurry of labels and slogans—"teach the controversy," "critical analysis" and "academic freedom"—to promote its version of the fallback strategy ... despite the lofty language, the ulterior intent and likely effect of these bills are evident: undermining the teaching of evolution in public schools." See also Freiler v. Tangipahoa Parish Board of Education Selman v. Cobb County School District Notes References External links Miller, Kenneth R. (1999) Of Pandas and People: A Brief Review A Philosophical Premise of 'Naturalism'? by Mark Isaak 2002 Behe's empty box on Richard Dawkins' site Anselm Atkins on Behe: letter to a friend on Black Box. History Forum Addresses Creation/Evolution Controversy – development of creationism in early 20th century etc. ResearchID.org, a pro-intelligent design wiki Intelligent design
Timeline of intelligent design
[ "Engineering" ]
10,233
[ "Intelligent design", "Design" ]
11,821,770
https://en.wikipedia.org/wiki/Signature-tagged%20mutagenesis
Signature-tagged mutagenesis (STM) is a genetic technique used to study gene function. Recent advances in genome sequencing have allowed us to catalogue a large variety of organisms' genomes, but the function of the genes they contain is still largely unknown. Using STM, the function of the product of a particular gene can be inferred by disabling it and observing the effect on the organism. The original and most common use of STM is to discover which genes in a pathogen are involved in virulence in its host, to aid the development of new medical therapies/drugs. Basic premise The gene in question is inactivated by insertional mutation; a transposon is used which inserts itself into the gene sequence. When that gene is transcribed and translated into a protein, the insertion of the transposon affects the protein structure and (in theory) prevents it from functioning. In STM, mutants are created by random transposon insertion, and each transposon contains a different 'tag' sequence that uniquely identifies it. If an insertional mutant bacterium exhibits a phenotype of interest, such as susceptibility to an antibiotic it was previously resistant to, its genome can be sequenced and searched (using a computer) for any of the tags used in the experiment. When a tag is located, the gene that it disrupts is also thus located (it will reside somewhere between a start and stop codon, which mark the boundaries of the gene); a minimal sketch of this lookup step follows below. STM can be used to discover which genes are critical to a pathogen's virulence by injecting a 'pool' of different random mutants into an animal model (e.g. a mouse infection model) and observing which of the mutants survive and proliferate in the host. Those mutant pathogens that do not survive in the host must have had a gene required for virulence inactivated; hence, this is an example of a negative selection method. References Genetics Mutagenesis
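Computationally, the tag-lookup step is a simple substring search. The following Python sketch is illustrative only: it assumes the mutant's assembled genome is available as a plain string, that each tag sequence is unique, and that at most one tag is present per mutant; the tag names and sequences are hypothetical.

    # Minimal sketch of the STM tag-lookup step (illustrative assumptions:
    # genome as a plain string, unique tags, one insertion per mutant).
    def locate_tag(genome, tags):
        """Return (tag_name, position) for the first known tag found, else None."""
        for name, seq in tags.items():
            pos = genome.find(seq)
            if pos != -1:
                return name, pos
        return None

    # Hypothetical example data: short stand-ins for real tags and a genome.
    tags = {"tag07": "ACGTACGTTG", "tag12": "TTGACCATGA"}
    genome = "ATGAAACCC" + "TTGACCATGA" + "GGGTAA"  # tag12 interrupts this gene

    hit = locate_tag(genome, tags)
    if hit:
        name, pos = hit
        print(f"Mutant carries {name}; transposon inserted at position {pos}")

In practice the disrupted gene would then be identified by scanning outward from the insertion point for the flanking start and stop codons, as described above.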
Signature-tagged mutagenesis
[ "Biology" ]
408
[ "Genetics techniques", "Genetic engineering" ]
11,821,775
https://en.wikipedia.org/wiki/Poles%20of%20astronomical%20bodies
The poles of astronomical bodies are determined based on their axis of rotation in relation to the celestial poles of the celestial sphere. Astronomical bodies include stars, planets, dwarf planets and small Solar System bodies such as comets and minor planets (e.g., asteroids), as well as natural satellites and minor-planet moons. Poles of rotation The International Astronomical Union (IAU) defines the north pole of a planet or any of its satellites in the Solar System as the planetary pole that is in the same celestial hemisphere, relative to the invariable plane of the Solar System, as Earth's north pole. This definition is independent of the object's direction of rotation about its axis. This implies that an object's direction of rotation, when viewed from above its north pole, may be either clockwise or counterclockwise. The direction of rotation exhibited by most objects in the solar system (including the Sun and Earth) is counterclockwise. Venus rotates clockwise, and Uranus has been knocked on its side and rotates almost perpendicular to the rest of the Solar System. The ecliptic remains within 3° of the invariable plane over five million years, but is now inclined about 23.44° to Earth's celestial equator, which is used for the coordinates of poles. This large inclination means that the declination of a pole relative to Earth's celestial equator could be negative even though a planet's north pole (such as Uranus's) is north of the invariable plane. In 2009 the responsible IAU Working Group decided to define the poles of dwarf planets, minor planets, their satellites, and comets according to the right-hand rule. To avoid confusion with the "north" and "south" definitions relative to the invariable plane, the poles are called "positive" and "negative." The positive pole is the pole toward which the thumb points when the fingers of the right hand are curled in its direction of rotation. The negative pole is the pole toward which the thumb points when the fingers of the left hand are curled in its direction of rotation. This change was needed because the poles of some asteroids and comets precess rapidly enough for their north and south poles to swap within a few decades using the invariable plane definition. The projection of a planet's north pole onto the celestial sphere gives its north celestial pole. The location of the celestial poles of some selected Solar System objects is shown in the following table. The coordinates are given relative to Earth's celestial equator and the vernal equinox as they existed at J2000 (2000 January 1 12:00:00 TT), which is a plane fixed in inertial space now called the International Celestial Reference Frame (ICRF). Many poles precess or otherwise move relative to the ICRF, so their coordinates will change. The Moon's poles are particularly mobile. Some bodies in the Solar System, including Saturn's moon Hyperion and the asteroid 4179 Toutatis, lack a stable north pole. They rotate chaotically because of their irregular shape and gravitational influences from nearby planets and moons, and as a result the instantaneous pole wanders over their surface, and may momentarily vanish altogether (when the object comes to a standstill with respect to the distant stars). Magnetic poles Planetary magnetic poles are defined analogously to the Earth's North and South magnetic poles: they are the locations on the planet's surface at which the planet's magnetic field lines are vertical. 
The direction of the field determines whether the pole is a magnetic north or south pole, exactly as on Earth. The Earth's magnetic axis is approximately aligned with its rotational axis, meaning that the geomagnetic poles are relatively close to the geographic poles. However, this is not necessarily the case for other planets; the magnetic axis of Uranus, for example, is inclined by as much as 60°. Orbital pole In addition to the rotational pole, a planet's orbit also has a defined direction in space. The direction of the angular momentum vector of that orbit can be defined as an orbital pole. Earth's orbital pole, i.e. the ecliptic pole, points in the direction of the constellation Draco. Near, far, leading and trailing poles In the particular (but frequent) case of synchronous satellites, four more poles can be defined. They are the near, far, leading, and trailing poles. For example, Io, one of the moons of Jupiter, rotates synchronously, so its orientation with respect to Jupiter stays constant. There will be a single, unmoving point on its surface where Jupiter is at the zenith, exactly overhead – this is the near pole, also called the sub- or pro-Jovian point. At the antipode of this point is the far pole, where Jupiter lies at the nadir; it is also called the anti-Jovian point. There will also be a single unmoving point which is farthest along Io's orbit (best defined as the point most removed from the plane formed by the north-south and near-far axes, on the leading side) – this is the leading pole. At its antipode lies the trailing pole. Io can thus be divided into north and south hemispheres, into pro- and anti-Jovian hemispheres, and into leading and trailing hemispheres. These poles are mean poles because the points are not, strictly speaking, unmoving: there is continuous libration about the mean orientation, because Io's orbit is slightly eccentric and the gravity of the other moons disturbs it regularly. See also Galactic coordinate system Planetary coordinate system References Astronomical coordinate systems Astronomical objects
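As a small illustration of the orbital-pole definition above, the Python sketch below computes the unit vector along the orbital angular momentum r × v, and also converts a pole's right ascension and declination into the same kind of unit vector for comparison. The state vectors are invented for the example, not real ephemerides.

    # Illustrative sketch: orbital pole as the direction of r x v, plus a
    # helper turning a pole's (RA, Dec) into a unit vector. The vectors
    # below are made up for the example, not real ephemerides.
    import numpy as np

    def orbital_pole(r, v):
        """Unit vector along the orbital angular momentum r x v."""
        L = np.cross(r, v)
        return L / np.linalg.norm(L)

    def pole_unit_vector(ra_deg, dec_deg):
        """Equatorial unit vector for a pole at the given RA/Dec."""
        ra, dec = np.radians(ra_deg), np.radians(dec_deg)
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])

    # A counterclockwise circular orbit in the x-y plane:
    r = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 30.0, 0.0])
    print(orbital_pole(r, v))           # [0. 0. 1.]: pole along +z

    # Earth's north celestial pole (Dec = +90 deg) for comparison:
    print(pole_unit_vector(0.0, 90.0))  # [0. 0. 1.]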
Poles of astronomical bodies
[ "Physics", "Astronomy", "Mathematics" ]
1,166
[ "Outer space", "Astronomical coordinate systems", "Physical objects", "Coordinate systems", "Astronomical objects", "Matter" ]
11,822,678
https://en.wikipedia.org/wiki/Multi-user%20MIMO
Multi-user MIMO (MU-MIMO) is a set of multiple-input and multiple-output (MIMO) technologies for multipath wireless communication, in which multiple users or terminals, each radioing over one or more antennas, communicate with one another. In contrast, single-user MIMO (SU-MIMO) involves a single multi-antenna-equipped user or terminal communicating with precisely one other similarly equipped node. Analogous to how OFDMA adds multiple-access capability to OFDM in the cellular-communications realm, MU-MIMO adds multiple-user capability to MIMO in the wireless realm. SDMA, massive MIMO, coordinated multipoint (CoMP), and ad hoc MIMO are all related to MU-MIMO; each of those technologies often leverages spatial degrees of freedom to separate users. Technology MU-MIMO leverages multiple users as spatially distributed transmission resources, at the cost of somewhat more expensive signal processing. In comparison, conventional single-user MIMO (SU-MIMO) involves solely local-device multiple-antenna dimensions. MU-MIMO algorithms enhance MIMO systems when the number of users or connections is greater than one. MU-MIMO may be generalized into two categories: MIMO broadcast channels (MIMO BC) and MIMO multiple-access channels (MIMO MAC) for downlink and uplink situations, respectively. Again in comparison, SU-MIMO may be represented as a point-to-point, pairwise MIMO. To remove ambiguity of the words receiver and transmitter, we can adopt the terms access point (AP) or base station, and user. An AP is the transmitter and a user the receiver for downlink connections, and vice versa for uplink connections. Homogeneous networks are freed from this distinction since they tend to be bi-directional. MIMO broadcast (MIMO BC) MIMO BC represents a MIMO downlink case where a single sender transmits to multiple receivers within the wireless network. Examples of advanced transmit processing for MIMO BC are interference-aware precoding and SDMA-based downlink user scheduling. For advanced transmit processing, the channel state information has to be known at the transmitter (CSIT). That is, knowledge of CSIT allows throughput improvement, and methods to obtain CSIT become of significant importance. MIMO BC systems have an outstanding advantage over point-to-point SU-MIMO systems, especially when the number of antennas at the transmitter, or AP, is larger than the number of antennas at each receiver (user). The categories of precoding techniques that may be used for MIMO BC include, first, those using dirty paper coding (DPC) and linear techniques and, second, hybrid (analog and digital) techniques. Precoding may also be achieved by means of a so-called steering matrix, which can be applied in multiple configurations. MIMO MAC Conversely, the MIMO multiple-access channel or MIMO MAC represents a MIMO uplink case in a multiple-sender, single-receiver wireless network. Examples of advanced receive processing for MIMO MAC are joint interference cancellation and SDMA-based uplink user scheduling. For advanced receive processing, the receiver has to know the channel state information at the receiver (CSIR). Knowing CSIR is generally easier than knowing CSIT; however, obtaining CSIR consumes substantial uplink resources, since dedicated pilots must be transmitted from each user to the AP. MIMO MAC systems outperform point-to-point MIMO systems especially when the number of receiver antennas at an AP is larger than the number of transmit antennas at each user. 
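To make the linear precoding mentioned above concrete, the following NumPy sketch applies zero-forcing precoding, one standard linear technique for the MIMO BC: given CSIT, the AP pre-inverts the channel so that each single-antenna user ideally sees only its own symbol. The channel here is randomly generated and the power normalization is deliberately crude; a real system would estimate the channel and enforce its own power constraints.

    # Zero-forcing (ZF) precoding sketch for the MIMO BC (illustrative only:
    # random channel, crude total-power normalization, no noise).
    import numpy as np

    rng = np.random.default_rng(0)
    n_tx, n_users = 4, 3   # AP antennas >= number of users, as noted above
    H = rng.normal(size=(n_users, n_tx)) + 1j * rng.normal(size=(n_users, n_tx))

    W = np.linalg.pinv(H)          # ZF precoder: H @ W is the identity
    scale = np.linalg.norm(W)
    W = W / scale                  # crude normalization of transmit power

    s = np.array([1 + 1j, -1 + 1j, 1 - 1j])  # one symbol per user
    x = W @ s                      # signal radiated by the AP's antennas
    y = H @ x                      # what each user receives (noiseless)

    # Up to the common power-scaling factor, y equals s: the precoder has
    # nulled the inter-user interference.
    print(np.round(y * scale, 6))

The same pre-inversion idea underlies SDMA-style user separation; DPC achieves higher throughput but at much greater complexity, which is why linear schemes like ZF are the common baseline.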
Cross-layer MIMO Cross-layer MIMO enhances the performance of MIMO links by solving certain cross-layer problems that may occur when MIMO configurations are employed in a system. Cross-layer techniques can be used to enhance the performance of SISO links as well. Examples of cross-layer techniques are joint source-channel coding, adaptive modulation and coding (AMC, or "link adaptation"), hybrid ARQ (HARQ), and user scheduling. Multi-user to multi-user The highly interconnected wireless ad hoc network increases the flexibility of wireless networking at the cost of increased multi-user interference. To improve interference immunity, PHY/MAC-layer protocols have evolved from competition-based to cooperation-based transmission and reception. Cooperative wireless communications can actually exploit interference, including both self-interference and other-user interference. In cooperative wireless communications, each node might use self-interference and other-user interference to improve the performance of data encoding and decoding, whereas conventional nodes are generally directed to avoid interference. For example, once strong interference is decodable, a node decodes and cancels the strong interference before decoding its own signal (a sketch of this successive cancellation appears below). The mitigation of low carrier-over-interference (CoI) ratios can be implemented across the PHY/MAC/application network layers in cooperative systems. Cooperative multiple antenna research – Apply multiple-antenna technologies in situations where antennas are distributed among neighboring wireless terminals. Cooperative diversity – Achieve antenna diversity gain through the cooperation of distributed antennas belonging to independent nodes. Cooperative MIMO – Achieve MIMO advantages, including the spatial multiplexing gain, using the transmit or receive cooperation of distributed antennas belonging to many different nodes. Cooperative relay – Apply cooperative concepts to relay techniques, which is similar to cooperative diversity in terms of cooperative signalling. However, the main criterion of cooperative relay is to improve the tradeoff region between delay and performance, while that of cooperative diversity and MIMO is to improve the link and system performance at the expense of minimal cooperation loss. Relaying techniques for cooperation Store-and-forward (S&F), amplify-and-forward (A&F), decode-and-forward (D&F), coded cooperation, spatial coded cooperation, compress-and-forward (C&F), non-orthogonal methods Cooperative MIMO (CO-MIMO) CO-MIMO, also known as network MIMO (net-MIMO) or ad hoc MIMO, uses distributed antennas belonging to other users, while conventional MIMO, i.e., single-user MIMO, only employs antennas belonging to the local terminal. CO-MIMO improves the performance of a wireless network by introducing multiple-antenna advantages, such as diversity, multiplexing, and beamforming. If the main interest is the diversity gain, it is known as cooperative diversity. It can be described as a form of macro-diversity, used for example in soft handover. Cooperative MIMO corresponds to transmitter macro-diversity or simulcasting. A simple form that does not require any advanced signal processing is single frequency networks (SFN), used especially in wireless broadcasting. SFNs combined with channel-adaptive or traffic-adaptive scheduling are called dynamic single frequency networks (DSFN). CO-MIMO is a technique useful for future cellular networks which consider wireless mesh networking or wireless ad hoc networking. 
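As a concrete illustration of the decode-and-cancel behaviour described above, here is a minimal, self-contained sketch of successive interference cancellation with synthetic BPSK signals; the power levels and noise variance are illustrative assumptions, not parameters of any particular standard.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
# Strong interferer and weak desired signal, both BPSK, plus Gaussian noise.
# Power levels are illustrative assumptions, not taken from any standard.
interferer = np.sqrt(10.0) * rng.choice([-1.0, 1.0], size=n)
desired    = 1.0 * rng.choice([-1.0, 1.0], size=n)
noise      = 0.3 * rng.standard_normal(n)
y = interferer + desired + noise  # superimposed reception

# Successive interference cancellation: decode the strong signal first,
# reconstruct and subtract it, then decode the weak signal of interest.
interferer_hat = np.sqrt(10.0) * np.sign(y)
residual = y - interferer_hat
desired_hat = np.sign(residual)

ber = np.mean(desired_hat != np.sign(desired))
print(f"bit error rate of the weak signal after cancellation: {ber:.4f}")
```

The key point is the ordering: the strongest decodable signal is removed first, so the weak signal is recovered from a residual in which the dominant interference is gone rather than avoided.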
In wireless ad hoc networks, multiple transmit nodes communicate with multiple receive nodes. To optimize the capacity of ad hoc channels, MIMO concepts and techniques can be applied to the multiple links between the transmit and receive node clusters (a standard capacity computation is sketched below). In contrast to the multiple antennas of a single-user MIMO transceiver, the participating nodes and their antennas are located in a distributed manner, so techniques for managing distributed radio resources are essential to achieving the capacity of such a network. Strategies such as autonomous interference cognition, node cooperation, and network coding with dirty paper coding have been suggested to optimize wireless network capacity. See also Distributed antenna system Mesh network Mobile ad hoc network Phased array Space-division multiple access Space–time coding/processing References External links MU-MIMO Beamforming by Constructive Interference, Wolfram Demonstrations Project Peel, C. B., Spencer, Q. H., Swindlehurst, A. L., & Haardt, M. (2004). An introduction to the multi-user MIMO downlink. IEEE Communications Magazine, 61. Information theory Radio resource management
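Capacity statements like those above are typically grounded in the standard MIMO mutual-information formula; the following is a small sketch using synthetic Rayleigh-fading channels and equal power allocation, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def mimo_capacity(H, snr):
    """Capacity in bit/s/Hz of a MIMO link with equal power per transmit
    antenna: C = log2 det(I + (snr/Nt) * H * H^H)."""
    nr, nt = H.shape
    m = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
    return np.real(np.log2(np.linalg.det(m)))

snr = 10 ** (10 / 10)  # 10 dB, linear scale
caps = []
for _ in range(1000):  # average over random 4x4 Rayleigh channels
    H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    caps.append(mimo_capacity(H, snr))
print(f"ergodic capacity at 10 dB: {np.mean(caps):.1f} bit/s/Hz")
```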
Multi-user MIMO
[ "Mathematics", "Technology", "Engineering" ]
1,645
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
11,823,032
https://en.wikipedia.org/wiki/SmartFrog
SmartFrog (Smart Framework for Object Groups) is a Java-based open-source framework for hosting large-scale applications across component-based distributed systems. It aims to make the design, configuration, deployment, and management of distributed systems easier, more correct, and more automatic. SmartFrog mainly consists of three aspects: the SmartFrog Language, a runtime system, and a library of SmartFrog components that implement the SmartFrog component model. History SmartFrog was originally developed in Hewlett-Packard's European Research Labs. It has been used in HP research on infrastructure automation and service automation as well as in a variety of HP products. SmartFrog became open to the public in January 2004 under the GNU Lesser General Public License (LGPL), hosted on SourceForge. As a result, users and developers outside the lab can also contribute by using or extending the framework, or by reporting bugs. In 2017, following Hewlett-Packard's corporate split into HP Inc. and Hewlett Packard Enterprise, and while under Hewlett Packard Enterprise's ownership, SmartFrog was relicensed under the Apache License 2.0. Technologies SmartFrog Language The SmartFrog Language is a configuration description language used for describing component collections and system configurations, such as which software components belong to the system, what the configuration parameters are, how the components are bound to other components in the system, and in what sequence the components work. Component model In SmartFrog, the component is the most important and basic part. A system is considered to be a collection of applications, each of which is composed of a collection of components. Each component is implemented in Java and described in a SmartFrog file, which declares the component's existence and default attributes. Runtime system The runtime system interprets descriptions written in the SmartFrog Language and manages the components based on the interpretation results. It also provides users with tools to interact with components. Features As a framework Rather than a package or library, SmartFrog is a framework, a building block for constructing software systems. SmartFrog can be extended by adding new components into the framework, which gives it much wider applicability and lets it acquire new functionality. Template mechanism In SmartFrog, every component is defined as a template. Typically, for every new service, new components are created and activated; however, some general-purpose components can be reused across different services. Thanks to the template mechanism, system configuration is easy to adapt to different requirements while the default configuration is kept. Prototyping also allows keeping the full transformation history of the system's configurations. Using SmartFrog to build a large-scale distributed system, one can reuse components and need not completely rewrite the whole application. Users can easily write or create simple SmartFrog components to install, uninstall, configure, start, and stop the system using the configuration description notation (a prototype-style sketch appears below). Cross-client model There are many software systems similar to SmartFrog, but few of them use the same model. The largest category of systems is based on the client-server model, where the configuration data for all clients are held in a server and each client is configured to match the configuration data stored in the server. 
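SmartFrog's own notation is a dedicated configuration language not reproduced here; as a rough, hypothetical illustration of the prototype/template idea just described, the following Python sketch derives service configurations from a shared template by overriding only the attributes that differ.

```python
# Illustrative only: this mimics the prototype/template idea in Python;
# it is not SmartFrog's actual notation or API.
import copy

def extend(prototype: dict, **overrides) -> dict:
    """Derive a new configuration from a template, overriding attributes."""
    derived = copy.deepcopy(prototype)
    derived.update(overrides)
    return derived

# A general-purpose template carrying default attributes...
web_server_template = {"port": 80, "document_root": "/var/www", "threads": 4}

# ...reused by different services, each overriding only what differs.
staging = extend(web_server_template, port=8080)
production = extend(web_server_template, threads=32)

print(staging)     # {'port': 8080, 'document_root': '/var/www', 'threads': 4}
print(production)  # {'port': 80, 'document_root': '/var/www', 'threads': 32}
```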
In SmartFrog, by contrast to such client-server systems, a cross-client model is used, in which each client is treated as an independent entity. This gives SmartFrog the ability to coordinate across a large range of nodes, carry out autonomic actions, and achieve higher scalability. Security SmartFrog has two running modes: secure and insecure. In insecure mode, there is no restriction on client connections to SmartFrog, and the plain-text communication can be eavesdropped on and intercepted; in this mode, the system is vulnerable to malign attacks, so SmartFrog needs protection against malign deployment and other management actions. In secure mode, SmartFrog uses a public key infrastructure (PKI) system. Only clients that are certified by a specified certificate authority (CA) can connect to the SmartFrog daemon. In addition, SmartFrog signs all components and descriptions with a certificate, and only signed ones can be deployed. Communications are encrypted using Transport Layer Security (TLS) protocols (an illustrative mutual-TLS sketch appears below). Related projects The GridWeaver Project The GridWeaver project started in 2002 and lasted a year. The project collaborators were the School of Informatics of Edinburgh University, HP Laboratories, and the Edinburgh Parallel Computing Centre (EPCC). The project aimed to find solutions to the problems of automating the configuration and management of the next generation of Grid computing fabrics. It compared SmartFrog and the Local ConFiGuration system (LCFG) in terms of strengths and weaknesses, as well as investigating how these tools can be used to solve such problems. SFJS SFJS is a configuration language, runtime, and component library developed by Configured Things, a company co-founded by Patrick Goldsack, one of the SmartFrog project's lead authors. SFJS is a spiritual successor to SmartFrog, built on Node.js. See also Comparison of open source configuration management software LCFG - an established configuration framework for managing large numbers of systems References External links Official Website Project pages on SourceForge Referenced by a Google Usenix Paper. Configuration management
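For illustration only, secure mode's combination of a designated CA and TLS-encrypted management traffic resembles a generic mutual-TLS setup; the following Python sketch uses the standard-library ssl module with hypothetical file names and host, and is not SmartFrog's actual interface.

```python
# Hypothetical illustration: mutual TLS with a designated CA, in the spirit
# of SmartFrog's secure mode. File names and host are made up; this is
# generic Python stdlib usage, not SmartFrog's actual API.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("ca.pem")                        # trust only the specified CA
ctx.load_cert_chain("client-cert.pem", "client-key.pem")   # present a CA-signed client cert

with socket.create_connection(("daemon.example", 4443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="daemon.example") as tls:
        tls.sendall(b"deploy description ...")             # management traffic is encrypted
```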
SmartFrog
[ "Engineering" ]
1,086
[ "Systems engineering", "Configuration management" ]
11,823,468
https://en.wikipedia.org/wiki/Paracoccidioides%20brasiliensis
Paracoccidioides brasiliensis is a dimorphic fungus and one of the two species that cause paracoccidioidomycosis (the other being Paracoccidioides lutzii). The fungus has been affiliated with the family Ajellomycetaceae (division Ascomycota), although a sexual state or teleomorph has not yet been found. History Paracoccidioides brasiliensis was first discovered by Adolfo Lutz in 1908 in Brazil. Although Lutz did not suggest a name for the disease caused by this fungus, he made note of structures he called "pseudococcidica" together with mycelium in cultures grown at 25 °C. In 1912, Alfonso Splendore proposed the name Zymonema brasiliense and described the features of the fungus in culture. Finally, in 1930, Floriano de Almeida created the genus Paracoccidioides to accommodate the species, noting its distinction from Coccidioides immitis. Physiology Paracoccidioides brasiliensis is a nonphotosynthetic eukaryote with a rigid cell wall and organelles very similar to those of higher eukaryotes. Being a dimorphic fungus, it can grow as an oval yeast-like form at 37 °C and as an elongated mycelial form at room temperature. The mycelial and yeast phases differ in their morphology, biochemistry, and ultrastructure. The yeast form contains large amounts of α-(1,3)-linked glucan. The chitin content of the mycelial form is greater than that of the yeast form, but the lipid content of both phases is comparable. The yeast reproduces by asexual budding, in which daughter cells are borne asynchronously at multiple, random positions across the cell surface. Buds begin as layers of cell wall that increase in optical density at the point that eventually gives rise to the daughter cell. Once the bud has expanded, a cleavage plane develops between the nascent cell and the mother cell. Following dehiscence, the bud scar disappears. In tissue, budding occurs inside the granulomatous center of the disease lesion, as visualized by hematoxylin and eosin (H&E) staining of histologic sections. Nonbudding cells measure 5–15 μm in diameter, whereas those with multiple spherical buds measure 10–20 μm in diameter. In electron microscopy, cells with multiple buds have been found to have peripherally located nuclei and cytoplasm surrounding a large central vacuole. In the tissue form of P. brasiliensis, yeast cells are larger, with thinner walls and a narrower bud base, than those of the related dimorphic fungus Blastomyces dermatitidis. The yeast-like form of P. brasiliensis contains multiple nuclei, a porous two-layered nuclear membrane, and a thick cell wall rich in fibers, whereas the mycelial phase has thinner cell walls with a thin, electron-dense outer layer. Dimorphism The mycelial form of P. brasiliensis can be converted to the yeast form in vitro by growth on brain heart infusion agar or blood-glucose-cysteine agar when incubated for 10–20 days at 37 °C. Under these conditions, hyphal cells either die or convert to transitional forms measuring 6–30 μm in diameter, which ultimately detach from or remain on the hyphal cells, yielding buds. New buds develop mesosomes and become multinucleated. In contrast, yeast-like cultures can be converted to the mycelial form by reducing the incubation temperature from 37 to 25 °C. Initially, the nutritional requirements of the yeast and mycelial phases of P. brasiliensis were thought to be identical; however, later studies demonstrated the yeast form to be auxotrophic, requiring exogenous sulfur-containing amino acids, including cysteine and methionine, for growth. 
Ecology Although the habitat of P. brasiliensis remains unknown, it is commonly associated with soils in which coffee is cultivated. It has also been associated with the nine-banded armadillo, Dasypus novemcinctus. The disease caused by P. brasiliensis is mostly geographically restricted to Latin American countries such as Brazil, Colombia, and Venezuela, with the greatest number of cases seen in Brazil. The endemic areas are characterized by hot, humid summers, dry temperate winters, average annual temperatures between 17 and 23 °C, and annual rainfall between 500 and 800 mm. However, the precise ecology of the fungus remains elusive, and P. brasiliensis has rarely been encountered in nature outside the human host. One such rare example of environmental isolation was reported in 1971 by Maria B. de Albornoz and colleagues, who isolated P. brasiliensis from samples of rural soil collected in Paracotos in the state of Miranda, Venezuela. In in vitro studies, the fungus has been shown to grow when inoculated into soil and sterile horse or cow excrement. The mycelial phase has also been shown to survive longer than the yeast phase in acidic soil. Despite a sexual state not having been documented, molecular investigations suggest the existence of recombining populations of P. brasiliensis, potentially by means of an undiscovered sexual state. The existence of a sexual cycle in P. brasiliensis is supported by both molecular and morphological data. A comparative genome analysis with other well-studied fungi demonstrated the presence of sex-related genes in both the yeast and mycelial phases of P. brasiliensis. Also, crosses of isolates of different mating types have led to the formation of young ascocarps (sexual structures) with constricted, coiled hyphae related to the initial stage of mating. Epidemiology Paracoccidioides brasiliensis causes a disease known as paracoccidioidomycosis, characterized by slow, progressive granulomatous changes in the mucosa of the head, notably the nose and sinuses, or the skin. Uncommonly, the disease affects the lymphatic system, the central nervous system, the gastrointestinal tract, or the skeletal system. Due to the high proportion of cases affecting the oral mucosa, these tissues were originally thought to be the primary route of entry of the fungus. However, strong evidence now indicates that the respiratory tract is the chief point of entry, and P. brasiliensis lung lesions occur in nearly a third of progressive cases. The disease is not contagious. Paracoccidioidomycosis is more frequently seen in adult males than in females. The hormone estrogen is thought to inhibit the transformation from the mycelial to the yeast form, as supported by in vitro experimental data, and this factor may account for the relative resistance of women to infection. Detection and surveillance A number of serologic tests have been employed for the diagnosis of paracoccidioidomycosis. Double diffusion in agar gel and the complement fixation test are amongst the most commonly used tests in serodiagnosis. Culture extracts of the yeast or mycelial form are used to produce effective, quick, and reproducible antigens. A study reported detection of a 43 kDa antigen in pooled sera of affected individuals, which might provide a basis for the development of a diagnostic test. Tests targeting the presence of serum antibodies to P. brasiliensis detect both active and historical infections simultaneously and therefore cannot discriminate active infection. 
The evaluation of populations in endemic zones has shown roughly equal rates of seroconversion between men and women, suggesting equal rates of exposure despite the strong male predominance of the clinical disease. Clinical manifestations Paracoccidioides brasiliensis causes mucous membrane ulceration of the mouth and nose, with spread through the lymphatic system. A hypothesized portal of entry for the fungus into the body is the periodontal membrane. The route of infection is assumed to be inhalation, following which the infective propagule gives rise in the lung to the distinctive multipolar budding yeast forms, resembling a "ship's wheel" in histological sections. Both immunologically normal and compromised people are at risk of infection. The lungs, lymph nodes, and mucous membrane of the mouth are the most frequently infected tissues. The pathological features of paracoccidioidomycosis are similar to those seen in coccidioidomycosis and blastomycosis. However, in the former, the lesions first appear in the lymphoid tissue and then extend to mucous membranes, producing localized to diffuse tissue necrosis of the lymph nodes. The typically extensive involvement of lymphoid tissue and the limited involvement of the gastrointestinal tract, bone, and prostate set the clinical picture of paracoccidioidomycosis apart from that of blastomycosis. References External links Fungi described in 1912 Onygenales Fungal pathogens of humans Fungus species
Paracoccidioides brasiliensis
[ "Biology" ]
1,860
[ "Fungi", "Fungus species" ]
11,824,035
https://en.wikipedia.org/wiki/Signal-to-interference%20ratio
The signal-to-interference ratio (SIR or S/I), also known as the carrier-to-interference ratio (CIR or C/I), is the quotient between the average received modulated carrier power S or C and the average received co-channel interference power I, i.e. crosstalk from transmitters other than that of the useful signal. The CIR resembles the carrier-to-noise ratio (CNR or C/N), which is the signal-to-noise ratio (SNR or S/N) of a modulated signal before demodulation. A distinction is that the interfering radio transmitters contributing to I may be controlled by radio resource management, while N involves noise power from other sources, typically additive white Gaussian noise (AWGN). Carrier-to-noise-and-interference ratio (CNIR) The C/I ratio is studied in interference-limited systems, i.e. where I dominates over N, typically in cellular radio systems and broadcasting systems where frequency channels are reused to achieve a high level of area coverage. The C/N is studied in noise-limited systems. If both situations can occur, the carrier-to-noise-and-interference ratio (CNIR or C/(N+I)) may be studied. See also Carrier-to-noise ratio (CNR or C/N) Carrier-to-receiver noise density (C/N0) Co-channel interference (CCI) Crosstalk Signal-to-noise ratio (SNR or S/N) SINAD (ratio of signal-plus-noise-plus-distortion to noise-plus-distortion) References Engineering ratios Radio frequency propagation Radio resource management Interference Television terminology
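As a small numeric illustration of these ratios, the following sketch converts arbitrary, purely illustrative power values to decibels; it shows how C/(N+I) is bounded above by both C/I and C/N.

```python
import math

def db(power_ratio):
    """Convert a linear power ratio to decibels."""
    return 10.0 * math.log10(power_ratio)

# Illustrative average received powers on a linear scale (e.g., milliwatts).
C = 1.0    # useful carrier power
I = 0.05   # co-channel interference power
N = 0.01   # noise power (e.g., AWGN)

print(f"C/I     = {db(C / I):5.1f} dB")        # interference-limited figure
print(f"C/N     = {db(C / N):5.1f} dB")        # noise-limited figure
print(f"C/(N+I) = {db(C / (N + I)):5.1f} dB")  # combined CNIR
```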
Signal-to-interference ratio
[ "Physics", "Mathematics", "Engineering" ]
352
[ "Physical phenomena", "Spectrum (physical sciences)", "Metrics", "Radio frequency propagation", "Engineering ratios", "Quantity", "Electromagnetic spectrum", "Waves" ]
11,825,490
https://en.wikipedia.org/wiki/Gliese%20440
Gliese 440, also known as LP 145-141 or LAWD 37, is an isolated white dwarf in the constellation Musca. It is the fourth-closest known white dwarf to the Sun, after Sirius B, Procyon B, and van Maanen's star. History of observations Gliese 440 has been known since at least 1917, when its proper motion was published by R. T. A. Innes and H. E. Wood in Volume 37 of the Circular of the Union Observatory; the corresponding designation is UO 37. (This designation is not unique to this star: every other star listed in the table in Volume 37 of the Circular could be referred to by the same name.) Space motion Gliese 440 may be a member of the Wolf 219 moving group, which has seven possible members. These stars share a similar motion through space, which may indicate a common origin. The group has an estimated space velocity of 160 km/s and is following a highly eccentric orbit through the Milky Way galaxy. Properties White dwarfs no longer generate energy at their cores through nuclear fusion; instead, they steadily radiate away their remaining heat. Gliese 440 has a DQ spectral classification, indicating that it is a rare type of white dwarf displaying evidence of atomic or molecular carbon in its spectrum. In 2019, Gliese 440 was observed passing in front of a more distant star. The bending of the background starlight by the gravitational field of Gliese 440, observed with the Hubble Space Telescope, allowed its mass to be measured directly (see the relation sketched below). The estimated mass of Gliese 440 is 0.56±0.08 M☉, which fits the expected range for a white dwarf with a carbon-oxygen core. This measurement marked the first direct gravitational mass determination of a single white dwarf. Gliese 440 has only 56% of the Sun's mass, but it is the remnant of a massive main-sequence star with an estimated 4.4 solar masses. While on the main sequence, it probably was a spectral class B star (in the range B4–B9). Most of the star's original mass was shed after it passed through the asymptotic giant branch stage, just prior to becoming a white dwarf. Search for companions A survey with the Hubble Space Telescope revealed no visible orbiting companions, at least down to the limit of detection. Its proximity, mass, and temperature have led to it being considered a good candidate in searches for Jupiter-like planets. Its relatively large mass and high temperature mean that the system is relatively short-lived and hence of more recent origin. The Hipparcos–Gaia proper-motion data show an anomaly that hints at the presence of an exoplanet with a mass of either 0.44 or 0.60 Jupiter masses, between that of Saturn and Jupiter. See also List of nearest stars and brown dwarfs List of exoplanets and planetary debris around white dwarfs References External links CC 658 Local Bubble Musca White dwarfs 057367 0440
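The mass measurement relies on the standard relation for gravitational deflection by a point lens; as a brief sketch (with M the lens mass, D_L and D_S the distances to lens and source, D_LS their separation, and Δθ the lens–source angular separation):

$$\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{LS}}{D_L D_S}}, \qquad \delta\theta \approx \frac{\theta_E^{2}}{\Delta\theta} \quad (\Delta\theta \gg \theta_E),$$

so a measured astrometric deflection δθ of the background star, combined with the known distances and separation, yields the lens mass M directly.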
Gliese 440
[ "Astronomy" ]
623
[ "Musca", "Constellations" ]