Johannes Diderik van der Waals
Johannes Diderik van der Waals (23 November 1837 – 8 March 1923) was a Dutch theoretical physicist and thermodynamicist famous for his pioneering work on the equation of state for gases and liquids. Van der Waals started his career as a schoolteacher. He became the first physics professor of the University of Amsterdam when in 1877 the old Athenaeum was upgraded to Municipal University. Van der Waals won the 1910 Nobel Prize in Physics for his work on the equation of state for gases and liquids.
His name is primarily associated with the Van der Waals equation of state that describes the behavior of gases and their condensation to the liquid phase. His name is also associated with Van der Waals forces (forces between stable molecules), with Van der Waals molecules (small molecular clusters bound by Van der Waals forces), and with Van der Waals radii (sizes of molecules). James Clerk Maxwell once said that "there can be no doubt that the name of Van der Waals will soon be among the foremost in molecular science."
In his 1873 thesis, Van der Waals noted the non-ideality of real gases and attributed it to the existence of intermolecular interactions. He introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. Spearheaded by Ernst Mach and Wilhelm Ostwald, a strong philosophical current that denied the existence of molecules arose towards the end of the 19th century. The molecular existence was considered unproven and the molecular hypothesis unnecessary. At the time Van der Waals's thesis was written (1873), the molecular structure of fluids had not been accepted by most physicists, and liquid and vapor were often considered as chemically distinct. But Van der Waals's work affirmed the reality of molecules and allowed an assessment of their size and attractive strength. His new formula revolutionized the study of equations of state. By comparing his equation of state with experimental data, Van der Waals was able to obtain estimates for the actual size of molecules and the strength of their mutual attraction.
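In modern notation (a standard form, added here for illustration), the equation of state he proposed can be written for one mole of gas as

$$\left(p + \frac{a}{V_m^2}\right)\left(V_m - b\right) = RT,$$

where $b$ accounts for the finite volume occupied by the molecules, $a$ for the strength of their mutual attraction, and setting $a = b = 0$ recovers the ideal gas law.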
The effect of Van der Waals's work on molecular physics in the 20th century was direct and fundamental. By introducing parameters characterizing molecular size and attraction in constructing his equation of state, Van der Waals set the tone for modern molecular science. That molecular aspects such as size, shape, attraction, and multipolar interactions should form the basis for mathematical formulations of the thermodynamic and transport properties of fluids is presently considered an axiom. With the help of the Van der Waals equation of state, the critical-point parameters of gases could be accurately predicted from thermodynamic measurements made at much higher temperatures. Nitrogen, oxygen, hydrogen, and helium subsequently succumbed to liquefaction. Heike Kamerlingh Onnes was significantly influenced by the pioneering work of Van der Waals. In 1908, Onnes became the first to make liquid helium; this led directly to his 1911 discovery of superconductivity.
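The prediction of critical-point parameters follows directly from the equation above (a standard derivation, included here for illustration): requiring that the first and second derivatives of pressure with respect to volume vanish at the critical point gives

$$V_c = 3b, \qquad p_c = \frac{a}{27 b^2}, \qquad T_c = \frac{8a}{27 R b},$$

so measurements that fix $a$ and $b$ at accessible temperatures determine the critical constants.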
Biography
Early years and education
Johannes Diderik van der Waals was born on 23 November 1837 in Leiden in the Netherlands. He was the eldest of ten children born to Jacobus van der Waals and Elisabeth van den Berg. His father was a carpenter in Leiden. As was usual for all girls and working-class boys in the 19th century, he did not go to the kind of secondary school that would have given him the right to enter university. Instead he went to a school of “advanced primary education”, which he finished at the age of fifteen. He then became a teacher's apprentice in an elementary school. Between 1856 and 1861 he followed courses and gained the necessary qualifications to become a primary school teacher and head teacher.
In 1862, he began to attend lectures in mathematics, physics and astronomy at the university in his city of birth, although he was not qualified to be enrolled as a regular student in part because of his lack of education in classical languages. However, Leiden University had a provision that enabled outside students to take up to four courses a year. In 1863 the Dutch government started a new kind of secondary school (HBS, a school aiming at the children of the higher middle classes). Van der Waals—at that time head of an elementary school—wanted to become a HBS teacher in mathematics and physics and spent two years studying in his spare time for the required examinations.
In 1865, he was appointed as a physics teacher at the HBS in Deventer and in 1866, he received such a position in The Hague, which was close enough to Leiden to allow Van der Waals to resume his courses at the university there. In September 1865, just before moving to Deventer, Van der Waals married the eighteen-year-old Anna Magdalena Smit.
Professorship
Van der Waals still lacked the knowledge of the classical languages that would have given him the right to enter university as a regular student and to take examinations. However, it so happened that the law regulating the university entrance was changed and dispensation from the study of classical languages could be given by the minister of education. Van der Waals was given this dispensation and passed the qualification exams in physics and mathematics for doctoral studies.
At Leiden University, on 14 June 1873, he defended his doctoral thesis Over de Continuïteit van den Gas- en Vloeistoftoestand (On the continuity of the gaseous and liquid state) under Pieter Rijke. In the thesis, he introduced the concepts of molecular volume and molecular attraction.
In September 1877, Van der Waals was appointed the first professor of physics at the newly founded Municipal University of Amsterdam. Two of his notable colleagues were the physical chemist Jacobus Henricus van 't Hoff and the biologist Hugo de Vries. Until his retirement at the age of 70, Van der Waals remained at the University of Amsterdam. He was succeeded by his son Johannes Diderik van der Waals, Jr., who also was a theoretical physicist. In 1910, at the age of 72, Van der Waals was awarded the Nobel Prize in Physics. He died at the age of 85 on 8 March 1923.
Scientific work
The main interest of Van der Waals was in the field of thermodynamics. He was influenced by Rudolf Clausius's 1857 treatise entitled Über die Art der Bewegung, welche wir Wärme nennen (On the Kind of Motion which we Call Heat). Van der Waals was later greatly influenced by the writings of James Clerk Maxwell, Ludwig Boltzmann, and Willard Gibbs. Clausius's work led him to look for an explanation of Thomas Andrews's experiments that had revealed, in 1869, the existence of critical temperatures in fluids. He managed to give a semi-quantitative description of the phenomena of condensation and critical temperatures in his 1873 thesis, entitled Over de Continuïteit van den Gas- en Vloeistoftoestand (On the continuity of the gas and liquid state). This dissertation represented a hallmark in physics and was immediately recognized as such, e.g. by James Clerk Maxwell who reviewed it in Nature in a laudatory manner.
In this thesis he derived the equation of state bearing his name. This work gave a model in which the liquid and the gas phase of a substance merge into each other in a continuous manner. It shows that the two phases are of the same nature. In deriving his equation of state Van der Waals assumed not only the existence of molecules (the existence of atoms was disputed at the time), but also that they are of finite size and attract each other. Since he was one of the first to postulate an intermolecular force, however rudimentary, such a force is now sometimes called a Van der Waals force.
A second major discovery, in 1880, was the law of corresponding states, which showed that the Van der Waals equation of state can be expressed as a simple function of the critical pressure, critical volume, and critical temperature. This general form is applicable to all substances (see Van der Waals equation). The compound-specific constants a and b in the original equation are replaced by universal (compound-independent) quantities. It was this law which served as a guide during experiments which ultimately led to the liquefaction of hydrogen by James Dewar in 1898 and of helium by Heike Kamerlingh Onnes in 1908.
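In reduced variables $p_r = p/p_c$, $V_r = V_m/V_c$, $T_r = T/T_c$ (standard notation, shown here for illustration), the substitution eliminates the compound-specific constants entirely:

$$\left(p_r + \frac{3}{V_r^2}\right)\left(3 V_r - 1\right) = 8 T_r.$$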
In 1890, Van der Waals published a treatise on the Theory of Binary Solutions in the Archives Néerlandaises. By relating his equation of state with the second law of thermodynamics, in the form first proposed by Willard Gibbs, he was able to arrive at a graphical representation of his mathematical formulations in the form of a surface which he called Ψ (Psi) surface following Gibbs, who used the Greek letter Ψ for the free energy of a system with different phases in equilibrium.
Mention should also be made of Van der Waals's theory of capillarity, which in its basic form first appeared in 1893. In contrast to the mechanical perspective on the subject provided earlier by Pierre-Simon Laplace, Van der Waals took a thermodynamic approach. This was controversial at the time, since the existence of molecules and their permanent, rapid motion were not universally accepted before Jean Baptiste Perrin's experimental verification of Albert Einstein's theoretical explanation of Brownian motion.
Personal life
He married his wife Anna Magdalena Smit in 1865, and the couple had three daughters (Anne Madeleine, Jacqueline Elisabeth, Johanna Diderica) and one son, the physicist Johannes Diderik van der Waals Jr., who also worked at the University of Amsterdam. Jacqueline was a poet of some note. Van der Waals's nephew Peter van der Waals was a cabinet maker and a leading figure in the Sapperton, Gloucestershire school of the Arts and Crafts movement. His wife died of tuberculosis in 1881, at the age of 34. After becoming a widower Van der Waals never remarried; he was so shaken by the death of his wife that he did not publish anything for about a decade. He died in Amsterdam on 8 March 1923, one year after his daughter Jacqueline had died.
Honours
Van der Waals received numerous honors and distinctions, besides winning the 1910 Nobel Prize in Physics. He was awarded an honorary doctorate of the University of Cambridge; was made Honorary Member of the Imperial Society of Naturalists of Moscow, the Royal Irish Academy, and the American Philosophical Society (1916); Corresponding Member of the Institut de France and the Royal Academy of Sciences of Berlin; Associate Member of the Royal Academy of Sciences of Belgium; and Foreign Member of the Chemical Society of London, the National Academy of Sciences of the United States (1913), and of the Accademia dei Lincei of Rome. Van der Waals became a member of the Royal Netherlands Academy of Arts and Sciences in 1875. From 1896 until 1912, he was secretary of this society. He was furthermore elected as Honorary Member of the Netherlands Chemical Society in 1912.
Minor planet 32893 van der Waals is named in his honor.
See also
Van der Waals equation
Van der Waals strain
Van der Waals radius
Van der Waals force
Redlich–Kwong equation of state
Peng–Robinson equation of state
Notes
References
Citations
Sources
Further reading
Kipnis, A. Ya.; Yavelov, B. E.; Rowlinson, J. S. (trans.): Van der Waals and Molecular Science. (Oxford: Clarendon Press, 1996)
Sengers, Johanna Levelt: How Fluids Unmix: Discoveries by the School of Van der Waals and Kamerlingh Onnes. (Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen, 2002)
Shachtman, Tom: Absolute Zero and the Conquest of Cold. (Boston: Houghton Mifflin, 1999)
Van Delft, Dirk: Freezing Physics: Heike Kamerlingh Onnes and the Quest for Cold. (Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen, 2008)
Van der Waals, J. D.: Edited and Intro. J. S. Rowlinson: On the Continuity of the Liquid and Gaseous States. (New York: Dover Publications, 2004, 320pp)
External links
Scientists of the Dutch School Van der Waals, Royal Netherlands Academy of Arts and Sciences
Albert van Helden, "Johannes Diderik van der Waals 1837–1923", in: K. van Berkel, A. van Helden and L. Palm (eds.), A History of Science in the Netherlands. Survey, Themes and Reference (Leiden: Brill, 1999), 596–598.
Nobel Lecture, 12 December 1910: The Equation of State for Gases and Liquids
Museum Boerhaave
H.A.M. Snelders, Waals Sr., Johannes Diderik van der (1837–1923), in Biografisch Woordenboek van Nederland.
Biography of Johannes Diderik van der Waals (1837–1923) at the National Library of the Netherlands.
1837 births
1923 deaths
20th-century Dutch physicists
Dutch theoretical physicists
Thermodynamicists
Dutch Nobel laureates
Nobel laureates in Physics
Members of the American Philosophical Society
Members of the Royal Netherlands Academy of Arts and Sciences
Foreign associates of the National Academy of Sciences
Leiden University alumni
Academic staff of the University of Amsterdam
Scientists from Leiden
19th-century Dutch physicists
Technical debt
In software development and other information technology fields, technical debt (also known as design debt or code debt) is the implied cost of future reworking because a solution prioritizes expedience over long-term design.
As with monetary debt, if technical debt is not repaid, it can accumulate "interest", making it harder to implement changes. Unaddressed technical debt increases software entropy and the cost of further rework. Like monetary debt, technical debt is not necessarily a bad thing, and sometimes (e.g. as a proof-of-concept) it is required to move projects forward. On the other hand, some experts claim that the "technical debt" metaphor tends to minimize the ramifications, which results in insufficient prioritization of the necessary work to correct it.
As a change is started on a codebase, there is often the need to make other coordinated changes in other parts of the codebase or documentation. Changes required that are not completed are considered debt, and until paid, will incur interest on top of interest, making it cumbersome to build a project. Although the term is primarily used in software development, it can also be applied to other professions.
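As a rough illustration of this compounding (a toy model with invented numbers, not a measurement from any real project), the eventual cost of a deferred change can be sketched as interest accruing per release cycle:

```python
# Toy model of the "interest" metaphor for technical debt.
# All values are illustrative assumptions, not empirical data.

def rework_cost(principal_hours: float, rate: float, cycles: int) -> float:
    """Hours needed to pay off a deferred change after `cycles` release
    cycles, if each cycle of new work layered on the shortcut makes the
    eventual rework `rate` (e.g. 0.15 = 15%) more expensive."""
    return principal_hours * (1 + rate) ** cycles

# A 40-hour cleanup deferred for 8 cycles at 15% "interest" per cycle:
print(f"{rework_cost(40, 0.15, 8):.0f} hours")  # ~122 hours
```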
In a Dagstuhl seminar held in 2016, technical debt was defined by academic and industrial experts of the topic as follows: "In software-intensive systems, technical debt is a collection of design or implementation constructs that are expedient in the short term, but set up a technical context that can make future changes more costly or impossible. Technical debt presents an actual or contingent liability whose impact is limited to internal system qualities, primarily maintainability and evolvability."
Assumptions
The technical debt metaphor posits that an expedient design essentially reduces expense in the present, but causes extra expense in the future. This premise makes assumptions about the future:
That the product survives long enough to actually incur the future costs
That future events do not make the "long-term" design obsolete just as soon as the expedient design
That future advancements do not make reworking less expensive than present assumptions
Since the future is uncertain, it is possible that a perceived technical debt today may in fact look like a savings in the future. Although the debt scenario is considered more likely, the uncertainty further complicates design decisions.
Also, the calculation of technical debt typically considers the cost of employee work time, but a complete assessment should include other costs incurred or deferred by the design decision, such as training, licensing, tools, services, hardware, opportunity cost, etc.
Causes
Common causes of technical debt include:
Ongoing development, where a long series of project enhancements over time renders old solutions sub-optimal.
Insufficient up-front definition, where requirements are still being defined during development, so development starts before any design takes place. This is done to save time but often has to be reworked later.
Business pressures, where the business prioritizes releasing something sooner, before the necessary changes are complete, and thereby builds up technical debt involving those uncompleted changes.
Lack of process or understanding, where businesses are blind to the concept of technical debt and make decisions without considering the implications.
Tightly coupled components, where functions are not modular and the software is not flexible enough to adapt to changes in business needs.
Lack of a test suite, which encourages quick and risky band-aid bug fixes.
Lack of software documentation, where code is created without supporting documentation. The work to create documentation represents debt.
Lack of collaboration, where knowledge isn't shared around the organization and business efficiency suffers, or junior developers are not properly mentored.
Parallel development on multiple branches accrues technical debt because of the work required to merge the changes into a single source base. The more changes done in isolation, the more debt.
Deferred refactoring. As the requirements for a project evolve, it may become clear that parts of the code have become inefficient or difficult to edit and must be refactored in order to support future requirements. The longer refactoring is delayed, and the more code is added, the bigger the debt.
Lack of alignment to standards, where industry standard features, frameworks, and technologies are ignored. Eventually integration with standards will come and doing so sooner will cost less (similar to "delayed refactoring").
Lack of knowledge, when the developer doesn't know how to write elegant code.
Lack of ownership, when outsourced software efforts result in in-house engineering being required to refactor or rewrite outsourced code.
Poor technological leadership, where poorly thought out commands are handed down the chain of command.
Last-minute specification changes, which have the potential to percolate throughout a project but leave insufficient time or budget to document and test the changes.
Laziness, where employees might not be willing or incentivized to put extra effort into code readability and documentation.
Service or repay the technical debt
Kenny Rubin uses the following status categories:
Happened-upon technical debt—debt that the development team was unaware existed until it was exposed during the normal course of performing work on the product. For example, the team is adding a new feature to the product and in doing so it realizes that a work-around had been built into the code years before by someone who has long since departed.
Known technical debt—debt that is known to the development team and has been made visible using one of many approaches.
Targeted technical debt—debt that is known and has been targeted for servicing by the development team.
Consequences
"Interest payments" are caused by both the necessary local maintenance and the absence of maintenance by other users of the project. Ongoing development in the upstream project can increase the cost of "paying off the debt" in the future. One pays off the debt by simply completing the uncompleted work.
The buildup of technical debt is a major cause for projects to miss deadlines. It is difficult to estimate exactly how much work is necessary to pay off the debt. For each change that is initiated, an uncertain amount of uncompleted work is committed to the project. The deadline is missed when the project realizes that there is more uncompleted work (debt) than there is time to complete it in. To have predictable release schedules, a development team should limit the amount of work in progress in order to keep the amount of uncompleted work (or debt) small at all times.
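One way to make the link between work in progress and schedule predictability concrete (an illustration drawn from queueing theory, not from the source) is Little's law, $L = \lambda W$: average work in progress equals average throughput times average lead time, so

$$\text{average lead time} = \frac{\text{average work in progress}}{\text{average throughput}},$$

and keeping work in progress (including unrepaid debt) small keeps the time to finish any given change short and predictable.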
If enough work is completed on a project that it no longer presents a barrier to submission, then the project will be released while still carrying a substantial amount of technical debt. If this software reaches production, the risks of implementing any future refactors which might address the technical debt increase dramatically. Modifying production code carries the risk of outages, actual financial losses and possibly legal repercussions if contracts involve service-level agreements (SLA). For this reason, carrying technical debt into production can be viewed almost as an increase in the interest rate, one that decreases only when deployments are wound down and retired.
While Manny Lehman's Law already indicated that evolving programs continually add to their complexity and deteriorating structure unless work is done to maintain them, Ward Cunningham first drew the comparison between technical complexity and debt in a 1992 experience report.
In his 2004 text, Refactoring to Patterns, Joshua Kerievsky presents a comparable argument concerning the costs associated with architectural negligence, which he describes as "design debt".
Activities that might be postponed include documentation, writing tests, attending to TODO comments and tackling compiler and static code analysis warnings. Other instances of technical debt include knowledge that isn't shared around the organization and code that is too confusing to be modified easily.
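A minimal, hypothetical sketch of how such postponed activities surface in a codebase (the function and markers below are invented for illustration):

```python
# Hypothetical example: each marker is a small unit of technical debt.

def parse_price(raw: str) -> float:
    # TODO: document the accepted input formats
    # TODO: handle locale-specific separators such as "1.234,56"
    # FIXME: silently returns 0.0 on bad input instead of raising --
    #        a band-aid fix added under deadline pressure
    # XXX: no unit tests cover this function yet
    try:
        return float(raw.replace("$", "").replace(",", ""))
    except ValueError:
        return 0.0
```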
Writing about PHP development in 2014, Junade Ali offered a similar assessment.
Grady Booch compares the way cities evolve to the way software-intensive systems evolve, and notes how a lack of refactoring can lead to technical debt.
In open source software, postponing sending local changes to the upstream project is a form of technical debt.
See also
Code smell (symptoms of inferior code quality that can contribute to technical debt)
Big ball of mud
Bus factor
Escalation of commitment
Manumation
Overengineering
Shotgun surgery
Software entropy
Software rot
Spaghetti code
SQALE
Sunk cost
TODO, FIXME, XXX
References
External links
Ward Explains Debt Metaphor, video from Ward Cunningham
OnTechnicalDebt – an online community for discussing technical debt
Expert interviews on technical debt: Ward Cunningham, Philippe Kruchten, Ipek Ozkaya, Jean-Louis Letouzey
Steve McConnell discusses technical debt
TechnicalDebt, from Martin Fowler's Bliki
Averting a "Technical Debt" Crisis by Doug Knesek
"Get out of Technical Debt Now!", a talk by Andy Lester
Lehman's Law
Managing Technical Debt Webinar by Steve McConnell
Boundy, David, "Software cancer: the seven early warning signs", ACM SIGSOFT Software Engineering Notes, Vol. 18 No. 2 (April 1993), Association for Computing Machinery, New York, New York, US
Technical debt: investeer en voorkom faillissement by Colin Spoel
Technical debts: Everything you need to know
What is technical debt? from DeepSource blog
Metaphors
Software architecture
Software engineering terminology
Software maintenance
Digital product design
Digital product design is an iterative design process used to solve a functional problem with a formal solution. A digital product designer identifies an existing problem, offers the best possible solution, and launches it to a market that demonstrates demand for the particular solution. The field is considered a subset of product design. Some digital products have both digital and physical components (such as Nike+ and Fitbit), but the term is mainly used for products produced through software engineering. Since digital product design has become mainstream in the creative industry, a digital product designer is often simply referred to as a "product designer" in job postings.
Career path
Digital product design is a transdisciplinary field because a digital product designer sees the project from end to end: from recognizing an opportunity and understanding the customer's need through to final delivery. A digital product designer is an integral part of the creative team at every stage of the process, leading the UI and UX design throughout and contributing to product strategy with an entrepreneurial mindset to help bring the product to market. Although computer programming is not a required skill for a digital product designer, digital product designers need a high-level understanding of how all of the technical pieces work together in order to ensure that the creative vision is fully realized. Although part of a digital product designer's job is to oversee the look and feel of the product as well as the development of the software system, digital product design is a different creative process and career path from graphic design and web design.
Academic degree
Digital product design is a broadly interdisciplinary and transdisciplinary course of study combining the fields of computer technology, industrial design, entrepreneurship, marketing, and the humanities. Established as a modern degree addressing the need for cross-disciplinary education, one of its fundamental objectives is to develop lateral thinking skills that cut across more rigidly defined academic areas, which is recognized as a valuable component in expanding technological horizons.
Digital Product Design is part of the Advertising & Digital Design BFA program at Fashion Institute of Technology.
Digital Product Design is taught at Rhode Island School of Design.
Digital Product Design is a concentration in Communication Design MPS program at Parsons.
Digital Product Design is a certificate program at Pratt Institute.
References
Product design
Plakoridine A
Plakoridine A is an alkaloid isolated from the marine sponge Plakortis sp. There are three plakoridines known, named plakoridine A, B, and C.
References
Pyrrolidine alkaloids
Methyl esters
4-Hydroxyphenyl compounds
Ketenes
Marion McQuillan
Marion McQuillan (30 October 1921 – 24 June 1998) was a British metallurgist who specialised in the engineering uses for titanium and its alloys. She researched jet engine metals and was on the first team to research titanium for the Royal Aircraft Establishment Farnborough (RAE).
Biography
Marion Katherine Blight was born in Watford in 1921. Her mother worked in domestic service while her father was a shop assistant. McQuillan attended Wycombe High School before winning a scholarship to the Henrietta Barnett School. She went to university in 1939, graduating from Girton College, Cambridge with a degree in metallurgy and natural sciences. In 1942 she took her first job, at the Royal Aircraft Establishment Farnborough (RAE). McQuillan researched jet engine metals and was a member of the first team to research titanium.
In 1946 she travelled through Germany and Austria as member of one of the many teams sent by the British Intelligence Objectives Sub-Committee, collecting technical information from universities, research establishments and factories.
She also worked at the Atomic Energy Research Establishment at Harwell, addressing some of the early metallurgical problems of nuclear energy. From 1948 to 1951 she was at the Australian Royal Aircraft Establishment in Melbourne.
McQuillan returned to the UK and began to work for ICI Metals (also known as IMI) in the Titanium Alloy Research Department, where within two years she was head of the section. With her husband, McQuillan published the seminal book Titanium in 1956. During the 1960s McQuillan registered eight titanium alloy patents. In 1967 she was appointed technical director of the New Metals Division, and by 1978 she had become the first woman managing director of the Imperial Metal Industries subsidiary Enots.
Publications
Jun 1943. Further report on the use of aged chromate baths to specification DTD 911, Bath iii (30 minute hot chromate bath). Petch M K. RAE MR7147(A). Met/RTN/22
Feb 1944. Variations in corrosion properties over magnesium alloy sheet. Jones E R W Petch M K. RAE MR6858. Met/RTN/21, also in J. Inst. Metals, Nov. I946
Feb 1944. Protection of magnesium alloy sheet to specification DTD 118 by a modified form of the I.G. acid dip (bath iv of specification.DTD 911). Petch M K. RAE MR7588. Met/RTN/23
Mar 1944. Protection of magnesium alloys against corrosion by electrolytic chromate films. Petch M K. RAE MR3726(D). Met/RTN/17
Nov 1944. The protection of magnesium alloy components against corrosion by sprayed coatings of "Thickal" Latex. Petch M K. RAE MR7290. Met/RTN/22
1949. Some Observations on the Behaviour of Platinum/Platinum-Rhodium Thermocouples at High Temperatures. M K McQuillan. Journal of Scientific Instruments, Volume 26, Number 10
1956. Titanium - Metallurgy of the Rarer Metals – 4. by McQuillan MK.; Publisher: London, Butterworths, 1956.
1956. Titanium. McQuillan, A. D.; McQuillan, M. K.; Castle, J. G. Physics Today, vol. 9, issue 10, p. 24.
1956. Titanium. Alan Dennis McQuillan; Marion Katharine McQuillan. Publisher: New York : Academic Press ; London : Butterworths Scientific Publications, 1956.
1957. Titanium. Alan D McQuillan; Marion Katharine McQuillan. Publisher: London Butterworth [1957]
1958. Titan. Alan Denis McQuillan; Marion Katharine McQuillan; Sergej Georgievič Glazunov; Leonid Pavlovič Lužnikov.Language: Russian . Publisher: Moskva : Gosudarstvennoe Naučno-Tehničeskoe Izdatel'stvo Literatury po Černoj i Cvetnoj Metallurgii, 1958.
1978. McQuillan, Marion. Graduate Engineers in Production. Cranfield Inst of Tech, 1978.
1979. Graduate myth. Production Engineer (Volume: 58 , Issue: 4 , April 1979 )
Patents
GB772534A Improvements in or relating to titanium base alloys
CH457874A Verfahren zur Wärmebehandlung einer Titanlegierung
GB929931A Titanium-base alloys and their heat treatment
US3007824A Method of heat treating a ti-be alloy
US3118828A Electrode structure with titanium alloy base
FI35168A Sätt att framställa en elektrod
DE1112838B Verfahren zum Oberflaechenhaerten von (α+β) Ti-Legierungen
Awards
McQuillan was awarded the Rosenhain Medal in 1965. She was on the Interservices Metallurgical Research Council until 1989 and in 1967 served as vice-president of the Institute of Metals. In 1968 she played a fundamental role in the First International Conference on Titanium in London.
Personal life
She married fellow metallurgist Norman Petch, whom she met in Cambridge, but they divorced in 1944. She went on to marry the metallurgist Alan Dennis McQuillan in 1947. Her husband died in 1987. McQuillan died in Gloucestershire in 1998.
References
1921 births
1998 deaths
People from Watford
Metallurgists
People from Gloucestershire
People educated at Wycombe High School
Abstraction (mathematics)
Abstraction in mathematics is the process of extracting the underlying structures, patterns or properties of a mathematical concept, removing any dependence on real-world objects with which it might originally have been connected, and generalizing it so that it has wider applications or can be matched with other abstract descriptions of equivalent phenomena. In other words, to be abstract is to remove context and application. Two of the most highly abstract areas of modern mathematics are category theory and model theory.
Description
Many areas of mathematics began with the study of real world problems, before the underlying rules and concepts were identified and defined as abstract structures. For example, geometry has its origins in the calculation of distances and areas in the real world, and algebra started with methods of solving problems in arithmetic.
Abstraction is an ongoing process in mathematics and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract. For example, the first steps in the abstraction of geometry were historically made by the ancient Greeks, with Euclid's Elements being the earliest extant documentation of the axioms of plane geometry—though Proclus tells of an earlier axiomatisation by Hippocrates of Chios. In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry. Further steps in abstraction were taken by Lobachevsky, Bolyai, Riemann and Gauss, who generalised the concepts of geometry to develop non-Euclidean geometries. Later in the 19th century, mathematicians generalised geometry even further, developing such areas as geometry in n dimensions, projective geometry, affine geometry and finite geometry. Finally Felix Klein's "Erlangen program" identified the underlying theme of all of these geometries, defining each of them as the study of properties invariant under a given group of symmetries. This level of abstraction revealed connections between geometry and abstract algebra.
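As a concrete illustration (not drawn from the article itself), the notion of a group, which is central to Klein's program, abstracts away what the symmetries act on and keeps only the rules they obey: a set $G$ with an operation $\cdot$ satisfying

$$(a \cdot b) \cdot c = a \cdot (b \cdot c), \qquad e \cdot a = a \cdot e = a, \qquad a \cdot a^{-1} = a^{-1} \cdot a = e.$$

Rotations of a plane figure, permutations of a finite set, and invertible matrices all satisfy these axioms, so any theorem proved from them applies to each case at once.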
In mathematics, abstraction can be advantageous in the following ways:
It reveals deep connections between different areas of mathematics.
Known results in one area can suggest conjectures in another related area.
Techniques and methods from one area can be applied to prove results in other related areas.
Patterns from one mathematical object can be generalized to other similar objects in the same class.
On the other hand, abstraction can also be disadvantageous in that highly abstract concepts can be difficult to learn. A degree of mathematical maturity and experience may be needed for conceptual assimilation of abstractions.
Bertrand Russell, in The Scientific Outlook (1931), writes that "Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say."
See also
Abstract detail
Generalization
Abstract thinking
Abstract logic
Abstract algebraic logic
Abstract model theory
Abstract nonsense
Concept
Mathematical maturity
References
Further reading
Mathematical terminology
Abstraction
Colossal Typewriter
Colossal Typewriter by John McCarthy and Roland Silver was one of the earliest computer text editors. The program ran on the PDP-1 at Bolt, Beranek and Newman (BBN) by December 1960.
Around this time, both authors were associated with the Massachusetts Institute of Technology, but it is unclear whether the editor ran on the TX-0 on loan to MIT from Lincoln Laboratory or on the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation. A "Colossal Typewriter Program" is in the BBN Program Library and, under the same name, in the DECUS Program Library as BBN-6 (CT).
See also
Expensive Typewriter
TECO
RUNOFF
TJ-2
Notes
1960 software
Text editors
History of software
Huawei Mediapad M5
Huawei Mediapad M5 is a series of tablets designed and marketed by Huawei, comprising three models: an 8.4-inch model, a 10.8-inch model, and a 10.8-inch Pro model. Each model came in a Wi-Fi version and a Wi-Fi+LTE version.
For the larger variant, Huawei also includes a desktop mode that allows the user to switch the mobile interface to a traditional desktop interface; paired with a keyboard accessory, the tablet can work like a laptop.
The Huawei Mediapad M5 is a compact, high-performance Android tablet released in 2018, filling the gap left by Sony's exit from the tablet market.
In 2020, the successor M6 model was released, based on the 7 nm Kirin 980.
The M5 has a USB-C port; however, the port supports USB 2.0 rather than USB 3.0, so the tablet does not support HDMI output over USB. It does support screen mirroring.
References
Android (operating system) devices
Tablet computers introduced in 2018
Huawei products
Tablet computers
Progressing cavity pump
A progressing cavity pump is a type of positive displacement pump and is also known as a progressive cavity pump, progg cavity pump, eccentric screw pump or cavity pump. It transfers fluid by means of the progress, through the pump, of a sequence of small, fixed-shape, discrete cavities as its rotor is turned. This leads to the volumetric flow rate being proportional to the rotation rate (bidirectionally) and to low levels of shearing being applied to the pumped fluid.
These pumps have application in fluid metering and pumping of viscous or shear-sensitive materials. The cavities taper down toward their ends and overlap. As one cavity diminishes another increases, the net flow amount has minimal variation as the total displacement is equal. This design results in a flow with little to no pulse.
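A minimal sketch of the resulting flow behaviour (the geometry and slip values below are illustrative assumptions, not taken from any particular pump):

```python
# Ideal progressing cavity pump flow: each revolution displaces one
# fixed cavity volume, so flow is proportional to rotation speed.
# Values are illustrative assumptions.

def flow_rate_lpm(cavity_volume_l: float, rpm: float,
                  slip_lpm: float = 0.0) -> float:
    """Flow in litres/minute: displaced volume per revolution times
    speed, minus "slip" (leakage past the seal lines at high pressure)."""
    return cavity_volume_l * rpm - slip_lpm

print(flow_rate_lpm(0.25, 300))                # 75.0 L/min at zero slip
print(flow_rate_lpm(0.25, 300, slip_lpm=5.0))  # 70.0 L/min with leakage
```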
It is common for this equipment to be referred to by specific manufacturer or product names. Such names can vary between industries and even regionally; examples include Moineau (after the inventor, René Moineau).
The original four manufacturing licences were issued to Moyno (Americas), Mono (UK and Europe), Gardier (Belgium) and PCM.
A progressing cavity rotor and stator can also act as a motor (mud motor) when fluid is pumped through its interior. Applications include directional well drilling.
Theory
The progressing cavity pump normally consists of a helical rotor turning inside a stator whose internal cavity is a twin helix with twice the rotor's wavelength. The rotor seals tightly against the stator as it rotates, forming a set of fixed-size cavities in between.
The cavities move when the rotor is rotated but their shape or volume does not change. The pumped material is moved inside the cavities.
The principle of this pumping technique is frequently misunderstood. It is often believed to rely on a dynamic effect caused by drag, or friction against the moving teeth of the screw rotor. In reality it is due to the sealed cavities, like a piston pump, and so it has similar operational characteristics, such as being able to pump at extremely low rates, even to high pressure, revealing the effect to be purely positive displacement. The rotor "climbs" the inner cavity in an orbital manner.
At a high enough pressure the sliding seals between cavities will leak some fluid rather than pumping it, so when pumping against high pressures a longer pump with more cavities is more effective, since each seal has only to deal with the pressure difference between adjacent cavities. Pump design begins with two (to three) cavities per stage. The number of stages (currently up to 24) is only limited by the ability to machine the tooling.
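This staging behaviour can be summarized by an approximate rule (an illustration, not an exact law): if each inter-cavity seal line holds a pressure difference of about $\Delta p_{\text{seal}}$ before leaking, a pump with $N$ stages can develop roughly

$$\Delta p_{\max} \approx N \, \Delta p_{\text{seal}},$$

which is why longer pumps are used against higher pressures.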
When the rotor is rotated, it rolls/climbs around the inside surface of the hole. The motion of the rotor is the same as the planet gears of a planetary gear system. As the rotor simultaneously rotates and moves around, the combined motion of the eccentrically mounted drive shaft is in the form of a hypocycloid. In the typical case of single-helix rotor and double-helix stator, the hypocycloid is just a straight line. The rotor must be driven through a set of universal joints or other mechanisms to allow for the eccentricity.
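The hypocycloid can be written out explicitly (standard parametric form, included for illustration): a point on a circle of radius $r$ rolling inside a circle of radius $R$ traces

$$x(\theta) = (R - r)\cos\theta + r\cos\!\left(\tfrac{R-r}{r}\,\theta\right), \qquad y(\theta) = (R - r)\sin\theta - r\sin\!\left(\tfrac{R-r}{r}\,\theta\right),$$

and for $R = 2r$, the case of a single-helix rotor in a double-helix stator, this collapses to $x = R\cos\theta$, $y = 0$: a straight line, as stated above.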
The rotor takes a form similar to a corkscrew, and this, combined with the off-center rotary motion, leads to the alternative name: eccentric screw pump.
Different rotor shapes and rotor/stator pitch ratios exist, but are specialized in that they don't generally allow complete sealing, so reducing low speed pressure and flow rate linearity, but improving actual flow rates, for a given pump size, and/or the pump's solids handling ability.
Operation
In operation, progressing cavity pumps are fundamentally fixed flow rate pumps, like piston pumps and peristaltic pumps, and this type of pump needs a fundamentally different understanding than the types of pumps to which people are more commonly introduced, namely ones that can be thought of as generating pressure. This can lead to the mistaken assumption that all pumps can have their flow rates adjusted by using a valve attached to their outlet, but with this type of pump this assumption is a problem, since such a valve will have practically no effect on the flow rate and completely closing it will involve very high pressures being generated. To prevent this, pumps are often fitted with cut-off pressure switches, rupture discs (deliberately weak and easily replaced), or a bypass pipe that allows a variable amount of a fluid to return to the inlet. With a bypass fitted, a fixed flow rate pump is effectively converted to a fixed pressure one.
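The effect of a bypass can be seen with a simple valve model (an illustrative approximation, not from the source): if the pump delivers a fixed flow $Q_p$, the load draws $Q_L$, and the bypass passes $Q_b = C_v \sqrt{\Delta p}$, then at steady state $Q_b = Q_p - Q_L$ and the delivery pressure settles at

$$\Delta p = \left(\frac{Q_p - Q_L}{C_v}\right)^{2},$$

a bounded value, instead of rising without limit as it would against a fully closed valve.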
At the points where the rotor touches the stator, the surfaces are generally traveling transversely, so small areas of sliding contact occur. These areas need to be lubricated by the fluid being pumped (hydrodynamic lubrication). This can mean that more torque is required for starting, and if allowed to operate without fluid, called 'run dry', rapid deterioration of the stator can result.
While progressing cavity pumps offer long life and reliable service transporting thick or lumpy fluids, abrasive fluids will significantly shorten the life of the stator. However, slurries (particulates in a medium) can be pumped reliably if the medium is viscous enough to maintain a lubrication layer around the particles and so protect the stator.
Typical design
Specific designs involve the rotor of the pump being made of steel, coated with a smooth hard surface, normally chromium, with the body (the stator) made of a molded elastomer inside a metal tube body. The elastomer core of the stator forms the required complex cavities. The rotor is held against the inside surface of the stator by angled link arms, with bearings (immersed in the fluid) allowing it to roll around the inner surface (un-driven). Elastomer is used for the stator to simplify the creation of the complex internal shape, created by means of casting, which also improves the quality and longevity of the seals by progressively swelling due to absorption of water and/or other common constituents of pumped fluids. Elastomer/pumped fluid compatibility will thus need to be taken into account.
Two common designs of stator are the "equal-walled" and the "unequal-walled". The latter, having greater elastomer wall thickness at the peaks allows larger-sized solids to pass through because of its increased ability to distort under pressure. The former have a constant elastomer wall thickness and therefore exceed in most other aspects such as pressure per stage, precision, heat transfer, wear, and weight. They are more expensive due to the complex shape of the outer tube.
History
In 1930, René Moineau, a pioneer of aviation, while inventing a compressor for jet engines, discovered that this principle could also work as a pumping system. The University of Paris awarded René Moineau a doctorate of science for his thesis on “A new capsulism”. His pioneering dissertation laid the groundwork for the progressing cavity pump.
Typical application areas
Food and drink pumping
Oil pumping
Coal slurry pumping
Sewage and sludge pumping
Viscous chemical pumping
Stormflow screening
Downhole mud motors in oilfield directional drilling (it reverses the process, turning the hydraulic into mechanical power)
Limited energy well water pumping
Specific uses
Grout or cement pumping
Lubrication oil pumping
Marine diesel fuel pumping
Mining slurry pumping
Oilfield mud motors
References
External links
Progressing cavity pump (PCP) systems - Society of Petroleum Engineers PetroWiki article
Progressing Cavity Pumps - Oil Well Production Artificial Lift - The seminal textbook by Henri Cholet and Christian Wittrisch
Progressing cavity pumps - A description of progressing cavity pump technology and principle from one of the world's largest PCP manufacturers
Progressing cavity pump systems for artificial lift – Part 1: Pumps - ISO 15136-1:2009 Petroleum and natural gas industries
Progressive Cavity Pumps - A concise description of pump operation from the Food and Agriculture Organization
Pumps
Institute for Advanced Sustainability Studies
The Research Institute for Sustainability (RIFS) in Potsdam, previously known as the Institute for Advanced Sustainability Studies (IASS), is part of the Helmholtz Association, affiliated with the GFZ German Research Centre for Geosciences. RIFS collaborates with a range of stakeholders to address sustainability challenges, including researchers, governmental bodies, the private sector, and civil society. Its research covers areas such as climate change mitigation, sustainable governance, and cultural transformations in the Anthropocene. Additionally, RIFS promotes knowledge exchange through its Fellow Program, strengthening its sustainability initiatives.
Organization
The RIFS currently employs approximately 120 people from over 30 countries. In 2019 the Board of Directors was composed of the Institute's three Scientific Directors – Mark G. Lawrence, Patrizia Nanz and Ortwin Renn – and its Head of Administration, Jakob Meyer. The RIFS receives funding from the German Federal Ministry for Education and Research (85%) and the Federal State of Brandenburg (15%). The Institute's research program currently spans five areas: Democratic Transformations; Systemic Interdependencies: Nature, Technology, Society; Perceptions, Values, Orientation; Energy Systems and Societal Change; Governance for the Environment and Society. These research areas are supported in their work by a cross-cutting research area tasked with facilitating dialogue between science, policy-makers, and civil society actors.
History
The IASS (Now the RIFS) was founded in Potsdam, Germany, on 2 February 2009. Klaus Töpfer was the Institute's founding director. He led the Institute as its executive director until September 2015, together with scientific directors Carlo Rubbia (June 2010 – May 2015) and Mark G. Lawrence (from October 2011).
In January 2023, the IASS merged with the Helmholtz Association, Germany’s largest scientific organization, and was incorporated into the German Research Centre for Geosciences (GFZ), while retaining its scientific independence. The GFZ is Germany’s national research center for the study of the geosphere. Since then, the IASS has renamed itself the Research Institute for Sustainability.
Publications
The RIFS uses a number of publications formats to disseminate its findings and policy recommendations. These include:
RIFS Policy Briefs – Policy recommendations and assessments
RIFS Fact Sheets – Brief overviews of research relating to topics addressed by the Institute
RIFS Studies – Detailed research findings addressing a central issue
RIFS Working Paper – Interim research findings and interventions in current debates
Other publication formats include articles in scholarly journals, statements, monographs, and edited volumes. The institute also hosts a blog on its website.
References
External links
Research Institute for Sustainability
Institut für Klimawandel, Erdsystem und Nachhaltigkeit – Artikel im PotsdamWiki
Vorbild Princeton: Potsdam bekommt eine Denkfabrik für Klimaforschung – Spiegel Online, 30. Juni 2009
Climate change organizations
Environmental research institutes
Organizations established in 2009
Organisations based in Potsdam
2009 establishments in Germany
Fordism
Fordism is an industrial engineering and manufacturing system that serves as the basis of modern social and labor-economic systems that support industrialized, standardized mass production and mass consumption. The concept is named after Henry Ford. It is used in social, economic, and management theory about production, working conditions, consumption, and related phenomena, especially regarding the 20th century. It describes an ideology of advanced capitalism centered around the American socioeconomic systems in place in the post-war economic boom.
Overview
Fordism is "the eponymous manufacturing system designed to produce standardized, low-cost goods and afford its workers decent enough wages to buy them." It has also been described as "a model of economic expansion and technological progress based on mass production: the manufacture of standardized products in huge volumes using special purpose machinery and unskilled labor." Although Fordism was a method used to improve productivity in the automotive industry, the principle could be applied to any kind of manufacturing process. Major success stemmed from three major principles:
The standardization of the product (nothing is handmade, but everything is made through machines and molds by unskilled workers)
The employment of assembly lines, which use special-purpose tools and/or equipment to allow common-skilled workers to contribute to the finished product
Workers are paid higher "living" wages so that they can afford to purchase the products they make
The principles, coupled with a technological revolution during Henry Ford's time, allowed for his revolutionary form of labor to flourish. His assembly line was revolutionary, though not original, as it had previously been used in slaughterhouses. His most original contribution to the modern world was breaking down complex tasks into simpler ones with the help of specialised tools. Simpler tasks created interchangeable parts that could be used the same way every time. That allowed for great flexibility, creating an assembly line that could change its constituent components to meet the needs of the product being assembled. In reality, the assembly line had existed before Ford, although not with quite the same effectiveness as he would create. His real accomplishment was recognizing the potential by breaking it all down into its components, only to build it back up again in a more effective and productive combination, thereby producing an optimum method for the real world.
The major advantages of such a change was that it cut down on the manpower necessary for the factory to operate, and it deskilled the labour itself, cutting down on costs of production.
There are four levels of Fordism, as described by Bob Jessop.
Capitalist labour process: Through implementing highly organized, Taylorist methods of production, designed to produce higher output, output can be increased and workers fully utilized.
Accumulation regime: Under the adherence to a belief in a 'virtuous circle of growth,' by increasing productivity, wages rise resulting in higher productivity, demand, investment, and operational efficacy.
Social mode of economic regulation: Clarity is gained by analyzing the in/outflow of capital, both in micro- [wages, internal movement] and macro- [monetary body, commerciality, external relations].
Generic mode of 'societalization': Deciphering State's and company's roles in the day-to-day economic lifestyles and patterns of the workforce, their economic habits, and the regional impact.
Background
The Ford Motor Company was one of several hundred small automobile manufacturers that emerged between 1890 and 1910. After five years of producing automobiles, Ford introduced the Model T, which was simple and light but sturdy enough to drive on the country's primitive roads. The mass production of this automobile lowered its unit price, making it affordable for the average consumer. Furthermore, Ford substantially increased his workers' wages to combat rampant absenteeism and employee turnover, which approached 400% annually; this had the byproduct of giving workers the means to become customers, leading to massive consumption. In fact, the Model T surpassed all expectations, attaining a peak of 60% of the automobile output within the United States.
The production system that Ford exemplified involved synchronization, precision, and specialization within a company.
Ford and his senior managers did not use the word "Fordism" themselves to describe their motivations or worldview, which they did not consider an "ism". However, many contemporaries framed their worldview as one and applied the name Fordism to it.
History
The term gained prominence when it was used by Antonio Gramsci in 1934 in his essay "Americanism and Fordism" in his Prison Notebooks. Since then, it has been used by a number of writers on economics and society, mainly but not exclusively in the Marxist tradition.
According to historian Charles S. Maier, Fordism proper was preceded in Europe by Taylorism, a technique of labor discipline and workplace organization, based upon supposedly scientific studies of human efficiency and incentive systems. It attracted European intellectuals, especially in Germany and Italy, from the fin de siècle to World War I.
After 1918, however, the goal of Taylorist labor efficiency thought in Europe moved to "Fordism", the reorganization of the entire productive process by the moving assembly line, standardization, and the mass market. The grand appeal of Fordism in Europe was that it promised to sweep away all the archaic residues of precapitalist society, by subordinating the economy, society, and even the human personality to the strict criteria of technical rationality. The Great Depression blurred the utopian vision of American technocracy, but World War II and its aftermath revived the ideal.
Later, under the inspiration of Gramsci, Marxists picked up the Fordism concept in the 1930s and developed Post-Fordism in the 1970s. Robert J. Antonio and Alessandro Bonanno (2000) trace the development of Fordism and subsequent economic stages, from globalization to neoliberal globalization, during the 20th century, and emphasize the United States' role in globalization. "Fordism," for Gramsci, meant routine, intensified labor to promote production. Antonio and Bonanno argue that Fordism peaked in the post-World War II decades of American dominance and mass consumerism but collapsed under political and cultural attacks in the 1970s.
Advances in technology and the end of the Cold War ushered in a new "neoliberal" phase of globalization in the 1990s. Antonio and Bonanno further suggest that negative elements of Fordism, such as economic inequality, remained, allowing related cultural and environmental troubles, which inhibited America's pursuit of democracy to surface.
Historian Thomas Hughes has detailed how the Soviet Union, in the 1920s and the 1930s, enthusiastically embraced Fordism and Taylorism by importing American experts in both fields, as well as American engineering firms, to build parts of its new industrial infrastructure. The concepts of the Five-Year Plan and the centrally planned economy can be traced directly to the influence of Taylorism on Soviet thinking. Hughes quotes Joseph Stalin's Foundations of Leninism, which held up the combination of Russian revolutionary sweep with American efficiency as the essence of Leninism.
Hughes describes how, as the Soviet Union developed and grew in power, both the Soviets and the Americans chose to ignore or deny the contribution of American ideas and expertise. The Soviets did so because they wished to portray themselves as creators of their own destiny and not indebted to their rivals, while the Americans did so because they did not wish to acknowledge, during the Cold War, their part in creating a powerful rival.
Post-Fordism
The period after Fordism has been termed Post-Fordist and Neo-Fordist. The former implies that global capitalism has made a clean break from Fordism, including overcoming its inconsistencies, while the latter implies that elements of the Fordist regime of accumulation continued to exist. The Regulation School preferred the term After-Fordism (or the French Après-Fordisme) to denote that what comes after Fordism was, or is, not clear.
In Post-Fordist economies:
New information technologies are important.
Products are marketed to niche markets rather than in mass consumption patterns based on social class.
Service industries predominate over manufacturing.
The workforce is feminized.
Financial markets are globalized.
White collar creativity is needed.
Workers do not stay in one job for their whole lives.
'Just-in-time' systems in which products are manufactured after orders are placed.
Cultural references
The mass-produced robots in Karel Čapek's play R.U.R. have been described as representing "the traumatic transformation of modern society by the First World War and the Fordist assembly line."
A religion based on the worship of Henry Ford is a central feature of the technocracy in Aldous Huxley's Brave New World, where the principles of mass production are applied to the generation of people as well as to industry.
See also
Cognitive-cultural economy
Division of labour
High modernism
Manifest Destiny
Modern Times (film)
New Frontier
Post-Fordism
Progress
Scientism
Techno-progressivism
Technocentrism
Technological utopianism
References
Bibliography
Antonio, Robert J. and Bonanno, Alessandro. "A New Global Capitalism? From 'Americanism and Fordism' to 'Americanization-globalization.'" American Studies 2000 41 (2–3): 33–77.
Banta, Martha. Taylored Lives: Narrative Production in the Age of Taylor, Veblen, and Ford. U. of Chicago Press, 1993. 431 pp.
Baca, George. "Legends of Fordism." Social Analysis Fall 2004: 171–180.
Doray, Bernard (1988). From Taylorism to Fordism: A Rational Madness.
Holden, Len. "Fording the Atlantic: Ford and Fordism in Europe" in Business History Volume 47, #1 January 2005 pp. 122–127.
Hughes, Thomas P. (2004). American Genesis: A Century of Invention and Technological Enthusiasm 1870–1970. 2nd ed. The University of Chicago Press.
Jenson, Jane. "'Different' but Not 'Exceptional': Canada's Permeable Fordism," Canadian Review of Sociology and Anthropology, Vol. 26, 1989.
Koch, Max. (2006). Roads to Post-Fordism: Labour Markets and Social Structures in Europe.
Ling, Peter J. America and the Automobile: Technology, Reform, and Social Change chapter on "Fordism and the Architecture of Production"
Link, Stefan J. Forging Global Fordism: Nazi Germany, Soviet Russia, and the Contest over the Industrial Order (2020) excerpt
Maier, Charles S. "Between Taylorism and Technocracy: European Ideologies and the Vision of Industrial Productivity." Journal of Contemporary History (1970) 5(2): 27–61. Fulltext online at Jstor
Nolan, Mary. Visions of Modernity: American Business and the Modernization of Germany Oxford University Press, 1994 online
Mead, Walter Russell. "The Decline of Fordism and the Challenge to American Power." New Perspectives Quarterly; Summer 2004: 53–61.
Meyer, Stephen. (1981) "The Five Dollar Day: Labor Management and Social Control in the Ford Motor Company, 1908–1921" State University of New York Press.
Spode, Hasso. "Fordism, Mass Tourism and the Third Reich." Journal of Social History 38(2004): 127–155.
Pietrykowski, Bruce. "Fordism at Ford: Spatial Decentralization and Labor Segmentation at the Ford Motor Company, 1920–1950," Economic Geography, Vol. 71, (1995) 383–401 online
Roediger, David, ed. "Americanism and Fordism - American Style: Kate Richards O'Hare's 'Has Henry Ford Made Good?'" Labor History 1988 29(2): 241–252. Socialist praise for Ford in 1916.
Settis, Bruno. (2016) Fordismi. Storia politica della produzione di massa, Il Mulino, Bologna.
Shiomi, Haruhito and Wada, Kazuo. (1995). Fordism Transformed: The Development of Production Methods in the Automobile Industry Oxford University Press.
Tolliday, Steven and Zeitlin, Jonathan eds. (1987) The Automobile Industry and Its Workers: Between Fordism and Flexibility Comparative analysis of developments in Europe, Asia, and the United States from the late 19th century to the mid-1980s.
Watts, Steven. (2005). The People's Tycoon: Henry Ford and the American Century.
Williams, Karel, Colin Haslam and John Williams, "Ford versus 'Fordism': The Beginning of Mass Production?" Work, Employment & Society, Vol. 6, No. 4, 517–555 (1992). Stress on Ford's flexibility and commitment to continuous improvements.
Gielen, Pascal. (2009). The Murmuring of the Artistic Multitude. Global Art, Memory and Post-Fordism. Valiz: Amsterdam.
Production economics
Manufacturing
Social theories
Late modern economic history
Henry Ford
History of science and technology in the United States
Economic history of the United States
Modernity
Production and manufacturing | Fordism | Engineering | 2,640 |
70,395,602 | https://en.wikipedia.org/wiki/HD%20191829 | HD 191829 (HR 7714) is a solitary star located in the southern constellation Telescopium. It has an apparent magnitude of 5.632, making it faintly visible to the naked eye if viewed under ideal conditions. The star is situated at a distance of 710 light years but is receding with a heliocentric radial velocity of .
HD 191829 has a stellar classification of K4 III, indicating that the object is an ageing K-type giant. It has an angular diameter of , yielding a diameter 47 times that of the Sun at its estimated distance. At present it has 117% the mass of the Sun and shines at from its enlarged photosphere at an effective temperature of , giving it an orange glow. HD 191829 has a metallicity 135% that of the Sun and spins modestly with a projected rotational velocity of .
References
Telescopium
K-type giants
191829
7714
99747
Durchmusterung objects
Telescopii, 81 | HD 191829 | Astronomy | 207 |
69,710,383 | https://en.wikipedia.org/wiki/Materials%20Science%20and%20Engineering%20A | Materials Science and Engineering: A — Structural Materials: Properties, Microstructure and Processing is a peer-reviewed scientific journal. It is the section of Materials Science and Engineering dedicated to "theoretical and experimental studies related to the load-bearing capacity of materials as influenced by their basic properties, processing history, microstructure and operating environment" and is published monthly by Elsevier. The current editors-in-chief are H. W. Hahn (University of Oklahoma), E. J. Lavernia (Texas A&M University), and B. B. Wei (Northwestern Polytechnical University).
Abstracting and indexing
The journal is indexed and abstracted in the following bibliographic databases:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.4, ranking 9th out of 79 in the category 'Metallurgy & Metallurgical Engineering'.
References
External links
Physics review journals
Materials science journals
Elsevier academic journals
Academic journals established in 1993
English-language journals
Monthly journals | Materials Science and Engineering A | Materials_science,Engineering | 211 |
77,398,579 | https://en.wikipedia.org/wiki/Fausto%20Calderazzo | Fausto Calderazzo was an Italian inorganic chemist. He gained renown for numerous contributions in inorganic chemistry and organometallic chemistry. He was born in Parma on March 8, 1930, where his father served in the Royal Italian Army. He died in Pisa on June 1, 2014, at the age of 84.
Life and education
Calderazzo entered the University of Florence in November 1947. He worked in the laboratory of Luigi Sacconi. After compulsory military service, Calderazzo joined the research group of Giulio Natta, a future Nobel laureate, in Milan. He was a postdoctoral fellow with F. A. Cotton. His first independent position was at the Cyanamid European Research Institute in Geneva (1963–1968). There he was part of a team of future eminent scholars, including Carlo Floriani. For most of his career he was a professor at the University of Pisa.
Research
Metal carbonyl chemistry
While in Milan, he discovered V(CO)6. He made seminal contributions to the mechanism of migratory insertion reactions, with emphasis on the stereochemical course of the carbonylation of CH3Mn(CO)5. His team later (in Pisa) developed syntheses of Na[Nb(CO)6] and Na[Ta(CO)6].
He contributed to the study of so-called "non-classical" carbonyl compounds through work on gold complexes.
Metal halides
Calderazzo extended his interest in carbonyl chemistry to the halides of the late transition metals. His group reported Au4Cl8, a simple mixed-valence gold chloride.
Early metal complexes
Many of his contributions focused on early transition metals. He carried out research on carbon dioxide complexes.
Starting with the preparation of V(η6-1,3,5-C6H3Me3)2, his group prepared many bis-mesitylene derivatives.
Awards and titles
Calderazzo was a member of the editorial or advisory boards of international scientific journals. He was a member of the Société Royale de Chimie (1987), the Società Chimica Italiana, and the Accademia Nazionale dei Lincei (1989). He received the A. Miolati Award in Inorganic Chemistry in 1988 and the L. Sacconi Medal in 1998.
References
1930 births
2014 deaths
Inorganic chemists
Italian chemists | Fausto Calderazzo | Chemistry | 478 |
76,396,899 | https://en.wikipedia.org/wiki/GRB%20200522A | GRB 200522A is a gamma-ray burst with an associated luminous kilonova in the constellation Pisces. It was first observed in May 2020 and followed up by the Hubble Space Telescope. It is thought to be the result of one of the most energetic neutron star mergers ever recorded, and it was bright enough to be detected by Hubble from 5.4 billion light-years away.
Formation
GRB 200522A is believed to have formed when two neutron stars collided and merged, creating an extremely bright short gamma-ray burst. The emission was about 10 times brighter than predicted, releasing roughly 10,000 times more energy than the Sun will produce over its entire 10-billion-year lifetime. These findings, aided by the Hubble observations, suggest that the kilonova is masking an extremely massive and highly magnetized neutron star (a magnetar).
Reactions
Prominent astronomer and professor Wen-fai Fong said of the kilonova: "It's amazing to me that after 10 years of studying the same type of phenomenon, we can discover unprecedented behavior like this."
ADS stated "This is substantially lower than on-axis short GRB afterglow detections but is a factor of ≈8-17 more luminous than the kilonova of GW170817 and significantly more luminous than any kilonova candidate for which comparable observations exist."
See also
Gamma Ray
2020 in science
References
Astronomical objects discovered in 2020
Gamma-ray bursts
Hubble Space Telescope | GRB 200522A | Physics,Astronomy | 286 |
1,035,039 | https://en.wikipedia.org/wiki/Smooth%20number | In number theory, an n-smooth (or n-friable) number is an integer whose prime factors are all less than or equal to n. For example, a 7-smooth number is a number in which every prime factor is at most 7. Therefore, 49 = 7^2 and 15750 = 2 × 3^2 × 5^3 × 7 are both 7-smooth, while 11 and 702 = 2 × 3^3 × 13 are not 7-smooth. The term seems to have been coined by Leonard Adleman. Smooth numbers are especially important in cryptography, which relies on factorization of integers. 2-smooth numbers are simply the powers of 2, while 5-smooth numbers are also known as regular numbers.
Definition
A positive integer is called B-smooth if none of its prime factors are greater than B. For example, 1,620 has prime factorization 2^2 × 3^4 × 5; therefore 1,620 is 5-smooth because none of its prime factors are greater than 5. This definition includes numbers that lack some of the smaller prime factors; for example, both 10 and 12 are 5-smooth, even though they miss out the prime factors 3 and 5, respectively. All 5-smooth numbers are of the form 2^a × 3^b × 5^c, where a, b and c are non-negative integers.
The 3-smooth numbers have also been called "harmonic numbers", although that name has other more widely used meanings.
5-smooth numbers are also called regular numbers or Hamming numbers; 7-smooth numbers are also called humble numbers, and sometimes called highly composite, although this conflicts with another meaning of highly composite numbers.
Here, note that B itself is not required to appear among the factors of a B-smooth number. If the largest prime factor of a number is p then the number is B-smooth for any B ≥ p. In many scenarios B is prime, but composite numbers are permitted as well. A number is B-smooth if and only if it is p-smooth, where p is the largest prime less than or equal to B.
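To make the definition concrete, B-smoothness can be tested by trial division with every candidate divisor up to B. The following minimal Python sketch (the function name and structure are illustrative, not from any particular library) implements this test:

def is_smooth(m: int, B: int) -> bool:
    """Return True if every prime factor of m is at most B."""
    if m < 1:
        raise ValueError("m must be a positive integer")
    for d in range(2, B + 1):   # trial-divide by each candidate divisor <= B
        while m % d == 0:       # a composite d never divides here, because its
            m //= d             # prime factors have already been divided out
        if m == 1:
            return True
    return m == 1               # any remainder has a prime factor greater than B

# 1620 = 2^2 * 3^4 * 5 is 5-smooth; 702 = 2 * 3^3 * 13 is not 7-smooth
assert is_smooth(1620, 5) and not is_smooth(702, 7)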
Applications
An important practical application of smooth numbers is in the fast Fourier transform (FFT) algorithms (such as the Cooley–Tukey FFT algorithm), which operate by recursively breaking down a problem of a given size n into problems the size of its factors. By using B-smooth numbers, one ensures that the base cases of this recursion are small primes, for which efficient algorithms exist. (Large prime sizes require less-efficient algorithms such as Bluestein's FFT algorithm.)
5-smooth or regular numbers play a special role in Babylonian mathematics. They are also important in music theory (see Limit (music)), and the problem of generating these numbers efficiently has been used as a test problem for functional programming.
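Generating the regular numbers in ascending order is the classic test problem alluded to above. A short Python sketch of the standard three-pointer merge (illustrative only; the names are hypothetical):

def hamming(count: int) -> list[int]:
    """Return the first `count` 5-smooth (regular/Hamming) numbers in order."""
    h = [1]
    i2 = i3 = i5 = 0  # indices of the next values to be multiplied by 2, 3, 5
    while len(h) < count:
        nxt = min(2 * h[i2], 3 * h[i3], 5 * h[i5])
        h.append(nxt)
        # advance every pointer that produced nxt, so duplicates are skipped
        if nxt == 2 * h[i2]: i2 += 1
        if nxt == 3 * h[i3]: i3 += 1
        if nxt == 5 * h[i5]: i5 += 1
    return h

print(hamming(10))  # [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]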
Smooth numbers have a number of applications to cryptography. While most applications center around cryptanalysis (e.g. the fastest known integer factorization algorithms, for example: the general number field sieve), the VSH hash function is another example of a constructive use of smoothness to obtain a provably secure design.
Distribution
Let Ψ(x, y) denote the number of y-smooth integers less than or equal to x (the de Bruijn function).
If the smoothness bound B is fixed and small, there is a good estimate for Ψ(x, B):

Ψ(x, B) ≈ (1/π(B)!) ∏_{p ≤ B} (log x / log p),

where π(B) denotes the number of primes less than or equal to B.
Otherwise, define the parameter u as u = log x / log y: that is, x = y^u. Then,

Ψ(x, y) = x · ρ(u) · (1 + o(1)),

where ρ(u) is the Dickman function.
For any k, almost all natural numbers will not be k-smooth.
If m = m₁m₂, where m₁ is B-smooth and m₂ is not (or is equal to 1), then m₁ is called the B-smooth part of m. The relative size of the x^(1/u)-smooth part of a random integer less than or equal to x is known to decay much more slowly than ρ(u).
Powersmooth numbers
Further, m is called n-powersmooth (or n-ultrafriable) if every prime power p^k dividing m satisfies p^k ≤ n.
For example, 720 (2^4 × 3^2 × 5^1) is 5-smooth but not 5-powersmooth (because there are several prime powers greater than 5, e.g. 3^2 = 9 and 2^4 = 16). It is 16-powersmooth since its greatest prime power factor is 2^4 = 16. The number is also 17-powersmooth, 18-powersmooth, etc.
Unlike n-smooth numbers, for any positive integer n there are only finitely many n-powersmooth numbers; in fact, the n-powersmooth numbers are exactly the positive divisors of the least common multiple of 1, 2, 3, …, n. For example, the 9-powersmooth numbers (also the 10-powersmooth numbers) are exactly the positive divisors of 2520.
n-smooth and n-powersmooth numbers have applications in number theory, such as in Pollard's p − 1 algorithm and ECM. Such applications are often said to work with "smooth numbers," with no n specified; this means the numbers involved must be n-powersmooth, for some unspecified small number n. As n increases, the performance of the algorithm or method in question degrades rapidly. For example, the Pohlig–Hellman algorithm for computing discrete logarithms has a running time of O(n^(1/2)) for groups of n-smooth order.
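Because the n-powersmooth numbers are exactly the divisors of lcm(1, …, n), the powersmooth condition can be checked with a single divisibility test. A minimal Python sketch of this (an illustration of the definition, not code from any cryptographic library):

from math import lcm

def is_powersmooth(m: int, n: int) -> bool:
    """Return True if every prime power dividing m is <= n,
    which holds exactly when m divides lcm(1, 2, ..., n)."""
    return lcm(*range(1, n + 1)) % m == 0

# 720 = 2^4 * 3^2 * 5 is 16-powersmooth but not 5-powersmooth
assert is_powersmooth(720, 16) and not is_powersmooth(720, 5)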
Smooth over a set A
Moreover, m is said to be smooth over a set A if there exists a factorization of m where the factors are powers of elements in A. For example, since 12 = 4 × 3, 12 is smooth over the sets A1 = {4, 3}, A2 = {2, 3}, and ; however, it would not be smooth over the set A3 = {3, 5}, as 12 contains the factor 4 = 2^2, and neither 4 nor 2 is in A3.
Note the set A does not have to be a set of prime factors, but it is typically a proper subset of the primes as seen in the factor base of Dixon's factorization method and the quadratic sieve. Likewise, it is what the general number field sieve uses to build its notion of smoothness, under the homomorphism .
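A brute-force membership test for this generalized notion follows directly from the definition. The Python sketch below (a hypothetical helper, exponential in the worst case and meant only for small examples) asks whether m can be written as a product of elements of A:

def smooth_over(m: int, A: set[int]) -> bool:
    """Return True if m is a product of (powers of) elements of A."""
    if m == 1:
        return True
    # try peeling off one factor from A and recursing on the quotient
    return any(a > 1 and m % a == 0 and smooth_over(m // a, A) for a in A)

assert smooth_over(12, {4, 3}) and smooth_over(12, {2, 3})
assert not smooth_over(12, {3, 5})  # the factor 4 = 2^2 is unreachable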
See also
Highly composite number
Rough number
Round number
Størmer's theorem
Unusual number
Notes and references
Bibliography
G. Tenenbaum, Introduction to analytic and probabilistic number theory, (AMS, 2015)
A. Granville, Smooth numbers: Computational number theory and beyond, Proc. of MSRI workshop, 2008
External links
The On-Line Encyclopedia of Integer Sequences (OEIS)
lists B-smooth numbers for small Bs:
2-smooth numbers: A000079 (2^i)
3-smooth numbers: A003586 (2^i·3^j)
5-smooth numbers: A051037 (2^i·3^j·5^k)
7-smooth numbers: A002473 (2^i·3^j·5^k·7^l)
11-smooth numbers: A051038 (etc...)
13-smooth numbers: A080197
17-smooth numbers: A080681
19-smooth numbers: A080682
23-smooth numbers: A080683
Analytic number theory
Integer sequences | Smooth number | Mathematics | 1,513 |
33,054,163 | https://en.wikipedia.org/wiki/Kepler-12 | Kepler-12 is a star with a transiting planet Kepler-12b in a 4-day orbit.
Characteristics
Kepler-12, known also as KIC 11804465 in the Kepler Input Catalog, is an early G-type to late F-type star. This corresponds to a Sun-like dwarf star nearing the end of the main sequence, about to become a red giant. Kepler-12 is located approximately away from Earth. The star has an apparent magnitude of 13.438, which means that it cannot be seen from Earth with the unaided eye.
The star is slightly more massive, slightly more iron-rich and slightly hotter than the Sun. However, Kepler-12 is larger, with a radius of 1.483 times the Sun's radius.
Planetary system
The one currently known planet is a hot Jupiter with a radius 1.7 times that of Jupiter but less than half the mass.
References
Draco (constellation)
G-type main-sequence stars
Planetary systems with one confirmed planet
Planetary transit variables
20 | Kepler-12 | Astronomy | 214 |
56,353 | https://en.wikipedia.org/wiki/Linear%20span | In mathematics, the linear span (also called the linear hull or just span) of a set S of elements of a vector space V is the smallest linear subspace of V that contains S. It is the set of all finite linear combinations of the elements of S, and the intersection of all linear subspaces that contain S. It is often denoted span(S) or ⟨S⟩.
For example, in geometry, two linearly independent vectors span a plane.
To express that a vector space V is a linear span of a subset S, one commonly uses one of the following phrases: S spans V; S is a spanning set of V; V is spanned or generated by S; S is a generator set or a generating set of V.
Spans can be generalized to many mathematical structures, in which case the smallest substructure containing S is generally called the substructure generated by S.
Definition
Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. It is thus the smallest (for set inclusion) subspace of V containing S. It is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W.
It follows from this definition that the span of S is the set of all finite linear combinations of elements (vectors) of S, and can be defined as such. That is,

span(S) = { λ₁v₁ + λ₂v₂ + ⋯ + λₖvₖ : k ∈ ℕ, v₁, …, vₖ ∈ S, λ₁, …, λₖ ∈ K }.

When S is empty, the only possibility is k = 0, and the previous expression for span(S) reduces to the empty sum. The standard convention for the empty sum implies that span(∅) = {0}, a property that is immediate with the other definitions. However, many introductory textbooks simply include this fact as part of the definition.

When S = {v₁, …, vₙ} is finite, one has

span(S) = { λ₁v₁ + λ₂v₂ + ⋯ + λₙvₙ : λ₁, …, λₙ ∈ K }.
Examples
The real vector space ℝ³ has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of ℝ³.
Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, , 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent.
The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of ℝ³, since its span is the space of all vectors in ℝ³ whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). Thus, the spanned space is not ℝ³. It can be identified with ℝ² by removing the third components equal to zero.
The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of all possible vector spaces in ℝ³, and {(0, 0, 0)} is the intersection of all of these vector spaces.
The set of monomials x^n, where n is a non-negative integer, spans the space of polynomials.
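Whether a particular vector lies in the span of a finite set can be checked numerically by comparing matrix ranks: v is in the span exactly when appending v as an extra column does not increase the rank. A small NumPy sketch (illustrative only, and subject to the usual floating-point caveats):

import numpy as np

def in_span(v, spanning_set) -> bool:
    """True if v is a linear combination of the vectors in spanning_set."""
    A = np.array(spanning_set, dtype=float).T   # vectors become columns
    extended = np.column_stack([A, v])          # append v as one more column
    return np.linalg.matrix_rank(extended) == np.linalg.matrix_rank(A)

# (1, 1, 0) lies in the span of (1, 0, 0) and (0, 1, 0); (0, 0, 1) does not
print(in_span([1, 1, 0], [[1, 0, 0], [0, 1, 0]]))  # True
print(in_span([0, 0, 1], [[1, 0, 0], [0, 1, 0]]))  # False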
Theorems
Equivalence of definitions
The set of all linear combinations of a subset S of V, a vector space over K, is the smallest linear subspace of V containing S.
Proof. We first prove that span S is a subspace of V. Since S is a subset of V, we only need to prove the existence of a zero vector 0 in span S, that span S is closed under addition, and that span S is closed under scalar multiplication. Letting S = {v₁, v₂, …, vₙ}, it is trivial that the zero vector of V exists in span S, since 0 = 0v₁ + 0v₂ + ⋯ + 0vₙ. Adding together two linear combinations of S also produces a linear combination of S: (λ₁v₁ + ⋯ + λₙvₙ) + (μ₁v₁ + ⋯ + μₙvₙ) = (λ₁ + μ₁)v₁ + ⋯ + (λₙ + μₙ)vₙ, where all λᵢ, μᵢ ∈ K, and multiplying a linear combination of S by a scalar c ∈ K will produce another linear combination of S: c(λ₁v₁ + ⋯ + λₙvₙ) = cλ₁v₁ + ⋯ + cλₙvₙ. Thus span S is a subspace of V.
Suppose that W is a linear subspace of V containing S. It follows that S ⊆ span S, since every vᵢ is a linear combination of S (trivially). Since W is closed under addition and scalar multiplication, every linear combination λ₁v₁ + ⋯ + λₙvₙ must be contained in W. Thus, span S is contained in every subspace of V containing S, and the intersection of all such subspaces, or the smallest such subspace, is equal to the set of all linear combinations of S.
Size of spanning set is at least size of linearly independent set
Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.
Proof. Let S = {v₁, …, vₘ} be a spanning set and W = {w₁, …, wₙ} be a linearly independent set of vectors from V. We want to show that m ≥ n.
Since S spans V, the set S ∪ {w₁} must also span V, and w₁ must be a linear combination of S. Thus S ∪ {w₁} is linearly dependent, and we can remove one vector from S that is a linear combination of the other elements. This vector cannot be any of the wᵢ, since W is linearly independent. The resulting set is {w₁, v₁, …, vₘ₋₁} (renumbering the remaining vᵢ if necessary), which is a spanning set of V. We repeat this step n times, where the resulting set after the pth step is the union of {w₁, …, wₚ} and m − p vectors of S.
It is ensured until the nth step that there will always be some vᵢ to remove out of S for every newly adjoined wᵢ, and thus there are at least as many vᵢ's as there are wᵢ's; that is, m ≥ n. To verify this, we assume by way of contradiction that m < n. Then, at the mth step, we have the set {w₁, …, wₘ} and we can adjoin another vector wₘ₊₁. But, since {w₁, …, wₘ} is a spanning set of V, wₘ₊₁ is a linear combination of w₁, …, wₘ. This is a contradiction, since W is linearly independent.
Spanning set can be reduced to a basis
Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension. This also indicates that a basis is a minimal spanning set when V is finite-dimensional.
Generalizations
Generalizing the definition of the span of points in space, a subset X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set.
The vector space definition can also be generalized to modules. Given an R-module A and a collection of elements a₁, …, aₙ of A, the submodule of A spanned by a₁, …, aₙ is the sum of cyclic modules

Ra₁ + ⋯ + Raₙ = { k₁a₁ + ⋯ + kₙaₙ : kᵢ ∈ R }

consisting of all R-linear combinations of the elements aᵢ. As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset.
Closed linear span (functional analysis)
In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set.
Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E is the intersection of all the closed linear subspaces of X which contain E.
One mathematical formulation of this is

cl Sp(E) = { u ∈ X : for every ε > 0 there exists an x ∈ Sp(E) with ‖x − u‖ < ε },

where Sp(E) denotes the linear span of E and cl Sp(E) its closed linear span.
The closed linear span of the set of functions x^n on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the L² norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials.
Notes
The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span.
Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma).
A useful lemma
Let X be a normed space and let E be any non-empty subset of X. Then the closed linear span of E coincides with the closure of the linear span of E.
(So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)
See also
Affine hull
Conical combination
Convex hull
Footnotes
Citations
Sources
Textbooks
Lay, David C. (2021) Linear Algebra and Its Applications (6th Edition). Pearson.
Web
External links
Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org.
Abstract algebra
Linear algebra | Linear span | Mathematics | 1,743 |
15,067,791 | https://en.wikipedia.org/wiki/CHRNB1 | Acetylcholine receptor subunit beta is a protein that in humans is encoded by the CHRNB1 gene.
The muscle acetylcholine receptor is composed of five subunits: two alpha subunits and one beta, one gamma, and one delta subunit. This gene encodes the beta subunit of the acetylcholine receptor. The acetylcholine receptor changes conformation upon acetylcholine binding leading to the opening of an ion-conducting channel across the plasma membrane. Mutations in this gene are associated with slow-channel congenital myasthenic syndrome.
See also
Nicotinic acetylcholine receptor
References
Further reading
External links
Ion channels
Nicotinic acetylcholine receptors | CHRNB1 | Chemistry | 149 |
48,522,026 | https://en.wikipedia.org/wiki/Hygrophoropsis%20bicolor | Hygrophoropsis bicolor is a species of fungus in the family Hygrophoropsidaceae. Found in Japan, it was described as new to science in 1963 by Tsuguo Hongo.
References
External links
Hygrophoropsidaceae
Fungi described in 1963
Fungi of Japan
Fungus species | Hygrophoropsis bicolor | Biology | 67 |
50,285,278 | https://en.wikipedia.org/wiki/Polycomb%20recruitment%20in%20X%20chromosome%20inactivation | X chromosome inactivation (XCI) is a phenomenon that has been selected during evolution to balance X-linked gene dosage between XX females and XY males.
Phases
XCI is usually divided into two phases: the establishment phase, when gene silencing is reversible, and the maintenance phase, when gene silencing becomes irreversible. During the establishment phase of X Chromosome Inactivation (XCI), Xist RNA, the master regulator of this process, is monoallelically upregulated; it spreads in cis along the future inactive X (Xi), relocates to the nuclear periphery, and recruits repressive chromatin-remodelling complexes. Among these, Xist recruits proteins of the Polycomb repressive complexes. Whether Xist directly recruits Polycomb repressive complex 2 (PRC2) to the chromatin, or this recruitment is the consequence of Xist-mediated changes to the chromatin, has been the object of intense debate.
Mechanism
Some studies showed that PRC2 components are not associated with Xist RNA or do not interact functionally. However, another study has shown, by means of mass spectrometry analysis, that two subunits of PRC2 may interact with Xist, although these proteins are also found in other complexes and are not unique components of the PRC2 complex.
PRC2 binds the A-repeat (RepA) of Xist RNA directly and with very high affinity (dissociation constants of 10–100 nanomolar), supporting Xist-mediated recruitment of PRC2 to the X chromosome. However, it is not clear whether such interactions occur in vivo under physiological conditions. Failure to turn up PRC2 proteins in functional screens may be due to cells not being able to survive or compete without PRC2, or to incomplete screens. Two super-resolution microscopy analyses have presented differing views: one showed that Xist and PRC2 are spatially separated, while the other showed that Xist and PRC2 are tightly linked. It is possible that several mechanisms recruit PRC2 in parallel, including direct Xist-mediated recruitment, adaptor proteins, chromatin changes, RNA polymerase II exclusion, or PRC1 recruitment. For instance, PRC2 recruitment is linked to PRC1-mediated H2AK119 ubiquitination in differentiating embryonic stem cells (ESCs), where PRC1 recruitment is mediated by hnRNP K and the Xist B-repeat (RepB). In fully differentiated cells, PRC2 recruitment seems to be dependent on Xist RepA. It is possible that alternative and complementary pathways, such as phase separation, work to establish PRC2 recruitment on the X in different experimental systems and during different stages of development.
References
Cell biology | Polycomb recruitment in X chromosome inactivation | Biology | 584 |
9,944 | https://en.wikipedia.org/wiki/Episome | An episome is a special type of plasmid, which remains as a part of the eukaryotic genome without integration. Episomes manage this by replicating together with the rest of the genome and subsequently associating with metaphase chromosomes during mitosis. Episomes do not degrade, unlike standard plasmids, and can be designed so that they are not epigenetically silenced inside the eukaryotic cell nucleus. Episomes can be observed in nature in certain types of long-term infection by adeno-associated virus or Epstein-Barr virus. In 2004, it was proposed that non-viral episomes might be used in genetic therapy for long-term change in gene expression.
As of 1999, there were many known sequences of DNA (deoxyribonucleic acid) that allow a standard plasmid to become episomally retained. One example is the S/MAR sequence.
The length of episomal retention is fairly variable between different genetic constructs and there are many known features in the sequence of an episome which will affect the length and stability of genetic expression of the carried transgene. Among these features is the number of CpG sites which contribute to epigenetic silencing of the transgene carried by the episome.
Mechanism of episomal retention
The mechanism behind episomal retention in the case of S/MAR episomes is generally still uncertain.
As of 1985, in the case of latent Epstein-Barr virus infection, episomes seemed to be associated with nuclear proteins of the host cell through a set of viral proteins.
Episomes in prokaryotes
Episomes in prokaryotes are special sequences which can divide either separate from or integrated into the prokaryotic chromosome.
References
Molecular biology | Episome | Chemistry,Biology | 381 |
31,324,619 | https://en.wikipedia.org/wiki/C9H10N2 |
The molecular formula C9H10N2 (molar mass: 146.19 g/mol, exact mass: 146.0844 u) may refer to:
5,6-Dimethylbenzimidazole
Isomyosamine
Myosmine | C9H10N2 | Chemistry | 71 |
66,711,636 | https://en.wikipedia.org/wiki/Entoloma%20mougeotii | Entoloma mougeotii is a species of fungus belonging to the family Entolomataceae.
Synonym:
Eccilia mougeotii Fr., 1873 (= basionym)
References
Entolomataceae
Fungus species | Entoloma mougeotii | Biology | 52 |
32,791,926 | https://en.wikipedia.org/wiki/Howard%20Harry%20Rosenbrock | Howard Harry Rosenbrock (16 December 1920 – 21 October 2010) was a leading figure in control theory and control engineering. He was born in Ilford, England in 1920, and graduated in 1941 from University College London with a first-class honours degree in Electrical Engineering. He served in the Royal Air Force during World War II. He received his PhD from the University of London in 1955. After some time spent at Cambridge University and MIT, he was awarded a Chair at the University of Manchester Institute of Science and Technology, where he founded the Control Systems Centre. He died on 21 October 2010.
Prof Rosenbrock received many awards including the IEE Premium, the IEE Heaviside Premium, and the IEE Control Achievement Award, the first IEEE Control Systems Science and Engineering Award (1982), the Rufus Oldenburger Medal (1994) and the IChemE Moulton Medal. He was a Fellow of the IEE, the IChemE, the Institute of Measurement and Control, the Royal Academy of Engineering and the Royal Society.
Howard Rosenbrock was a pioneer of multivariable frequency domain control design methods. He also made important contributions to the numerical solution of stiff differential equations and in the development of parameter optimization methods, both known as Rosenbrock methods. The Rosenbrock function is a benchmark test for numerical optimization algorithms.
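For illustration, the standard two-dimensional form of the benchmark is f(x, y) = (1 − x)² + 100(y − x²)², whose global minimum at (1, 1) sits in a long, narrow, curved valley that is hard for naive descent methods to follow. A minimal Python sketch using SciPy (the starting point is the conventional one; the details are illustrative):

import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    """Classic two-dimensional Rosenbrock 'banana' function."""
    x, y = p
    return (1 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

# Nelder-Mead must creep along the curved valley floor toward (1, 1).
result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x)  # approximately [1. 1.]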
See also
Rosenbrock function
Rosenbrock system matrix
Rosenbrock methods
References
1920 births
2010 deaths
Alumni of University College London
Alumni of the University of London
Control theorists
Royal Air Force personnel of World War II
Fellows of the Royal Society
British expatriates in the United States | Howard Harry Rosenbrock | Engineering | 324 |
18,928,265 | https://en.wikipedia.org/wiki/NGC%206200 | NGC 6200 is an open cluster in the constellation Ara, lying close to the galactic equator. It contains one β Cephei variable.
References
External links
NGC 6200
6200
Ara (constellation) | NGC 6200 | Astronomy | 42 |
4,098,234 | https://en.wikipedia.org/wiki/Sort%20%28Unix%29 | In computing, sort is a standard command line program of Unix and Unix-like operating systems, that prints the lines of its input or concatenation of all files listed in its argument list in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input is taken as sort key. Blank space is the default field separator. The command supports a number of command-line options that can vary by implementation. For instance the "-r" flag will reverse the sort order.
History
A command that invokes a general sort facility was first implemented within Multics. Later, it appeared in Version 1 Unix. This version was originally written by Ken Thompson at AT&T Bell Laboratories. By Version 4 Thompson had modified it to use pipes, but sort retained an option to name the output file because it was used to sort a file in place. In Version 5, Thompson invented "-" to represent standard input.
The version of sort bundled in GNU coreutils was written by Mike Haertel and Paul Eggert. This implementation employs the merge sort algorithm.
Similar commands are available on many other operating systems; for example, a sort command is part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.
The command has also been ported to the IBM i operating system.
Syntax
sort [OPTION]... [FILE]...
With no FILE, or when FILE is -, the command reads from standard input.
Parameters
Examples
Sort a file in alphabetical order
$ cat phonebook
Smith, Brett 555-4321
Doe, John 555-1234
Doe, Jane 555-3214
Avery, Cory 555-4132
Fogarty, Suzie 555-2314
$ sort phonebook
Avery, Cory 555-4132
Doe, Jane 555-3214
Doe, John 555-1234
Fogarty, Suzie 555-2314
Smith, Brett 555-4321
Sort by number
The -n option makes the program sort according to numerical value. The du command produces output that starts with a number, the file size, so its output can be piped to sort to produce a list of files sorted by (ascending) file size:
$ du /bin/* | sort -n
4 /bin/domainname
24 /bin/ls
102 /bin/sh
304 /bin/csh
The find command with the -ls option prints file sizes in the 7th field, so a list of the files sorted by file size is produced by:
$ find . -name "*.tex" -ls | sort -k 7n
Columns or fields
Use the -k option to sort on a certain column. For example, use "-k 2" to sort on the second column. In old versions of sort, the +1 option made the program sort on the second column of data (+2 for the third, etc.). This usage is deprecated.
$ cat zipcode
Adam 12345
Bob 34567
Joe 56789
Sam 45678
Wendy 23456
$ sort -k 2n zipcode
Adam 12345
Wendy 23456
Bob 34567
Sam 45678
Joe 56789
Sort on multiple fields
The -k m,n option lets you sort on a key that is potentially composed of multiple fields (start at column m, end at column n):
$ cat quota
fred 2000
bob 1000
an 1000
chad 1000
don 1500
eric 500
$ sort -k2,2n -k1,1 quota
eric 500
an 1000
bob 1000
chad 1000
don 1500
fred 2000
Here the first sort is done using column 2. -k2,2n specifies sorting on the key starting and ending with column 2, and sorting numerically. If -k2 is used instead, the sort key would begin at column 2 and extend to the end of the line, spanning all the fields in between. -k1,1 dictates breaking ties using the value in column 1, sorting alphabetically by default. Note that bob and chad have the same quota and are sorted alphabetically in the final output.
Sorting a pipe delimited file
$ sort -k2,2 -k1,1 -t'|' zipcode
Adam|12345
Wendy|23456
Sam|45678
Joe|56789
Bob|34567
Sorting a tab delimited file
Sorting a file with tab separated values requires a tab character to be specified as the column delimiter. This illustration uses the shell's dollar-quote notation
to specify the tab as a C escape sequence.
$ sort -k2,2 -t $'\t' phonebook
Doe, John 555-1234
Fogarty, Suzie 555-2314
Doe, Jane 555-3214
Avery, Cory 555-4132
Smith, Brett 555-4321
Sort in reverse
The -r option just reverses the order of the sort:
$ sort -rk 2n zipcode
Joe 56789
Sam 45678
Bob 34567
Wendy 23456
Adam 12345
Sort in random
The GNU implementation has a -R --random-sort option based on hashing; this is not a full random shuffle because it will sort identical lines together. A true random sort is provided by the Unix utility shuf.
Sort by version
The GNU implementation has a -V --version-sort option which is a natural sort of (version) numbers within text. Two text strings that are to be compared are split into blocks of letters and blocks of digits. Blocks of letters are compared alpha-numerically, and blocks of digits are compared numerically (i.e., skipping leading zeros, more digits means larger, otherwise the leftmost digits that differ determine the result). Blocks are compared left-to-right and the first non-equal block in that loop decides which text is larger. This happens to work for IP addresses, Debian package version strings and similar tasks where numbers of variable length are embedded in strings.
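The block-splitting comparison described above can be approximated in a few lines. A Python sketch of such a natural-sort key (an illustration of the idea, not GNU sort's actual implementation):

import re

def version_key(s: str):
    """Split s into alternating non-digit and digit blocks; digit blocks
    compare numerically, which also skips their leading zeros."""
    return [int(block) if block.isdigit() else block
            for block in re.split(r"(\d+)", s)]

names = ["file10", "file2", "file1"]
print(sorted(names))                   # ['file1', 'file10', 'file2']
print(sorted(names, key=version_key))  # ['file1', 'file2', 'file10']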
See also
Collation
List of Unix commands
uniq
shuf
References
Further reading
External links
Original Sort manpage The original BSD Unix program's manpage
Further details about sort at Softpanorama
Computing commands
Sorting algorithms
Unix text processing utilities
Unix SUS2008 utilities
Plan 9 commands
Inferno (operating system) commands
IBM i Qshell commands | Sort (Unix) | Mathematics,Technology | 1,299 |
52,530,344 | https://en.wikipedia.org/wiki/Airborne%20Internet | Airborne Internet is a system that overlays network theory and principles into the transportation realm. Its goal is to establish seamless information connectivity between ground based infrastructure and airborne entities. To reach that goal, the system aims to create a scalable, general purpose, multi-application data channel for people in transit.
Airborne Internet is a technology that has the potential to integrate and support a myriad of activities, in both the cockpit and cabin environments. The original concept proposed in 1999 suggested an open system with a scalable architecture: one that is a general purpose, multi-application data channel, for all communications, navigation and surveillance exchanges. Airborne Internet sees all the participating aircraft acting as air-to-air relays, each operating in a peer-to-peer relationship with other aircraft, and supporting the network, even if an aircraft is not consuming bandwidth for its own purposes. Every aircraft is a node on the network.
Airborne Internet for the cockpit offers many possibilities: digital air traffic communications that enables the pilot to have better access to digital information sources, air transport operations and administration, enhanced weather information, 4-dimensional trajectory flight plan management from the air traffic control system, safety and security. Safety will be enhanced when the flight crew is better able to access information sources faster than before. Digital verification techniques can be employed to ensure the security of the information. Airborne Internet also provides the potential to be used by the Federal Air Marshals, airline operations, and flight crew for security information purposes. Aircraft maintenance functionality can use the network to provide important status information to the air carrier.
Airborne Internet for the cabin offers communications for passengers, in-flight entertainment, and other non-critical information sources.
The end-state Airborne Internet system is envisioned as a network of ground stations, specially equipped aircraft, satellites and unmanned aircraft systems to carry two-way broadband communications traffic to aircraft for use by passengers, operators and air traffic control centers.
It has the potential to change how aircraft are monitored and tracked by air traffic control systems, and how they exchange information with and about other aircraft (peer-to-peer). Critical information such as weather, turbulence, and landing conditions can be exchanged, as well as the distance between aircraft and the ground. This information becomes even more critical for aircraft that are beyond line of sight range. There would also be the capability to allow aircraft passengers to go online to check their e-mails, pay bills and surf the web without interference with radio and aircraft control signals.
Background
The fundamental premise of Airborne Internet is that network capability to aircraft will improve the way operators of aircraft and the National Air Space will handle information, as well as non-air traffic functionality. Various commercial solutions have emerged, but these solutions are all satellite-based and only work with a single aircraft. None of these existing satellite solutions provide aircraft-to-aircraft connectivity. An early implementable network connectivity solution was needed that would allow all aircraft types to participate in, and join the network: transport, regional, biz jet, GA, and even helicopters. Aircraft information flow will remain stove-piped (in each unique system) unless a ubiquitous network solution for aircraft is determined. The assumptions made for ground networks do not apply to Airborne networking links.
History
Multiple sources had been working on the general concept of network connectivity for aircraft in the 1990s, including the U.S. military and its contractors. One of the earliest suggestions of what came to be known as “Airborne Internet” took place in July 1999 at a NASA Small Aircraft Transportation System (SATS) Planning Conference. The Federal Aviation Administration’s Ralph Yost suggested a civil system for airborne network connectivity that started as a supporting technology for SATS. The name “Airborne Internet” was actually coined by NASA’s Dr. Bruce Holmes, then the Program Manager for SATS, who conveyed it to Yost. Although it was ultimately used by SATS in their multi-aircraft, high-volume operations flight demonstration at Danville, VA, NASA chose not to invest further in the development of Airborne Internet. Because NASA declined to pursue it, Yost cultivated his original concept and subsequently started the Federal Aviation Administration’s own Airborne Internet research project at the FAA William J. Hughes Technical Center in Atlantic City, N.J. (The Airborne Internet capability that supported SATS was subsequently the winner of NASA's "Turning Goals Into Reality" Mobility Award for revolutionizing aviation.)
Yost started (and still owns) the web site www.AirborneInternet.com. Yost then went on to form the Airborne Internet Collaboration Group (AICG), which matured into the Airborne Internet Consortium (AIC). Once the AIC was formed, it was handed over to interested corporate entities to manage, and government participation was withdrawn.
Although the program was originally called “Airborne Internet,” the “Internet” moniker was not well received internally by FAA management. The name of the FAA R&D Airborne Internet program was subsequently changed by Yost to “Airborne Networking.” The name change appeased FAA management and added synchronization with similar efforts by the U.S. military. All mentions and publications about “Airborne Internet” or “Airborne Networking” most likely refer to the same research program initiated and conducted by Yost.
Yost worked with two early developers of Airborne Internet capabilities, each with completely different approaches and different operational capabilities. Each company had similar ideas about air-to-air networking, but implemented them in completely different ways.
The first system in the FAA’s Airborne Internet R&D program was developed by Project Management Enterprises Inc. (PMEI), of Bethesda, Md., headed by Prasad Nair. It was used by all aircraft in NASA’s SATS multi-aircraft flight demonstration conducted at Danville VA.
The PMEI system utilized a standard aviation VHF radio channel, and therefore was a low bandwidth system. But PMEI had smartly developed their networking capability to uniquely work in the low bandwidth radio, including the ability to report aircraft position to every other aircraft on the network. They further refined network capability and applications that allowed weather, and other useful information, to very effectively function in the low bandwidth VHF radios. The PMEI system, supporting a narrowband 25Khz channel and a 19 kbit/sec link, combined a standard aircraft omnidirectional VHF antenna with a small multichannel data radio using network protocols, and offered an additional voice channel that could be used simultaneously. Internal GPS could optionally be used to provide own-ship position data, which could then be shared (as a simple application) with other network users to enhance situational awareness. The system connected with a standard local area network (LAN) on the aircraft.
In contrast to the PMEI low bandwidth approach, the second system in the FAA’s Airborne Internet R&D program was developed by AeroSat (now Astronics Aerosat) of Manchester, New Hampshire, and provided very high bandwidth. It included a single, high-gain directional antenna, for long-range connectivity, and two omnidirectional units, for use over ranges of about 100 nm. This combination supported two TCP/IP data communications options: 90 Mbit/sec – that is, 45 Mbit/sec in each direction in the Ka and Ku-bands - for aircraft in the network “backbone,” and a 1-2 Mbit/sec L-band link that allowed secondary aircraft to access the backbone. The concept of operations brought by Aerosat was to establish a very high backbone network between aircraft, then have lower bandwidth aircraft connect (directly or relay) into the backbone. Based upon the early flight tests conducted, Aerosat estimated that only 8 aircraft would be needed to extend the network over the Atlantic from shore-to-shore.
Airborne Internet Flight Tests
Important proof-of-concept flight tests were conducted at the FAA William J. Hughes Technical Center, in late July 2006, using the system developed by PMEI. These critical tests successfully demonstrated a “beyond line-of-sight” relay capability, where data communications took place at a distance greater than the curvature of the Earth normally allows for direct line-of-sight radio communications. This capability was achieved by establishing network connectivity between a distant aircraft, an intermediate-placed aircraft, and a ground station.
The Airborne Internet project was the first to conduct flight tests in the FAA Technical Center’s Bombardier Global 5000 Business Jet. The “flying laboratory” was equipped with multiple Airborne Internet capabilities. Two aircraft, a ground station and ground-based communication support networks were used in the flight tests. The project engineers successfully relayed messages and simulated 4-dimensional flight planning information from one aircraft to another, and then to the ground station, over an extended airborne network. In fact, an e-mail message was successfully sent to 172 people during one of the flight tests, from 140 miles out over the ocean.
As of Jul 2006, this was the first-ever civil aviation (non-military) successful proof-of-concept flight test of this kind, conducted in the world.
Airborne Internet technology offered potential solid support for the FAA’s NexGen air traffic management system, which required implementation of 4-dimensional trajectory flight planning. The Airborne Internet technology has strong potential to enhance future oceanic communications, resolving communications and location problems currently faced in the oceanic environment.
Next, the FAA Airborne Internet program conducted flight testing of its early Airborne Internet systems by testing both system capabilities simultaneously. The flight tests increased with complexity, and utilized multiple aircraft as network nodes and multiple ground stations. Ground stations were located in Bethesda MD and the FAA William J Hughes Technical Center in Atlantic City N.J. The FAA collected data for each system during these flights which proved that both the low bandwidth PMEI and high bandwidth Aerosat systems were viable and ready for operational commercialization.
Capabilities Proven by the FAA’s Airborne Internet R&D Program
“Shared Situational Awareness” and net-centric operations
Position reporting, broad-area broadband, data and voice, security, responsiveness, and user-tailored information
Real-time free flow of information from private, commercial, and government sources
Push/pull processes, secured according to needs and priorities
Common awareness of day-to-day operations, events, and crises
Aircraft as additional “nodes” in the network
Integrated surveillance system across government
Complete interoperability for ALL classes of aircraft
Beyond Line of Sight (BLOS) relay
Voice over Internet Protocol (VoIP)
Full support for the operational needs of the FAA’s NexGen program for Network Enabled Access (NexGen Con Ops 2.0, Jun 2007, Joint Program Development Office (JPDO))
See also
Air traffic control radar beacon system
GPS, a system using satellites for communication to Earth
References
External links
Airborne Internet
Project Loon
Airborne Internet Blogspot
Avionics
Aviation communications | Airborne Internet | Technology | 2,219 |
39,265,695 | https://en.wikipedia.org/wiki/Stimulus%20filtering | Stimulus filtering occurs when an animal's nervous system fails to respond to stimuli that would otherwise cause a reaction to occur. The nervous system has developed the capability to perceive and distinguish between minute differences in stimuli, which allows the animal to only react to significant impetus. This enables the animal to conserve energy as it is not responding to unimportant signals.
Adaptive value
The proximate causes of stimulus filtering can be many things in and around an animal's environment, but the ultimate cause of this response may be the evolutionary advantage offered by stimulus filtering. An animal that saves energy by not responding to unnecessary stimuli may have increased fitness, which means that it would be able to produce more offspring, whereas an animal that does not filter stimuli may have reduced fitness due to depleted energy stores. An animal that practices stimulus filtering may also be more likely to respond appropriately to serious threats than an animal that is distracted by unimportant stimuli.
Physiological mechanism
When particular signals are received by the animal, the superior-ranking neurons determine which signals are important enough to preserve and which signals are insignificant and can be ignored. This process essentially works as a filter as the synapses of the neural network enhance certain signals and repress others, with simple stimuli receiving attention from lower-level neurons, and more complicated stimuli receiving attention from higher level neurons.
Relation to humans
Stimulus filtering is also seen in humans on a day-to-day basis. The cocktail party effect refers to the situation where people in a crowded room tend to ignore other conversations and just focus on the one they are participating in. This effect also works in that when an individual hears their name in another's conversation they immediately focus on that conversation.
Examples
Moths
The evolution of a moth's auditory system has helped it escape bat echolocation. A moth has two ears, one on each side of the thorax, which receive the ultrasonic signals of bat vocalizations; the sound vibrates the membranes of the moth's ears at one of two auditory receptors, A1 or A2, which are attached to the tympanum. Intense sound pressure waves sweep over the moth's body, causing the tympanum to vibrate and deforming these receptor cells. This opens stretch-sensitive channels in the cell membrane and provides the effective stimulus for a moth auditory receptor. These receptors work in the same way that most neurons do, by responding to the energy contained in selected stimuli and changing the permeability of their cell membranes to positively charged ions. Even though the A1 and A2 receptors work in a similar fashion, there are significant differences between them. The A1 receptor is the main bat detector: it is sensitive to low-intensity sounds, and as its rate of firing increases the moth turns away from the bat to reduce the sonar echo. To determine the relative position of the bat, the A1 cells on either side of the moth's head fire at differential rates; if the bat is farther away, the cells receive a weaker signal and fire at a slower rate. The A2 receptor is the emergency back-up system, initiating erratic flight movements as a last-ditch effort to evade capture. This differential sensitivity of the A1 and A2 sensory neurons leads to stimulus filtering of the bat vocalizations. Long-distance evasion tactics are engaged when the bat is far away and only the A1 sensory neurons fire; when the bat is in extremely close range, short-distance evasion tactics are engaged with the use of the A2 sensory neurons. The adaptive value of the physiological mechanism of two distinct receptors lies in aiding the evasion of capture by bats.
Parasitoid flies
Female flies of the genus Ormia ochracea possess organs in their bodies that can detect frequencies of cricket sounds from meters away. This process is important for the survival of their species because females will lay their first instar larvae into the body of the cricket, where they will feed and molt for approximately seven days. After this period, the larvae grow into flies and the cricket usually perishes.
Researchers were puzzled about how such precise hearing ability could arise from so small an ear structure. Most animals detect and locate sounds using the interaural time difference (ITD) and the interaural level difference (ILD). The ITD is the difference in the time it takes sound to reach each ear. The ILD is the difference in sound intensity measured between the two ears. In the fly, at maximum, the ITD would only reach about 1.5 microseconds and the ILD would be less than one decibel. These small values make it hard to sense the differences. To solve these issues, researchers studied the mechanical aspects of the flies' ears. They found that the flies have a presternum structure linking both tympanal membranes that is critical in detecting and localizing sound. The structure acts as a lever by transferring and amplifying vibrational energy between the membranes. After sound hits the membranes at different amplitudes, the presternum sets up symmetrical vibration modes through bending and rocking. This effect helps the nervous system distinguish which side the sound is coming from. Because the presternum acts as an intertympanal bridge, the ITD is effectively increased from 1.5 µs to 55 µs and the ILD from less than one decibel to over 10 decibels.
When looking at the nervous systems of flies, researchers found three auditory afferents. Type one fires only one spike to the stimulus onset, has low jitter (variability in timing over stimulus presentations), no spontaneous activity, and is the most common type. Type two fires two to four spikes to the stimulus onset, has increased jitter with subsequent spikes, and has low spontaneous activity. Finally, type three has tonic spiking to the presented stimulus, has low jitter only with the first spikes, has low spontaneous activity, and is the least common type. Researchers discovered that neurons responded the strongest to sound frequencies between 4 and 9 kHz, which includes the frequencies present in cricket songs. Also, neurons were found to have responded strongest at 4.5 kHz, which is the frequency of the Gryllus song. Despite the type of auditory afferent, all observed neurons revealed an inverse/latency relationship. The stronger the stimulus, the shorter the time until the neuron begins to respond. The difference in the number of afferents above the threshold on a side of the animal is called population code and can be used to account for sound localization.
Midshipman fish
Female midshipman fish undergo stimulus filtering when it comes time to mate with a male. Midshipman fish use stimulus filtering when listening to sounds produced by underwater species. Dominant underwater signals range between 60 and 120 Hz, the range to which the fish's auditory receptors are normally most sensitive. However, the female auditory system changes seasonally in its sensitivity to the acoustical stimuli in the songs of male midshipman fish. In the summer, when female midshipman fish are reproducing, they listen for the male's humming song, which can reach a frequency of about 400 Hz. Because summer is the females' breeding season, their hearing becomes more sensitive to the high frequency of the male's humming.
References
Further reading
Sensory systems
Ethology | Stimulus filtering | Biology | 1,522 |
17,744,469 | https://en.wikipedia.org/wiki/Thallium%20barium%20calcium%20copper%20oxide | Thallium barium calcium copper oxide, or TBCCO (pronounced "tibco"), is a family of high-temperature superconductors having the generalized chemical formula TlmBa2Can−1CunO2n+m+2.
Tl2Ba2Ca2Cu3O10 (TBCCO-2223) was discovered in Prof. Allen M. Hermann's laboratory in the physics department of the University of Arkansas in October 1987 by the post-doctoral researcher Zhengzhi Sheng and Prof. Hermann. Bulk superconductivity in this material was confirmed by observations of magnetic flux expulsion and flux trapping signals (under zero-field-cooled and field-cooled conditions) with a SQUID magnetometer in the superconductor laboratory of Timir Datta at the University of South Carolina. Allen Hermann announced the discovery, and the critical temperature of 127 K, in Houston, Texas, at the World Congress on Superconductivity organized by Paul Chu in February 1988.
The first series of the Tl-based superconductors, containing one Tl–O layer, has the general formula TlBa2Can-1CunO2n+3, whereas the second series, containing two Tl–O layers, has the formula Tl2Ba2Can-1CunO2n+4 with n = 1, 2, and 3. In the structure of Tl2Ba2CuO6 (Tl-2201), there is one CuO2 layer with the stacking sequence (Tl–O) (Tl–O) (Ba–O) (Cu–O) (Ba–O) (Tl–O) (Tl–O). In Tl2Ba2CaCu2O8 (Tl-2212), there are two Cu–O layers with a Ca layer in between. As in the Tl2Ba2CuO6 structure, Tl–O layers are present outside the Ba–O layers. In Tl2Ba2Ca2Cu3O10 (Tl-2223), there are three CuO2 layers with a Ca layer between each pair. In Tl-based superconductors, Tc is found to increase with the number of CuO2 layers. However, the value of Tc decreases after four CuO2 layers in TlBa2Can-1CunO2n+3, and in the Tl2Ba2Can-1CunO2n+4 compounds it decreases after three CuO2 layers.
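Because the series is a simple homologous family, the general formula can be expanded mechanically. The short Python sketch below is an illustration only (not part of any published tooling) and expands TlmBa2Can-1CunO2n+m+2 for the members discussed above:

```python
def tbcco_formula(m: int, n: int) -> str:
    """Expand the generalized formula Tl_m Ba2 Ca_(n-1) Cu_n O_(2n+m+2)."""
    parts = ["Tl" if m == 1 else f"Tl{m}", "Ba2"]
    if n - 1 == 1:
        parts.append("Ca")
    elif n - 1 > 1:
        parts.append(f"Ca{n - 1}")
    parts.append("Cu" if n == 1 else f"Cu{n}")
    parts.append(f"O{2 * n + m + 2}")
    return "".join(parts)

# One Tl-O layer (m = 1) and two Tl-O layers (m = 2), for n = 1, 2, 3.
for m in (1, 2):
    for n in (1, 2, 3):
        print(f"Tl-{m}2{n - 1}{n}:", tbcco_formula(m, n))
# e.g. Tl-2201 -> Tl2Ba2CuO6, Tl-2212 -> Tl2Ba2CaCu2O8, Tl-2223 -> Tl2Ba2Ca2Cu3O10
```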
See also
Cuprate superconductors
Bismuth strontium calcium copper oxide
Yttrium barium copper oxide
Lanthanum barium copper oxide
References
Copper Oxide Superconductors, by Charles P. Poole, Timir Datta, and Horacio A. Farach, John Wiley & Sons, 1988
Superconductivity: Its Historical Roots and Development from Mercury to the Ceramic Oxides, by Per Fridtjof Dahl, AIP, New York, 1st ed., 1992
External links
Thallium-based high-temperature superconductors
High-temperature superconductors
Thallium compounds
Barium compounds
Calcium compounds
Copper(II) compounds
Oxides | Thallium barium calcium copper oxide | Chemistry | 668 |
4,600,283 | https://en.wikipedia.org/wiki/National%20Wind%20Institute | The National Wind Institute (NWI) at Texas Tech University (TTU) was established in December 2012, and is intended to serve as Texas Tech University's intellectual hub for interdisciplinary and transdisciplinary research, commercialization and education related to wind science, wind energy, wind engineering and wind hazard mitigation and serves faculty affiliates, students, and external partners.
In 2003, with support from the National Science Foundation, the first interdisciplinary Ph.D. program dedicated to wind science and engineering was developed. Later, the Texas Wind Energy Institute (TWEI) was established as a partnership between TTU and Texas State Technical College, designed to develop education and career pathways to meet the workforce and educational needs of the expanding wind energy industry. It is funded in part by the Texas Workforce Commission.
In an effort to streamline and to promote synergy, both WiSE and TWEI have now integrated to form the National Wind Institute.
NWI organizes and administers large multi-dimensional TTU wind-related research projects and serves as the contact point for major project sponsors and other external partners.
History
The Wind Science and Engineering (WiSE) Research Center was established in 1970 as the Institute for Disaster Research, following the F5 Lubbock tornado that caused 26 fatalities and over $100 million in damage. Following the tornado, the WiSE center developed the first comprehensive wind engineering report of its kind. In 2006, the Enhanced Fujita scale was developed at TTU to update the original Fujita scale, first introduced in 1971. In 2003, with support from the National Science Foundation, the first interdisciplinary Ph.D. program dedicated to wind science and engineering was developed. Later, the Texas Wind Energy Institute (TWEI) was established as a partnership between TTU and Texas State Technical College, designed to develop education and career pathways to meet the workforce and educational needs of the expanding wind energy industry. A bachelor's degree program in Wind Energy opened in Spring 2012 and now has more than 100 students in the process of completing the degree requirements.
Both WiSE and TWEI have now integrated to form the National Wind Institute (NWI).
Research Facilities
The Texas Tech campus hosts the NWI's administrative offices and the Boundary Layer Wind Tunnel and Wind Library.
Facilities at the campus consist of:
The Boundary Layer Wind Tunnel, a closed wind tunnel whose test section is capable of producing a wide range of wind speeds.
The Debris Impact Facility
The Pulsed Jet Wind Tunnel, which is used to simulate thunderstorm downbursts.
The NWI's Wind Library, which hosts one of the largest collections of wind related material in the world. The collection includes Ted Fujita's papers, reports and photographs, which were donated by the Fujita family and the University of Chicago. The library also includes documentation of more than 100 windstorms.
NWI research at the Reese Technology Center
The National Wind Institute occupies indoor laboratory space and a large 67-acre field test site at the Reese Technology Center. Some of the facilities housed at the Reese Center include:
A 200-meter data acquisition tower, used to measure and record atmospheric conditions at ten levels.
The Scaled Wind Farm Technology (SWiFT) Facility
A weather balloon facility.
VorTECH, a tornado vortex simulator.
The Wind Engineering Research Field Laboratory (WERFL)
The Reese Center is also home to several radar systems including a SODAR, Low Level Profiler, and the SMART-R Mobile Radar.
Research
Wind energy
Some of the National Wind Institute's wind energy research goals are the assessment of the risk and effects on wind turbine exposure to extreme wind events, the improvement of wind turbine design codes with emphasis on extreme wind events, and the analysis and testing of utility-scale wind turbines for use in less-energetic wind conditions. The NWI is also focused on the identification of advanced wind-driven water treatment and desalination systems for municipal and other applications, as well as the full-scale testing of wind-driven water desalination systems and the development of modeling codes for integrated wind-water desalination systems.
Debris impact
The NWI Debris Impact Facility performs tests on storm shelters and their various components to see if they meet established Federal Emergency Management Agency (FEMA) and International Code Council guidelines. The Debris Impact Facility houses a high-powered air cannon that shoots wooden two-by-fours at shelter walls and doors to simulate flying debris. The cannon is capable of producing simulated wind speeds of more than 250 mph and provides valuable impact resistance data. Such data is used to develop standards for safe above-ground and below-ground shelters and continues to be in demand for testing new shelter materials and constructions.
See also
Tornado intensity and damage
External links
References
Texas Tech University
Wind energy organizations
Tornado
Structural engineering
Organizations based in Lubbock, Texas | National Wind Institute | Engineering | 972 |
72,261,250 | https://en.wikipedia.org/wiki/40%20Leonis%20Minoris | 40 Leonis Minoris (40 LMi) is a white hued star located in the northern constellation Leo Minor. It is rarely called 14 H. Leonis Minoris, which is the designation given by Polis astronomer Johann Hevelius.
It has an apparent magnitude of 5.51, making it faintly visible to the naked eye. The object is located relatively close, at a distance of 154 light years based on Gaia DR3 parallax measurements, but is receding, with a somewhat constrained heliocentric radial velocity. At 40 LMi's current distance, its brightness is diminished by only 0.02 magnitudes due to interstellar dust.
40 LMi is a chemically peculiar A-type main-sequence star with a stellar classification of A4 Vn, indicating that it is an A4 dwarf with nebulous absorption lines due to rapid rotation. It has 1.69 times the mass of the Sun and 1.54 times its girth, and it radiates 14.3 times the luminosity of the Sun from its photosphere. The star is estimated to be 207 million years old, having completed 54.6% of its main-sequence lifetime. 40 LMi is slightly metal deficient and spins rapidly.
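As a rough cross-check on the figures above, the distance modulus ties the quoted apparent magnitude and distance to an absolute magnitude and a crude visual-band luminosity. The sketch below ignores bolometric corrections, which is why it undershoots the 14.3 solar luminosities quoted above; the solar values used are standard constants, not figures from this article:

```python
import math

LY_PER_PARSEC = 3.2616   # light years per parsec
M_SUN_VISUAL = 4.83      # absolute visual magnitude of the Sun

m_apparent = 5.51        # apparent magnitude quoted above
distance_ly = 154.0      # distance quoted above

distance_pc = distance_ly / LY_PER_PARSEC
# Distance modulus: M = m - 5 * log10(d / 10 pc)
M_absolute = m_apparent - 5 * math.log10(distance_pc / 10)
# Magnitude difference to luminosity ratio (visual band only)
luminosity_ratio = 10 ** ((M_SUN_VISUAL - M_absolute) / 2.5)

print(f"distance: {distance_pc:.1f} pc")            # ~47.2 pc
print(f"absolute magnitude: {M_absolute:+.2f}")     # ~+2.14
print(f"L/Lsun (visual): {luminosity_ratio:.1f}")   # ~12, vs. 14.3 bolometric
```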
This star was included in a 2005 survey of proper motions measured by the Hipparcos satellite. Its proper motion was found to vary, indicating that an unseen companion may be the cause; this led Peter P. Eggleton and Andrei Tokovinin to classify it as an astrometric binary. There are also three optical companions located near 40 LMi.
References
A-type main-sequence stars
Am stars
Astrometric binaries
Leo Minor
Leonis Minoris, 40
BD+27 01927
092769
052422
4189 | 40 Leonis Minoris | Astronomy | 381 |
44,957,547 | https://en.wikipedia.org/wiki/Reuptake%20modulator | A reuptake modulator, or transporter modulator, is a type of drug which modulates the reuptake of one or more neurotransmitters via their respective neurotransmitter transporters. Examples of reuptake modulators include reuptake inhibitors (transporter blockers) and reuptake enhancers.
See also
Releasing agent
Release modulator
Transporter substrate
Channel modulator
Enzyme modulator
Receptor modulator
Drugs by mechanism of action
Psychopharmacology | Reuptake modulator | Chemistry | 103 |
55,993,642 | https://en.wikipedia.org/wiki/Sinosabellidites | Sinosabellidites huainanensis part of the small shelly fauna is a species of Early Neoproterozoic metazoans.
See also
Huainan biota
References
Ediacaran life
Neoproterozoic
Prehistoric marine animals | Sinosabellidites | Biology | 51 |
12,917,766 | https://en.wikipedia.org/wiki/Thermodiscus | Thermodiscus is a genus of archaea in the family Desulfurococcaceae. The only species is Thermodiscus maritimus.
References
Further reading
Scientific journals
Scientific books
External links
Monotypic archaea genera
Thermoproteota | Thermodiscus | Biology | 56 |
23,227,915 | https://en.wikipedia.org/wiki/IEEE%20Medal%20for%20Engineering%20Excellence | The IEEE Medal for Engineering Excellence was an award presented by the IEEE to recognize exceptional achievements in application engineering in the technical disciplines of the IEEE, for the benefit of the public and the engineering profession. The medal was awarded to an individual or a group of up to three people. It was established by the IEEE Board of Directors in 1986 and was last awarded in 2004.
Recipients of this medal received a gold medal, bronze replica, certificate and honorarium.
This award was discontinued in November 2009.
Recipients
2004: Thomas E. Neal
2004: Richard L. Doughty
2004: H. Landis Floyd
2003: Ralph S. Gens
2002: No Award
2001: L. Bruce McClung
2000: Cyril G. Veinott
1999: Kiyoji Morii
1998: C. James Erickson
1997: John G. Anderson
1996: John R. Dunki-Jacobs
1995: Masasuke Morita
1994: Heiner Sussner
1993: Robert L. Hartman
1993: Richard W. Dixon
1993: Bernard C. DeLoach, Jr.
1992: Charles Elachi
1991: Alexander Feiner
1990: John Alvin (Jack) Pierce, the "Father of Omega"
1989: Walter A. Elmore
1988: Karl E. Martersteck, Jr.
References
Engineering Excellence | IEEE Medal for Engineering Excellence | Technology | 260 |
48,286,881 | https://en.wikipedia.org/wiki/Lanthanum%20hydroxide | Lanthanum hydroxide is , a hydroxide of the rare-earth element lanthanum.
Synthesis
Lanthanum hydroxide can be obtained by adding an alkali such as ammonia to aqueous solutions of lanthanum salts such as lanthanum nitrate. This produces a gel-like precipitate that can then be dried in air.
Alternatively, it can be produced by a hydration reaction (the addition of water) of lanthanum oxide.
Characteristics
Lanthanum hydroxide does not react much with alkaline substances; however, it is slightly soluble in acidic solution. At temperatures above 330 °C it decomposes into lanthanum oxide hydroxide (LaOOH), which upon further heating decomposes into lanthanum oxide (La2O3):
La(OH)3 → LaOOH + H2O
2 LaOOH → La2O3 + H2O
Lanthanum hydroxide crystallizes in the hexagonal crystal system. Each lanthanum ion in the crystal structure is surrounded by nine hydroxide ions in a tricapped trigonal prism.
References
External links
External MSDS 1
External MSDS 2
Lanthanum Oxide MSDS
Lanthanum compounds
Inorganic compounds
Hydroxides | Lanthanum hydroxide | Chemistry | 227 |
16,785,037 | https://en.wikipedia.org/wiki/History%20of%20laptops | The history of laptops describes the efforts, begun in the 1970s, to build small, portable personal computers that combine the components, inputs, outputs and capabilities of a desktop computer in a small chassis.
Portable precursors
Portal R2E CCMC
The portable microcomputer "Portal", of the French company R2E Micral CCMC, officially appeared in September 1980 at the Sicob show in Paris. The Portal was a portable microcomputer designed and marketed by the studies and developments department of the French firm R2E Micral in 1980 at the request of the company CCMC, which specialized in payroll and accounting. It was based on an 8-bit Intel 8085 processor clocked at 2 MHz. It was equipped with 64 KB of central RAM, a keyboard with 58 alphanumeric keys and 11 numeric keys (in separate blocks), a 32-character screen, a floppy disk drive (capacity: 140,000 characters), a thermal printer (speed: 28 characters per second), an asynchronous channel, a synchronous channel, and a 220-volt power supply. Designed for an operating temperature of 15–35 °C, its dimensions were 45 × 45 × 15 cm. It ran the Prologue operating system and provided total mobility.
Osborne 1
The Osborne 1 is considered by most historians to be the first true mobile computer. Adam Osborne founded Osborne Computer and produced the Osborne 1 in 1981. The Osborne 1 had a five-inch screen, a modem port, two 5.25-inch floppy drives, and a large collection of bundled software applications. An aftermarket battery pack was available. The company failed and did not last for very long. Although it was large and heavy compared to today's laptops, with a tiny 5" CRT monitor, it had a near-revolutionary impact on business, as professionals were able to take their computer and data with them for the first time. This and other "luggables" were inspired by what was probably the first portable computer, the Xerox NoteTaker. The Osborne was about the size of a portable sewing machine and could be carried on commercial aircraft. The Osborne 1 weighed roughly 11 kg (24.5 lb) and was priced at US$1,795.
Compaq Portable
The Compaq Portable, created in 1982, was the first PC-compatible portable computer. The first shipment was in March 1983, priced at US$2,995. The Compaq Portable folded up into a luggable case the size of a portable sewing machine, similar in size to the Osborne 1. The third model of this line, the Compaq Portable II, featured high-resolution graphics on its tube display. It was the first portable computer ready to be used on the shop floor and for CAD and diagram display, and it established Compaq as a major brand on the market.
Epson HX-20
The first significant development towards laptop computing was announced in 1981 and sold from July 1982: the 8/16-bit Epson HX-20. It featured a full-travel 68-key keyboard, rechargeable nickel-cadmium batteries, a small (120×32-pixel) dot-matrix LCD with 4 lines of text at 20 characters per line, a 24-column dot-matrix printer, a Microsoft BASIC interpreter, and 16 KB of RAM (expandable to 32 KB). The HX-20's very limited screen and tiny internal memory made serious word-processing and spreadsheet applications impractical, and the device was described as "primitive" by some. In terms of mass storage, the HX-20 could be fitted with a microcassette drive, powered and operated by the main unit; external floppy drives and even an adapter for CRT output were also available. The HX-20 was the first laptop to be called a notebook.
Grid Compass
The first clamshell laptop, the Grid Compass, was made in 1982. Enclosed in a magnesium case, it introduced the now familiar design in which the flat display folds shut against the keyboard. The computer was equipped with a 320×200-pixel electroluminescent display and 384 kilobytes of bubble memory. It was not IBM-compatible, and its high price (US$8,000–10,000) limited it to specialized applications. However, it was used heavily by the U.S. military, and by NASA on the Space Shuttle during the 1980s. The GRiD's manufacturer subsequently earned significant returns on its patent rights as its innovations became commonplace. GRiD Systems Corp. was later bought by the Tandy (now RadioShack) Corporation. The Grid's portability was restricted, as it had no internal battery pack and relied on mains power.
Dulmont Magnum/Kookaburra
The first contender for true laptop computing was the 16-bit Dulmont Magnum, designed by David Irwin and John Blair of Dulmison, Australia, in 1982 and released in Australia in September 1983 by Dulmont. This battery-powered device included an 80-character × 8-line display in a lid that closed against the keyboard. The Dulmont was thus the first computer that could be taken anywhere and offered significant computing potential on the user's lap (though it was relatively heavy). It was based on the MS-DOS operating system, with applications stored in ROM (A:), and also supported removable modules in expansion slots (B: and C:) that could hold custom-programmed EPROMs or standard word-processing and spreadsheet applications. The Magnum could suspend and retain memory in battery-backed CMOS RAM, including a RAM disk (D:). A separate expansion box provided dual 5.25-inch floppy or 10 MB hard disk storage. The product was marketed internationally from 1984 to 1986. Dulmont was eventually taken over by Time Office Computers, which relabeled the brand "Kookaburra" and marketed 16- and 25-line LCD display versions.
Sharp and Gavilan
Two other noteworthy early laptops were the Sharp PC-5000 (similar in many respects to the Dulmont Magnum) and the Gavilan SC, announced in 1983 but first sold in 1984, Gavilan filing bankruptcy the same year. Both ran the 8/16-bit Intel 8088 CPU. The Gavilan was notably the first computer to be marketed as a "laptop". It was equipped with an internal floppy disk drive and a pioneering touchpad-like pointing device, installed on a panel above the keyboard. Like the GRiD Compass, the Gavilan and the Sharp were housed in clamshell cases, but they were partly IBM-compatible, although primarily running their own system software. Both had LCDs, and could connect to optional external printers.
Kyotronic 85 (Tandy Model 100)
The year 1983 also saw the launch of what was probably the biggest-selling early laptop, the 8-bit Kyocera Kyotronic 85. Owing much to the design of the previous Epson HX-20, and although at first a slow seller in Japan, it was quickly licensed by Tandy Corporation, Olivetti, and NEC, who recognised its potential and marketed it respectively as the TRS-80 Model 100 line (or Tandy 100), Olivetti M-10, and NEC PC-8201. The machines ran on standard AA batteries.
The Tandy's built-in programs, including a BASIC interpreter, a text editor, and a terminal program, were supplied by Microsoft, and were written in part by Bill Gates. The computer was not a clamshell, but provided a tiltable 8-line × 40-character LCD screen above a full-travel keyboard. With its internal modem, it was a highly portable communications terminal. Due to its portability, good battery life (and ease of battery replacement), reliability (it had no moving parts), and low price (as little as US$300), the model was highly regarded, becoming a favorite among journalists. It was light and compact, with dimensions of 30 × 21.5 × 4.5 centimeters (about 11.8 × 8.5 × 1.8 in). Initial specifications included 8 kilobytes of RAM (expandable to 24 KB) and a 3 MHz processor. The machine was in fact about the size of a paper notebook, but the term had yet to come into use and it was generally described as a "portable" computer.
Data General/One
Data General's introduction of the Data General/One (DG-1) in 1984 is one of the few cases of a minicomputer company introducing a breakthrough PC product. Considered genuinely portable rather than "luggable", it was a nine-pound battery-powered MS-DOS machine equipped with dual 3.5-inch diskette drives, a 79-key full-stroke keyboard, 128 KB to 512 KB of RAM, and a monochrome LCD screen capable of either the full-sized standard 80×25 characters or full CGA graphics (640×200).
Bondwell 2
Although it was not released until 1985, well after the decline of CP/M as a major operating system, the Bondwell 2 is one of only a handful of CP/M laptops. It used an 8-bit Z80 CPU running at 4 MHz, had 64 KB of RAM, and had a built-in floppy disk drive, which was unusual for CP/M laptops. The flip-up LCD's resolution was 640×200 pixels, and the machine was capable of displaying bitmapped graphics. The Bondwell 2 also included MicroPro's complete line of CP/M software, including WordStar. Its price was listed at $995.
Kaypro 2000
Possibly the first commercial IBM-compatible laptop was the 8/16-bit Kaypro 2000, introduced in 1985. With its brushed aluminum clamshell case, it was remarkably similar in design to modern laptops. It featured a 25 line by 80 character LCD, a detachable keyboard, and a pop-up 90 mm (3.5-inch) floppy drive.
Toshiba T1100, T1000, and T1200
Toshiba launched the 8/16-bit Toshiba T1100 in 1985, and has subsequently described it as "the world's first mass-market laptop computer". It did not have a hard drive, and ran entirely from floppy disks. The CPU was a 4.77 MHz Intel 80C88, a lower-power-consumption variation of the popular Intel 8088, and the display was a monochrome, 640x200 LCD. It was followed in 1987 by the T1000 and T1200. Although limited floppy-based DOS machines, with the operating system stored in ROM on the T1000, the Toshiba models were small and light enough to be carried in a backpack, and could be run from Ni-Cd batteries. They also introduced the now-standard "resume" feature to DOS-based machines: the computer could be paused between sessions without having to be restarted each time.
IBM PC Convertible
Also among the first commercial IBM-compatible laptops was the 8/16-bit IBM PC Convertible, introduced in 1986. It had a CGA-compatible LCD and two 720 KB 3.5-inch floppy drives, and weighed about 5.8 kg (13 lb).
Epson L3s
The Epson L3s was an early portable computer that ran MS-DOS and featured a parallel port.
Zenith SupersPort
The first laptops successful on a large scale came in large part due to an RFP issued by the U.S. Air Force in 1987. This contract would eventually lead to the purchase of over 200,000 laptops. Competition to supply this contract was fierce, and the major PC companies of the time, including IBM, Toshiba, Compaq, NEC, and Zenith Data Systems (ZDS), rushed to develop laptops in an attempt to win the deal. ZDS, which had earlier won a landmark deal with the IRS for its Z-171, was awarded the contract for its SupersPort series. The SupersPort series was originally launched with an Intel 8086 processor, dual floppy disk drives, a backlit blue-and-white STN LCD screen, and a NiCd battery pack. Later models featured a 16-bit Intel 80286 processor and a 20 MB hard disk drive. On the strength of this deal, ZDS became the world's largest laptop supplier in 1987 and 1988. ZDS partnered with Tottori Sanyo in the design and manufacture of these laptops. This relationship is notable because it was the first deal between a major brand and an Asian original equipment manufacturer.
Hewlett-Packard Vectra Portable CS
In 1987, HP released a portable version of its 16-bit Vectra CS computer. It had the classic laptop configuration (the keyboard and monitor close up clamshell-style for carrying); however, it was very heavy and fairly large. It had a full-size keyboard (with a separate numeric keypad) and a large amber LCD screen. While it was offered with dual 3.5-inch floppy disk drives, the most common configuration was a 20 MB hard drive and a single floppy drive. It was one of the first machines with a 1.44 MB high-density 3.5-inch disk drive.
Cambridge Z88
Another notable computer was the 8-bit Cambridge Z88, designed by Clive Sinclair and introduced in 1988. About the size of an A4 sheet of paper, it ran on standard batteries and contained basic spreadsheet, word-processing, and communications programs. It anticipated the future miniaturization of the portable computer and, as a ROM-based machine with a small display, can – like the TRS-80 Model 100 – be seen as a forerunner of the personal digital assistant.
Compaq SLT/286
By the end of the 1980s, laptop computers were becoming popular among business people. The 16-bit COMPAQ SLT/286 debuted in October 1988 and was the first battery-powered laptop to support an internal hard disk drive and a VGA-compatible LCD screen. It weighed about 6.4 kg (14 lb).
NEC UltraLite
The NEC UltraLite, released in October 1988, was the first "notebook" computer; its low weight was achieved by omitting floppy and hard drives, and it was powered by the 16-bit NEC V30 CPU. The very restrictive 2-megabyte RAM drive limited the product's utility. Although portable computers with clamshell LCD screens already existed at the time of its release, the UltraLite was the first computer in a notebook form factor. It was significantly smaller than all earlier portable computers and could be carried like a notebook, with its clamshell LCD folding over the body like a book cover.
Apple and IBM
Apple Macintosh Portable
Apple's first laptop product was the 16-bit, lead-acid battery-powered Macintosh Portable, released in September 1989. The Portable pioneered the inclusion of a pointing device (a trackball) in the laptop/portable sphere.
IBM PS/2 note
The IBM PS/2 note was the first IBM laptop with a clamshell design, and the 1992 CL57sx model was IBM's first commercial laptop with a color screen. The line's introduced options and features include the now-common peripherals-oriented PS/2 port as a mobile-device option, an early laptop BIOS, and a predecessor of the laptop docking station (the IBM Communications Cartridge).
Apple Powerbook
The Apple PowerBook series, introduced in October 1991, pioneered changes that are now de facto standards on laptops, including room for a palm rest.
Later PowerBooks featured optional color displays (PowerBook 165c, 1993), the first true touchpad (PowerBook 500 series, 1994), the first 16-bit stereo audio, and the first built-in Ethernet network adapter (PowerBook 500, 1994).
IBM ThinkPad
In 1992, IBM released its ThinkPad 300, 700, and 700C, featuring a clamshell design similar to the PS/2 line. The 700 and 700C (the "C" version came with a color display) came with the distinctive red TrackPoint pointing device that is still used to this day. The ThinkPad set a new standard for business-class laptops with its modular design, greater durability, and more productivity options, including video capture and cameras (ThinkPad Power Series 850, 1995), removable drive bays, secondary batteries, and backlit keyboards.
APM and SMI/SMM
Windows 3.1 was the first version of Windows to support APM, which was usually implemented with SMI in the BIOS (introduced with the Intel 80386SL). Windows 95 introduced standardized support for docking via the PnP BIOS (among other things). Prior to this point each brand used custom BIOS, drivers and in some cases, ASICs, to optimize the battery life of its machines. This move by Microsoft was controversial in the eyes of notebook designers because it greatly reduced their ability to innovate; however, it did serve its role in simplifying and stabilizing certain aspects of notebook design.
Intel Pentium processor
Windows 95 ushered in the importance of the CD-ROM drive in mobile computing, and helped the shift to the Intel Pentium processor as the base platform for notebooks. The Gateway Solo was the first notebook introduced with a Pentium processor and a CD-ROM. Also featuring a removable hard disk drive and floppy drive, the Solo was the first three-spindle (optical, floppy, and hard disk drive) notebook computer, and was extremely successful within the consumer segment of the market. In roughly the same time period the Dell Latitude, Toshiba Satellite, and IBM ThinkPad were reaching great success with Pentium-based two-spindle (hard disk and floppy disk drive) systems directed toward the corporate market.
Improved technology
Early laptop displays were so primitive that PC Magazine in 1986 published an article discussing them with the headline "Is It On Yet?". It said of the accompanying montage of nine portable computers, "Pictured at the right are two screens and seven elongated smudges". The article stated that "LCD screens still look to many observers like Etch-a-Sketch toys, or gray chalk on a dirty blackboard", and predicted that until displays improved, "laptops will continue to be a niche rather than a mainstream direction". As technology improved during the 1990s, the usefulness and popularity of laptops increased; correspondingly, prices went down. Several developments specific to laptops were quickly implemented, improving usability and performance. Among them were:
Improved battery technology. The heavy lead-acid batteries were replaced with lighter and more efficient technologies: first nickel-cadmium (NiCd), then nickel-metal hydride (NiMH), and then lithium-ion and lithium-polymer.
Power-saving processors. While laptops in 1989 were limited to the 80286 processor (often Harris CMOS) because of the energy demands of the more powerful 80386 on the original CHMOS III process, the introduction of the Intel 386SL processor, designed for the specific power needs of laptops, marked the point at which laptop needs were included in CPU design. The 386SL integrated a 386SX core with a memory controller, and this was paired with an I/O chip to create the SL chipset. It was more integrated than any previous solution, although its cost was higher. It was heavily adopted by the major notebook brands of the time. Intel followed this with the 486SL chipset, which used the same architecture. However, Intel had to abandon this design approach as it introduced its Pentium series. Early versions of the mobile Pentium required TAB mounting (also used in LCD manufacturing), and this initially limited the number of companies capable of supplying notebooks. However, Intel did eventually migrate to more standard chip packaging. One limitation of notebooks has always been the difficulty of upgrading the processor, which is a common attribute of desktops. Intel did try to solve this problem with the introduction of the MMC for mobile computing. The MMC was a standard module upon which the CPU and external cache memory could sit. It gave the notebook buyer the potential to upgrade the CPU at a later date, eased the manufacturing process somewhat, and was also used in some cases to skirt U.S. import duties, as the CPU could be added to the chassis after it arrived in the U.S. Intel stuck with the MMC for a few generations but ultimately could not maintain the appropriate speed and data integrity to the memory subsystem through the MMC connector. A more specialized power-saving CPU variant for laptops is the PowerPC 603 family. Derived from IBM's 601 series and aimed at laptops (while the 604 branch was for desktops), it was used on many low-end Apple desktops before it was widely used in laptops, starting with the PowerBook 5300, 2400, and 500 (via upgrades). What started out as a laptop processor was eventually used across all platforms in its follow-up, the PowerPC 750, also known as the G3.
Improved liquid-crystal displays, in particular active-matrix TFT (thin-film transistor) LCD technology. Early laptop screens were black-and-white, blue-and-white, or grayscale STN (super-twist nematic) passive-matrix LCDs prone to heavy shadows, ghosting, and blurry movement (some portable computer screens were sharper monochrome plasma displays, but these drew too much current to be powered by batteries). Color STN screens were used for some time, although their viewing quality was poor. By about 1991, two new color LCD technologies hit the mainstream market in a big way: dual-scan STN (DSTN) and TFT. DSTN screens solved many of the viewing problems of STN at a very affordable price, and TFT screens offered excellent viewing quality, although initially at a steep price. DSTN continued to offer a significant cost advantage over TFT until the mid-90s, when the cost delta dropped to the point that DSTN was no longer used in notebooks. Improvements in production technology meant displays became larger and sharper, had higher native resolutions and faster response times, and could display color with great accuracy, making them an acceptable substitute for a traditional CRT monitor.
Improved storage technology. Early laptops and portables had only floppy disk drives. As thin, high-capacity hard disk drives with higher reliability, better shock resistance, and lower power consumption became available, users could store their work on laptop computers and take it with them. The 3.5" HDD was created initially as a response to the needs of notebook designers who needed smaller, lower-power-consumption products. With continuing pressure to shrink notebook size even further, the 2.5" HDD was introduced. One Laptop per Child (OLPC) and other newer laptops use flash memory (a non-volatile, non-mechanical memory device) instead of a mechanical hard disk.
Improved connectivity. Internal modems and standard serial, parallel, and PS/2 ports on IBM PC-compatible laptops made it easier to work away from home; the addition of network adapters and, from 1997, USB, as well as, from 1999, Wi-Fi, made laptops as easy to use with peripherals as a desktop computer. Many newer laptops are also available with built-in 3G broadband wireless modems.
Other peripherals may include:
an integrated video camera for video communication
a fingerprint sensor for restricting access to sensitive data or to the computer itself.
Netbooks
In June 2007, Asus announced the Eee PC 701, to be released in October: a small, lightweight laptop powered by an x86 Celeron-M ULV 353, with a 4 GB SDHC disk and a 7-inch screen. Despite previous attempts to launch small lightweight computers such as the ultra-portable PC, the Eee was the first success story, largely due to its low cost, small size, low weight, and versatility. The term "netbook" was later coined by Intel. Asus then extended the Eee line with models featuring such things as a 9-inch screen, and other brands, including Acer, MSI, and Dell, followed suit with similar devices, often built on the fledgling low-power Intel Atom processor architecture.
Smartbooks
In 2009, Qualcomm introduced the term "smartbook" to describe a hybrid device between a smartphone and a laptop.
See also
History of mobile phones
History of personal computers
History of software
List of pioneers in computer science
Timeline of portable computers
References
External links
Further reading
Laptops
Laptops
Laptops | History of laptops | Technology | 5,038 |
221,653 | https://en.wikipedia.org/wiki/Bleeding%20time | Bleeding time is a medical test done to assess the function of a person's platelets. It involves making a patient bleed, then timing how long it takes for them to stop bleeding using a stopwatch or other suitable devices.
The term template bleeding time is used when the test is performed to standardized parameters.
A newer alternative to the traditional bleeding time test is the platelet function screen performed on the PFA-100 analyzer.
Usage
The template bleeding time test is a method used when other more reliable and less invasive tests for determining coagulation are not available. Historically, it was used whenever physicians needed information about platelet activation.
Process
The test involves cutting the underside of the subject's forearm, in an area where there is no hair or visible veins. The cut is of a standardized width and depth, and is done quickly by a template device.
IVY method
The IVY method is the traditional format for this test. While both the IVY and Duke's method require the use of a sphygmomanometer, or blood pressure cuff, the IVY method is more invasive than the Duke method, utilizing an incision on the ventral side of the forearm, whereas the Duke method involves puncture with a lancet or special needle. In the IVY method, the blood pressure cuff is placed on the upper arm and inflated to 40 mmHg. A lancet or scalpel blade is used to make a shallow incision that is 1 millimeter deep on the underside of the forearm.
A standard-sized incision is made around 10 mm long and 1 mm deep. The time from when the incision is made until all bleeding has stopped is measured and is called the bleeding time. Every 30 seconds, filter paper or a paper towel is used to draw off the blood.
The test is finished when bleeding has stopped.
A prolonged bleeding time may result from a decreased number of thrombocytes (platelets) or from impaired blood vessels. However, the depth of the puncture or incision may be a source of error.
Normal values fall between 3 and 10 minutes, depending on the method used.
A disadvantage of Ivy's method is that the puncture wound may close before bleeding has stopped.
Duke's method
With the Duke's method, the patient is pricked with a special needle or lancet, preferably on the earlobe or fingertip, after having been swabbed with alcohol. The prick is about 3–4 mm deep. The patient then wipes the blood every 30 seconds with a filter paper. The test ceases when bleeding ceases. The usual time is about 2–5 minutes.
This method is not recommended and cannot be standardized because it can cause a large local hematoma.
Interpretation
Bleeding time may be affected by platelet function, certain vascular disorders and von Willebrand Disease—not by other coagulation factors such as haemophilia. Diseases that may cause prolonged bleeding time include thrombocytopenia, disseminated intravascular coagulation (DIC), Bernard-Soulier disease, and Glanzmann's thrombasthenia.
Aspirin and other cyclooxygenase inhibitors can significantly prolong bleeding time. While warfarin and heparin have their major effects on coagulation factors, an increased bleeding time is sometimes seen with use of these medications as well.
People with von Willebrand disease usually experience increased bleeding time, as von Willebrand factor is a platelet adhesion protein, but this is not considered an effective diagnostic test for this condition.
It is also prolonged in hypofibrinogenemia.
In popular culture
In the British comedy film Doctor in the House (1954), Sir Lancelot Spratt, the intimidating chief of surgery played by James Robertson Justice is asking instructional questions of his medical students. He asks a young student, who has been distracted by a pretty nurse, what "the bleeding time" is. The student looks at his watch and answers "Ten past ten, sir."
References
External links
MedlinePlus Medical Encyclopedia
Blood tests | Bleeding time | Chemistry | 827 |
7,484,714 | https://en.wikipedia.org/wiki/Forest%20management | Forest management is a branch of forestry concerned with overall administrative, legal, economic, and social aspects, as well as scientific and technical aspects, such as silviculture, forest protection, and forest regulation. This includes management for timber, aesthetics, recreation, urban values, water, wildlife, inland and nearshore fisheries, wood products, plant genetic resources, and other forest resource values. Management objectives can be for conservation, utilisation, or a mixture of the two. Techniques include timber extraction, planting and replanting of different species, building and maintenance of roads and pathways through forests, and preventing fire.
Many tools like remote sensing, GIS and photogrammetry modelling have been developed to improve forest inventory and management planning. Scientific research plays a crucial role in helping forest management. For example, climate modeling, biodiversity research, carbon sequestration research, GIS applications, and long-term monitoring help assess and improve forest management, ensuring its effectiveness and success.
Role of forests
The forest is a natural system that can supply different products and services. Forests supply water, mitigate climate change, provide habitats for wildlife including many pollinators which are essential for sustainable food production, provide timber and fuelwood, serve as a source of non-wood forest products including food and medicine, and contribute to rural livelihoods.
Forests include market and non-market products. Marketable products include goods that have a market price. Timber is the main one, with prices that range from a few hundred dollars per thousand board feet (MBF) to several thousand dollars for a veneer log. Others include grazing and fodder, specialty crops such as mushrooms or berries, usage fees for recreation or hunting, and biomass for bioenergy production. Forests also provide some non-market values which have no current market price. Examples of non-market goods would be improving water quality, air quality, aesthetics, and carbon sequestration.
The working of this system is influenced by the natural environment: climate, topography, soil, etc., and also by human activity. The actions of humans in forests constitute forest management. In developed societies, this management tends to be elaborated and planned in order to achieve the objectives that are considered desirable.
Aims
Some forests have been and are managed to obtain traditional forest products such as firewood, fiber for paper, and timber, with little thought for other products and services. Nevertheless, as environmental awareness has grown, management of forests for multiple use is becoming more common.
Forests provide a variety of ecosystem services: cleaning the air, accumulating carbon, filtering water, and reducing flooding and erosion. Forests are the most biodiverse land-based ecosystem, and provide habitat for a vast array of animals, birds, plants and other life. They can provide food and material and also opportunities for recreation and education. Research has found that forest plantations "may result in reduced diversity and abundance of pollinators compared with natural forests that have greater structural and plant species diversity."
Monitoring and planning
Foresters develop and implement forest management plans relying on mapped resources and inventories showing an area's topographical features as well as its distribution of trees (by species) and other plant cover. Plans also include landowner objectives, roads, culverts, proximity to human habitation, water features and hydrological conditions, and soil information. Forest management plans typically include recommended silvicultural treatments and a timetable for their implementation. The application of digital maps in geographic information systems (GIS), which extract and integrate information about forest terrain, soil type, tree cover, and so on (using, for example, laser scanning), enhances forest management plans in modern systems.
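As a concrete illustration of the kind of inventory arithmetic such plans rest on, the sketch below computes per-hectare basal area, a standard stocking measure, from tree diameters at breast height (DBH). The plot data and 0.05 ha plot size are hypothetical values for the example, not from any real inventory:

```python
import math

def basal_area_per_ha(dbh_cm: list[float], plot_area_ha: float) -> float:
    """Scale the summed tree cross-sections in a sample plot to one hectare.

    Basal area of one tree (m^2) = pi * (DBH in metres / 2) ** 2
    """
    total_m2 = sum(math.pi * (d / 100.0 / 2.0) ** 2 for d in dbh_cm)
    return total_m2 / plot_area_ha

# Hypothetical 0.05 ha sample plot with DBH measurements in centimetres.
plot_dbh = [32.0, 28.5, 41.2, 19.8, 25.0, 36.7]
print(f"basal area: {basal_area_per_ha(plot_dbh, 0.05):.1f} m^2/ha")  # ~9.3
```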
Forest management plans include recommendations to achieve the landowner's objectives and desired future conditions for the property subject to ecological, financial, logistical (e.g. access to resources), and other constraints. On some properties, plans focus on producing quality wood products for processing or sale. Hence, tree species, quantity, and form, all central to the value of harvested products quality and quantity, tend to be important components of silvicultural plans.
Good management plans include consideration of future conditions of the stand after any recommended harvests treatments, including future treatments (particularly in intermediate stand treatments), and plans for natural or artificial regeneration after final harvests.
The objectives of landowners and leaseholders influence plans for harvest and subsequent site treatment. In Britain, plans featuring "good forestry practice" must always consider the needs of other stakeholders such as nearby communities or rural residents living within or adjacent to woodland areas. Foresters consider tree felling and environmental legislation when developing plans. Plans instruct the sustainable harvesting and replacement of trees. They indicate whether road building or other forest engineering operations are required.
Agriculture and forest leaders are also trying to understand how climate change legislation will affect what they do. The information gathered will provide the data that determine the role of agriculture and forestry in a new climate-change regulatory system.
Forest inventory
Wildlife considerations
The abundance and diversity of birds, mammals, amphibians and other wildlife are affected by strategies and types of forest management. Forests are important because they provide these species with food, space and water. Forest management is also important as it helps in conservation and utilization of the forest resources.
Approximately 50 million hectares (about 24%) of European forest land is protected for biodiversity and landscape protection. Forests allocated for soil, water, and other ecosystem services encompass around 72 million hectares (32% of European forest area). Over 90% of the world's forests regenerate naturally, and more than half are covered by forest management plans or equivalents.
Management intensity
Forest management varies in intensity from a leave-alone, natural situation to a highly intensive regime with silvicultural interventions. Management intensity is generally increased to achieve either economic criteria (increased timber yields, non-timber forest products, ecosystem services) or ecological criteria (species recovery, fostering of rare species, carbon sequestration).
Most of the forests in Europe have management plans; on the other hand, management plans exist for less than 25 percent of forests in Africa and less than 20 percent in South America. The area of forest under management plans is increasing in all regions – globally, it has increased by 233 million ha since 2000, reaching 2.05 billion ha in 2020.
Monitoring
Long-term monitoring studies are conducted to track forest dynamics over extended periods. These studies involve monitoring factors such as tree growth, mortality rates, and species composition. By observing forest changes over time, scientists can assess the health of forests and their responses to environmental shifts. Long-term monitoring is invaluable for informing sustainable forest management practices.
Scientific research employs remote sensing technologies and geographic information systems (GIS) to monitor changes in forest cover, deforestation rates, and forest health over time. These tools provide valuable data for forest assessments and support evidence-based decision making in forest management and conservation. By remotely monitoring forest changes, scientists can respond more effectively to threats and challenges facing forests.
Researchers conduct biodiversity assessments to gain insights into the diversity and distribution of plant and animal species in various forest ecosystems. These studies are essential for identifying areas of high conservation value and understanding the ecological importance of different habitats. By studying biodiversity patterns, scientists can recommend targeted approaches to forest management that protect and promote the richness of forest life.
Effects of climate change on forests
Research explores the specific impacts of climate change on forest ecosystems, including extreme heat and drought events. Understanding these effects is vital for developing adaptive strategies to mitigate climate change impacts on forests. By recognizing the vulnerabilities of forests to changing climatic conditions, scientists can implement conservation methods that enhance their resilience.
Scientific research plays a crucial role in forest management by utilizing climate modeling to project future climate scenarios. These models help scientists understand potential changes in temperature, precipitation patterns, and extreme weather events, enabling them to assess the impact of these changes on forest ecosystems. By predicting climate trends, researchers can develop more effective strategies for forest management and conservation.
Methods for creating or recreating forests
The term forestation is sometimes used as an umbrella term to include afforestation and reforestation. Both of those are processes for establishing and nurturing forests on lands that either previously had forest cover or were subjected to deforestation or degradation.
Tree breeding
Tree planting
Reforestation
Forest restoration
Afforestation
Types
Plantation forestry
Silviculture
Bamboo forestry
Hardwood timber production
Hardwood timber production is the process of managing stands of deciduous trees to maximize woody output. The production process is not linear, because other factors must be considered, including marketable and non-marketable goods, financial benefits, management practices, and the environmental implications of those management practices.
The more biodiverse the hardwood-forest ecosystem, the more challenges and opportunities its managers face. Managers aim for sustainable forest management to keep their cash crop renewing itself, using silvicultural practices that include growing, selling, controlling insects and most diseases, fertilizing, applying herbicide treatments, and thinning.
But management can also harm the ecosystem; for example, machinery used in a timber harvest can compact the soil, stress the root system, reduce tree growth, and lengthen the time needed for a stand to mature to harvestability. Machinery can also damage the understory, disturbing wildlife habitat and preventing regeneration.
Energy forestry
Forest farming
Sustainable forest management
Sustainable forest management (SFM) is the management of forests according to the principles of sustainable development. Sustainable forest management must keep a balance between the three main pillars: ecological, economic and socio-cultural. The goal of sustainable forestry is to allow for a balance to be found between making use of trees while maintaining natural patterns of disturbance and regeneration. The forestry industry mitigates climate change by boosting carbon storage in growing trees and soils and improving the sustainable supply of renewable raw materials via sustainable forest management.
Successfully achieving sustainable forest management will provide integrated benefits to all, ranging from safeguarding local livelihoods to protecting biodiversity and ecosystems provided by forests, reducing rural poverty and mitigating some of the effects of climate change. Forest conservation is essential to stop climate change.
Sustainable forest management also helps with climate change adaptation by increasing forest ecosystems' resistance to future climatic hazards and lowering the danger of additional land degradation by repairing and stabilizing soils and boosting their water-retention capacity. It contributes to the provision of a wide range of vital ecosystem services and biodiversity conservation, such as wildlife habitats, recreational amenity values, and a variety of non-timber forest products. Conservation of biodiversity is the major management aim in around 13% of the world's forests, while preservation of soil and water resources is the primary management goal in more than 30%.
Feeding humanity and conserving and sustainably using ecosystems are complementary and closely interdependent goals. Forests supply water, mitigate climate change and provide habitats for many pollinators, which are essential for sustainable food production. It is estimated that 75 percent of the world's leading food crops, representing 35 percent of global food production, benefit from animal pollination for fruit, vegetable or seed production.
The "Forest Principles" adopted at the Earth Summit (United Nations Conference on Environment and Development) in Rio de Janeiro in 1992 captured the general international understanding of sustainable forest management at that time. A number of sets of criteria and indicators have since been developed to evaluate the achievement of SFM at the global, regional, country and management unit level. These were all attempts to codify and provide for assessment of the degree to which the broader objectives of sustainable forest management are being achieved in practice. In 2007, the United Nations General Assembly adopted the Non-Legally Binding Instrument on All Types of Forests. The instrument was the first of its kind that reflected the strong international commitment to promote implementation of sustainable forest management through a new approach bringing all stakeholders together.
The Sustainable Development Goal 15 is also a global initiative aimed at promoting the implementation of sustainable forest management.
Definition
A definition of SFM was developed by the Ministerial Conference on the Protection of Forests in Europe (FOREST EUROPE) and has since been adopted by the Food and Agriculture Organization (FAO). It defines sustainable forest management as:
The stewardship and use of forests and forest lands in a way, and at a rate, that maintains their biodiversity, productivity, regeneration capacity, vitality and their potential to fulfill, now and in the future, relevant ecological, economic and social functions, at local, national, and global levels, and that does not cause damage to other ecosystems.
In simpler terms, the concept can be described as the attainment of balance: balance between society's increasing demands for forest products and benefits, and the preservation of forest health and diversity. This balance is critical to the survival of forests, and to the prosperity of forest-dependent communities.
For forest managers, sustainably managing a particular forest tract means determining, in a tangible way, how to use it today to ensure similar benefits, health, and productivity in the future. Forest managers must assess and integrate a wide array of sometimes conflicting factors, including commercial and non-commercial values, environmental considerations, community needs, and even global impact, to produce sound forest plans. In most cases, forest managers develop their forest plans in consultation with citizens, businesses, organizations, and other interested parties in and around the forest tract being managed. The tools and visualization techniques for better management practices have also been evolving recently.
The Food and Agriculture Organization of the United Nations, at the request of Member States, developed and launched the Sustainable Forest Management Toolbox in 2014, an online collection of tools, best practices and examples of their application to support countries implementing sustainable forest management.
Because forests and societies are in constant flux, the desired outcome of sustainable forest management is not a fixed one. What constitutes a sustainably managed forest will change over time as values held by the public change.
Criteria and indicators
Criteria and indicators are tools which can be used to conceptualise, evaluate and implement sustainable forest management. Criteria define and characterize the essential elements, as well as a set of conditions or processes, by which sustainable forest management may be assessed. Periodically measured indicators reveal the direction of change with respect to each criterion.
Criteria and indicators of sustainable forest management are widely used, and many countries produce national reports that assess their progress toward sustainable forest management. There are nine international and regional criteria and indicators initiatives, which collectively involve more than 150 countries. Three of the more advanced initiatives are those of the Working Group on Criteria and Indicators for the Conservation and Sustainable Management of Temperate and Boreal Forests (also called the Montréal Process), Forest Europe, and the International Tropical Timber Organization. Countries that are members of the same initiative usually agree to produce reports at the same time and using the same indicators. Within countries, at the management unit level, efforts have also been directed at developing local-level criteria and indicators of sustainable forest management. The Center for International Forestry Research, the International Model Forest Network and researchers at the University of British Columbia have developed a number of tools and techniques to help forest-dependent communities develop their own local-level criteria and indicators. Criteria and indicators also form the basis of third-party forest certification programs such as the Canadian Standards Association's Sustainable Forest Management Standards and the Sustainable Forestry Initiative.
There appears to be growing international consensus on the key elements of sustainable forest management. Seven common thematic areas of sustainable forest management have emerged based on the criteria of the nine ongoing regional and international criteria and indicators initiatives (a minimal data sketch of how indicators can be tracked against these criteria follows the list). The seven thematic areas are:
Extent of forest resources
Biological diversity
Forest health and vitality
Productive functions of forest resources
Protective functions of forest resources
Socio-economic functions
Legal, policy and institutional framework.
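To make the reporting idea concrete, here is a minimal sketch of how periodically measured indicators might be tracked against these criteria. The indicator names and values are invented for illustration; real national reports follow the formats defined by their initiative (e.g. the Montréal Process).

```python
# Minimal illustrative sketch: tracking indicator measurements per SFM criterion.
# Criterion names follow the seven thematic areas; the indicators and values
# below are hypothetical examples, not data from any official report.

from statistics import mean

reports = {
    "Extent of forest resources": {
        # indicator -> measurements for successive reporting periods
        "forest area (million ha)": [34.1, 33.9, 34.0],
    },
    "Biological diversity": {
        "threatened forest species (count)": [120, 118, 115],
    },
    # ... remaining five thematic areas omitted for brevity
}

def direction_of_change(series):
    """Compare the latest measurement against the mean of earlier ones."""
    if len(series) < 2:
        return "insufficient data"
    baseline = mean(series[:-1])
    latest = series[-1]
    if latest > baseline:
        return "increasing"
    if latest < baseline:
        return "decreasing"
    return "stable"

for criterion, indicators in reports.items():
    for indicator, series in indicators.items():
        print(f"{criterion} / {indicator}: {direction_of_change(series)}")
```

As the definition above notes, it is the periodically repeated measurement, not any single value, that reveals the direction of change with respect to each criterion.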
This consensus on common thematic areas (or criteria) effectively provides a common and implicit definition of sustainable forest management. The seven thematic areas were acknowledged by the international forest community at the fourth session of the United Nations Forum on Forests and the 16th session of the Committee on Forestry. These thematic areas have since been enshrined in the Non-Legally Binding Instrument on All Types of Forests as a reference framework for sustainable forest management to help achieve the purpose of the instrument.
In 2012, the Montréal Process, Forest Europe, the International Tropical Timber Organization, and the Food and Agriculture Organization of the United Nations, acknowledging the seven thematic areas, endorsed a joint statement of collaboration to improve global forest-related data collection and reporting and to avoid the proliferation of monitoring requirements and associated reporting burdens.
Sustainable forestry operations must also adhere to the International Labour Organization's 18 criteria on human and social rights. Gender equality, health and well-being and community consultation are examples of such rights.
Ecosystem approach
The ecosystem approach has been prominent on the agenda of the Convention on Biological Diversity (CBD) since 1995. The CBD definition of the Ecosystem Approach and a set of principles for its application, known as the Malawi Principles, were developed at an expert meeting in Malawi in 1998. The definition, 12 principles and 5 points of "operational guidance" were adopted by the fifth Conference of Parties (COP5) in 2000. The CBD definition is as follows:
The ecosystem approach is a strategy for the integrated management of land, water and living resources that promotes conservation and sustainable use in an equitable way. Application of the ecosystem approach will help to reach a balance of the three objectives of the Convention. An ecosystem approach is based on the application of appropriate scientific methodologies focused on levels of biological organization, which encompasses the essential structures, processes, functions and interactions among organisms and their environment. It recognizes that humans, with their cultural diversity, are an integral component of many ecosystems.
Sustainable forest management was recognized by parties to the Convention on Biological Diversity in 2004 (Decision VII/11 of COP7) to be a concrete means of applying the Ecosystem Approach to forest ecosystems. The two concepts, sustainable forest management and the ecosystem approach, aim at promoting conservation and management practices which are environmentally, socially and economically sustainable, and which generate and maintain benefits for both present and future generations. In Europe, the MCPFE and the Council for the Pan-European Biological and Landscape Diversity Strategy (PEBLDS) jointly recognized sustainable forest management to be consistent with the Ecosystem Approach in 2006.
Methods
Ecoforestry
Continuous cover forestry
Mycoforestry
Assisted natural regeneration
Alternative harvesting methods
Reduced impact logging (RIL) is a sustainable forestry method, as it decreases forest and canopy damage by approximately 75% compared to conventional logging methods. Additionally, a 120-year regression model found that RIL would yield significantly higher regeneration after 30 years (18.3 m³ ha⁻¹) than conventional logging (14.0 m³ ha⁻¹). It is therefore argued that RIL should be adopted as soon as possible to improve future regeneration. For instance, one study concluded that logging intensity in Brazil would have to be reduced by 40% if the current practice of 6 trees per hectare on a 30-year cutting cycle stays in place, in order to ensure that above-ground biomass can regenerate to its pre-harvest level.
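As a back-of-the-envelope illustration using only the figures quoted above (the variable names are ours, not from the cited studies):

```python
# Back-of-the-envelope comparison using only the figures quoted above.

ril_regrowth = 18.3           # m³/ha projected after 30 years under RIL
conventional_regrowth = 14.0  # m³/ha projected after 30 years, conventional logging

advantage = (ril_regrowth - conventional_regrowth) / conventional_regrowth
print(f"RIL regrowth advantage after 30 years: {advantage:.0%}")  # ~31%

# The cited Brazilian scenario: 6 trees/ha on a 30-year cutting cycle would
# need a 40% reduction in logging intensity to allow full biomass recovery.
current_intensity = 6.0       # trees per hectare per cutting cycle
sustainable_intensity = current_intensity * (1 - 0.40)
print(f"Implied sustainable intensity: {sustainable_intensity:.1f} trees/ha per 30-year cycle")
```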
Preserving forest genetic resources
Appropriate use and long-term conservation of forest genetic resources (FGR) is part of sustainable forest management, particularly for the adaptation of forests and forest management to climate change. Genetic diversity ensures that forest trees can survive, adapt and evolve under changing environmental conditions. Genetic diversity in forests also contributes to tree vitality and to resilience towards pests and diseases. Furthermore, FGR play a crucial role in maintaining forest biological diversity at both the species and ecosystem levels.
Careful selection of forest reproductive material, with an emphasis on achieving high genetic diversity rather than on producing a uniform stand of trees, is essential for the sustainable use of FGR. Considering the provenance is crucial as well. For example, in relation to climate change, local material may not have the genetic diversity or phenotypic plasticity to guarantee good performance under changed conditions. A different population from further away, which may have experienced selection under conditions more like those forecast for the site to be reforested, might represent a more suitable seed source.
Problems
Wildfires
Forest degradation
Deforestation
Deforestation and climate change
Checkerboarding
Checkerboarding can create problems for access and ecological management. It is one of the major causes of inholdings within the boundaries of national forests. As is the case in northwestern California, checkerboarding has resulted in issues with managing national forest land. Checkerboarding was previously applied to these areas during the period of western expansion, and they are now commercial forest land. Conflicting policies establishing the rights of the private owners of this land have caused some difficulties in the local hardwood timber production economy.
While relieving this land from its checkerboard ownership structure could benefit the timber production economy of the region, checkerboards can also allow the government to extend good forestry practices over intermingled private lands, whether by demonstration or by applying pressure via economies of scale or the right of access.
Unsustainable practices
Clear-cutting
Even-aged timber management
Illegal logging
Certification systems
Forest certification is a globally recognized system for encouraging sustainable forest management and assuring that forest-based goods are derived from sustainably managed forests. This is a voluntary procedure in which an impartial third-party organization evaluates the quality of forest management and output against a set of criteria established by a governmental or commercial certification agency.
Growing environmental awareness and consumer demand for more socially responsible businesses helped third-party forest certification emerge in the 1990s as a credible tool for communicating the environmental and social performance of forest operations.
There are many potential users of certification, including: forest managers, scientists, policy makers, investors, environmental advocates, business consumers of wood and paper, and individuals.
With third-party forest certification, an independent standards setting organization (SSO) develops standards of good forest management, and independent auditors issue certificates to forest operations that comply with those standards. Forest certification verifies that forests are well-managed – as defined by a particular standard – and chain-of-custody certification tracks wood and paper products from the certified forest through processing to the point of sale.
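The following is a minimal sketch of the chain-of-custody idea: each holder in the supply chain records its certificate and its certified inputs, and a product claim is only valid if every upstream link is certified. The record fields and certificate codes are invented for illustration; real schemes define their own requirements.

```python
# Minimal chain-of-custody sketch: each handler in the supply chain records
# the certified inputs it received. Field names and certificate codes are
# invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class CustodyRecord:
    holder: str        # e.g. forest, sawmill, paper mill, printer
    certificate: str   # holder's certificate code (illustrative)
    inputs: list = field(default_factory=list)  # upstream CustodyRecords

def is_chain_unbroken(record, seen=None):
    """A claim is only as good as its weakest link: every upstream holder
    must carry a valid certificate."""
    seen = seen or set()
    if record.holder in seen:  # guard against cyclic references
        return False
    seen.add(record.holder)
    return bool(record.certificate) and all(
        is_chain_unbroken(r, seen) for r in record.inputs
    )

forest = CustodyRecord("certified forest", "FM-0001")
mill = CustodyRecord("sawmill", "COC-0042", inputs=[forest])
printer = CustodyRecord("printer", "COC-0107", inputs=[mill])
print(is_chain_unbroken(printer))  # True: a certificate at every link
```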
This rise of certification led to the emergence of several different systems throughout the world, and as a result there is no single accepted international standard for forest management. ISO members rejected a proposal for a forest management system requirements standard, with a consensus that a management system for certification would not be effective. Instead, ISO members voted for a chain-of-custody standard for wood and wood-based products, published as ISO 38200 in 2018. Without an international standard, each system takes a somewhat different approach, with scheme owners defining private standards for sustainable forest management.
In its 2009–2010 Forest Products Annual Market Review, the United Nations Economic Commission for Europe/Food and Agriculture Organization stated: "Over the years, many of the issues that previously divided the (certification) systems have become much less distinct. The largest certification systems now generally have the same structural programmatic requirements."
Third-party forest certification is an important tool for those seeking to ensure that the paper and wood products they purchase and use come from forests that are well-managed and legally harvested. Incorporating third-party certification into forest product procurement practices can be a centerpiece for comprehensive wood and paper policies that include factors such as the protection of sensitive forest values, thoughtful material selection and efficient use of products.
Without a single international standard, there is a proliferation of private standards, with more than fifty scheme owners offering certification worldwide, addressing the diversity of forest types and tenures. Globally, the two largest umbrella certification programs are:
Programme for the Endorsement of Forest Certification (PEFC)
Forest Stewardship Council (FSC)
The Forest Stewardship Council's Policy on Conversion states that land areas converted from natural forests to round wood production after November 1994 are ineligible for Forest Stewardship Council certification.
The area of forest certified worldwide is growing slowly. PEFC is the world's largest forest certification system, with more than two-thirds of the total global certified area certified to its Sustainability Benchmarks. In 2021, PEFC issued a position statement defending their use of private standards in response to the Destruction: Certified report from Greenpeace.
In North America, there are three certification standards endorsed by PEFC – the Sustainable Forestry Initiative, the Canadian Standards Association's Sustainable Forest Management Standard, and the American Tree Farm System. SFI is the world's largest single forest certification standard by area. FSC has five standards in North America – one in the United States and four in Canada.
While certification is intended as a tool to enhance forest management practices throughout the world, to date most certified forestry operations are located in Europe and North America. A significant barrier for many forest managers in developing countries is that they lack the capacity to undergo a certification audit and maintain operations to a certification standard.
Forest governance
Although a majority of forests continue to be owned formally by governments, the effectiveness of forest governance is increasingly independent of formal ownership. Since the rise of neo-liberal ideology in the 1980s and the emergence of climate change challenges, evidence has accumulated that the state is failing to effectively manage environmental resources. Under neo-liberal regimes in developing countries, the role of the state has diminished and market forces have increasingly taken over the dominant socio-economic role.
The shifting of natural resource management responsibilities from central to state and local governments, where this is occurring, is usually a part of broader decentralization process.
The development of National Forest Funds is one way to address the issue of financing sustainable forest management. National forest funds (NFFs) are dedicated financing mechanisms managed by public institutions designed to support the conservation and sustainable use of forest resources. As of 2014, there are 70 NFFs operating globally.
Community forestry
Community-based forest management (CBFM) is a scheme that links governmental forest agencies and the local community in efforts to regenerate degraded forests, reforest deforested areas, and decrease carbon emissions that contribute to climate change. This partnership is done with the intent of not only repairing damage to the environment but also providing economic and social benefits to the affected area.
In principle, involving the local community in the management and protection of their forests provides employment and supplements income from both wage labor and additional agriculture, which would then strengthen the entire local economy while improving environmental conditions and mitigating climate change. Implementing a CBFM system can therefore support rural development while mitigating climate change and sustaining biodiversity within the region. It is important to engage local community members, many of whom are indigenous, since presumably they have a deeper knowledge of the local ecosystems and of how those ecosystems change over their life cycles. Their involvement also helps to ensure that their cultural practices remain intact.
Forestry law
Mitigation of deforestation and climate change
Scientific studies investigate the ability of forests to absorb carbon dioxide from the atmosphere (carbon sequestration). Through such analysis, researchers can quantify the carbon stocks present in different types of forests and assess their effectiveness as carbon sinks. Understanding the capacity of forests to sequester carbon is crucial for climate change mitigation efforts.
Forest protection
Tropical rainforest conservation
Proforestation
Proforestation is the practice of protecting existing natural forests to foster continuous growth, carbon accumulation, and structural complexity. It is recognized as an important forest-based strategy for addressing the global crises in climate and biodiversity. Forest restoration can be a strategy for climate change mitigation. Proforestation complements other forest-based solutions like afforestation, reforestation and improved forest management.
Allowing proforestation in some secondary forests will increase their accumulated carbon and biodiversity over time. Strategies for proforestation include rewilding, such as reintroducing apex predators and keystone species, since, for example, predators keep herbivore populations (which reduce the biomass of vegetation) in check. Another strategy is establishing wildlife corridors connecting isolated protected areas.
Proforestation refers specifically to enabling continuous forest growth uninterrupted by active management or timber harvesting, a term coined by scientists William Moomaw, Susan Masino, and Edward Faison.
Proforestation differs from agroforestry or the cultivation of forest plantations, the latter consisting of similarly aged trees of just one or two species. Plantations can be an efficient source of wood but often come at the expense of natural forests and cultivate little habitat for biodiversity, such as dead and fallen trees or understory plants. Further, once factoring in emissions from clearing the land and the decay of plantation waste and products at the end of their often brief lifecycles (e.g. paper products), plantations sequester 40 times less carbon than natural forests.
Proforestation is specifically recommended in the "World Scientists' Warning of a Climate Emergency" as a means to "quickly curtail habitat and biodiversity loss" and protect "high carbon stores" and areas "with the capacity to rapidly sequester carbon."
Increasing forest and community resilience
1.6 billion people worldwide depend on forests for their livelihoods, including 300–350 million (half of whom are Indigenous peoples) who live near or within "dense forests" and depend almost entirely on these ecosystems for their survival. Rural households in Asia, Africa, and Latin America also depend on forests for about a quarter of their total incomes, with about half of this in the form of food, fodder, energy, building materials and medicine.
Proforestation can protect full native biodiversity and support the forests and other land types that provide resources we need. For example, research has found that old growth and complex forests are more resistant to the effects of climate change. One study found that taller trees had increased drought resistance, being able to capture and retain water better, due to their deeper root system and larger biomass. This means that even in dry conditions, these trees continued to photosynthesize at a higher rate than smaller trees.
Further, old-growth forests have been shown to be more resistant to fires than young forests, whose trees have thinner bark and which hold more fuel for increasing temperatures and fire damage. Proforestation can help to reduce fire risks to forests and the surrounding communities. Old-growth forests can also help absorb water and prevent flooding in surrounding communities. Considering the variety of ecosystem services complex forests provide, sustaining healthy forests means adjacent communities will be better off as well.
Workers
History
The preindustrial age has been dubbed by Werner Sombart and others the 'wooden age', as timber and firewood were the basic resources for energy, construction and housing. The development of modern forestry is closely connected with the rise of capitalism, economics as a science, and varying notions of land use and property. Roman latifundia, large agricultural estates, were quite successful in maintaining the large supply of wood that was necessary for the Roman Empire. Large deforestations came with the decline of the Romans. However, already in the 5th century, monks in then-Byzantine Romagna on the Adriatic coast were able to establish stone pine plantations to provide fuelwood and food. This was the beginning of the massive forest mentioned by Dante Alighieri in his 1308 poem Divine Comedy.
Similar sustainable formal forestry practices were developed by the Visigoths in the 7th century when, faced with the ever-increasing shortage of wood, they instituted a code concerned with the preservation of oak and pine forests. The use and management of many forest resources has a long history in China as well, dating back to the Han dynasty and taking place under the landowning gentry. A similar approach was used in Japan. It was also later written about by the Ming dynasty Chinese scholar Xu Guangqi (1562–1633).
In Europe, land usage rights in medieval and early modern times allowed different users to access forests and pastures. Plant litter and resin extraction were important, as pitch (resin) was essential for the caulking of ships; other usage rights covered hunting, the gathering of firewood and building timber in wood pastures, and the grazing of animals in forests. The notion of "commons" (German: Allmende) refers to the underlying traditional legal term of common land. The idea of enclosed private property came about during modern times. However, most hunting rights were retained by members of the nobility, preserving their right to access and use common land for recreation such as fox hunting.
13th to 16th century
Systematic management of forests for a sustainable yield of timber began in Portugal in the 13th century when King Afonso III planted the Pinhal do Rei (King's Pine Forest) near Leiria to prevent coastal erosion and soil degradation, and as a sustainable source for timber used in naval construction. His successor King Denis of Portugal continued the practice and the forest exists still today.
Forest management also flourished in the German states in the 14th century, e.g. in Nuremberg, and in 16th-century Japan. Typically, a forest was divided into specific sections and mapped; the harvest of timber was planned with an eye to regeneration. As timber rafting allowed for connecting large continental forests, as in south-western Germany, via the Main, Neckar, Danube and Rhine with the coastal cities and states, early modern forestry and remote trading were closely connected. Large firs in the Black Forest were called „Holländer“, as they were traded to Dutch shipyards. Large timber rafts on the Rhine were 200 to 400 m in length and 40 m in width, and consisted of several thousand logs. The crews consisted of 400 to 500 men, and the rafts carried shelter, bakeries, ovens and livestock stables. Timber rafting infrastructure allowed for large interconnected networks all over continental Europe and is still of importance in Finland.
Starting in the 16th century, enhanced world maritime trade, a boom in housing construction in Europe, and the success and further Berggeschrey (rushes) of the mining industry increased timber consumption sharply. The notion of 'Nachhaltigkeit' (sustainability) in forestry is closely connected to the work of Hans Carl von Carlowitz (1645–1714), a mining administrator in Saxony. His book Sylvicultura oeconomica, oder haußwirthliche Nachricht und Naturmäßige Anweisung zur wilden Baum-Zucht (1713) was the first comprehensive treatise about sustainable yield forestry. In the UK and, to an extent, in continental Europe, the enclosure movement and the Clearances favored strictly enclosed private property. The agrarian reformers, early economic writers and scientists tried to get rid of the traditional commons. At the time, an alleged tragedy of the commons, together with fears of a Holznot (an imminent wood shortage), played a watershed role in the controversies about cooperative land use patterns.
The practice of establishing tree plantations in the British Isles was promoted by John Evelyn, though it had already acquired some popularity. Louis XIV's minister Jean-Baptiste Colbert's oak Forest of Tronçais, planted for the future use of the French Navy, matured as expected in the mid-19th century: "Colbert had thought of everything except the steamship," Fernand Braudel observed. Colbert's vision of forestry management was encoded in the French forestry Ordinance of 1669, which proved to be an influential management system throughout Europe. In parallel, schools of forestry were established beginning in the late 18th century in Hesse, Russia, Austria-Hungary, Sweden, France and elsewhere in Europe.
Mechanization in 19th century
Forestry mechanization was always closely connected to metalworking and the development of mechanical tools to cut and transport timber to its destination. Rafting is among the earliest means of transport. Steel saws emerged in the 15th century. The 19th century greatly increased the availability of steel for whipsaws and introduced forest railways, and railways in general, both for transport and as a customer of forestry. Further human-induced changes came after World War II, in line with the "1950s syndrome". The first portable chainsaw was invented in 1918 in Canada, but the large-scale impact of mechanization in forestry started after World War II. Forestry harvesters are among the most recent developments, and drones, planes, laser scanning, satellites and robots also now play a part in forestry.
Forest conservation and early globalization
Starting from the 1750s, modern scientific forestry was developed in France and the German-speaking countries in the context of natural history scholarship and state administration inspired by physiocracy and cameralism. Its main traits were centralized management by professional foresters, adherence to sustainable yield concepts with a bias towards fuelwood and timber production, artificial afforestation, and a critical view of pastoral and agricultural uses of forests.
During the late 19th and early 20th centuries, forest preservation programs were established in British India, the United States, and Europe. Many foresters were either from continental Europe (like Sir Dietrich Brandis) or educated there (like Gifford Pinchot). Sir Dietrich Brandis is considered the father of tropical forestry; European concepts and practices had to be adapted to tropical and semi-arid climate zones. The development of plantation forestry was one of the (controversial) answers to the specific challenges in the tropical colonies. The enactment and evolution of forest laws and binding regulations occurred in most Western nations in the 20th century in response to growing conservation concerns and the increasing technological capacity of logging companies. Tropical forestry is a separate branch of forestry which deals mainly with equatorial forests that yield woods such as teak and mahogany.
21st century
A strong body of research exists regarding the management of forest ecosystems and the genetic improvement of tree species and varieties. Forestry studies also include the development of better methods for the planting, protecting, thinning, controlled burning, felling, extracting, and processing of timber. One of the applications of modern forestry is reforestation, in which trees are planted and tended in a given area. Trees provide numerous environmental, social and economic benefits for people. In many regions, the forest industry is of major ecological, economic, and social importance, with the United States producing more timber than any other country in the world. Third-party certification systems that provide independent verification of sound forest stewardship and sustainable forestry have become commonplace in many areas since the 1990s. These certification systems developed as a response to criticism of some forestry practices, particularly deforestation in less-developed regions, along with concerns over resource management in the developed world.
In topographically severe forested terrain, proper forestry is important for the prevention or minimization of serious soil erosion or even landslides. In areas with a high potential for landslides, forests can stabilize soils and prevent property damage or loss, human injury, or loss of life.
Global production of roundwood rose from 3.5 billion m³ in 2000 to 4 billion m³ in 2021. In 2021, wood fuel was the main product with a 49 percent share of the total (2 billion m³), followed by coniferous industrial roundwood with 30 percent (1.2 billion m³) and non-coniferous industrial roundwood with 21 percent (0.9 billion m³). Asia and the Americas are the two main producing regions, accounting for 29 and 28 percent of the total roundwood production, respectively; Africa and Europe have similar shares of 20–21 percent, while Oceania produces the remaining 2 percent.
Many lower- and middle-income countries rely on wood for energy purposes (notably cooking). The largest wood fuel producers are all in these income groups and have large populations with a high reliance on wood for energy: in 2021, India ranked first with 300 million m³ (15 percent of global wood fuel production), followed by China with 156 million m³ and Brazil with 129 million m³ (8 percent and 7 percent of global wood fuel production).
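A quick consistency check of these figures (a sketch using only the numbers quoted above; small residuals reflect rounding in the source) also shows that the country shares refer to wood fuel production rather than total roundwood:

```python
# Consistency check on the 2021 roundwood figures quoted above.

components = {"wood fuel": 2.0, "coniferous industrial roundwood": 1.2,
              "non-coniferous industrial roundwood": 0.9}  # billion m³
total = sum(components.values())  # 4.1, close to the quoted "4 billion"
for name, volume in components.items():
    print(f"{name}: {100 * volume / total:.0f}% of roundwood")
# -> roughly 49%, 29%, 22%; the quoted shares are 49%, 30% and 21%.

# Country shares only reproduce against the wood fuel total (2 billion m³),
# not against total roundwood (4 billion m³):
wood_fuel_total = 2000  # million m³
for country, volume in {"India": 300, "China": 156, "Brazil": 129}.items():
    print(f"{country}: {100 * volume / wood_fuel_total:.0f}% of wood fuel production")
# -> India 15%, China 8%, Brazil 6% (the source rounds Brazil up to 7%).
```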
Journals
Sylwan, first published in 1820
Schweizerische Zeitschrift für Forstwesen, first published in 1850
Erdészeti Lapok, first published in 1862 (Hungary, 1862–present)
The Indian Forester, first published in 1875
Šumarski list (Forestry Review, Croatia), first published in 1877 by the Croatian Forestry Society
Montes (Forestry, Spain), first published in 1877
Revista pădurilor (Journal of Forests, Romania, 1881–1882; 1886–present), the oldest extant magazine in Romania
Forestry Quarterly, first published in 1902 by the New York State College of Forestry
Šumarstvo (Forestry, Serbia), first published in 1948 by the Ministry of Forestry of Democratic Federal Yugoslavia, and since 1951 by the Society of Forestry Engineers and Technicians of the Republic of Serbia (succeeding the former Šumarski glasnik, published 1907–1921)
Society and culture
Public input and awareness
There has been increased public awareness of natural resource policy, including forest management. Public concern regarding forest management may have shifted from the extraction of timber for economic development, to maintaining the flow of the range of ecosystem services provided by forests, including provision of habitat for wildlife, protecting biodiversity, watershed management, and opportunities for recreation. Increased environmental awareness may contribute to an increased public mistrust of forest management professionals. But it can also lead to greater understanding about what professionals do for forests for nature conservation and ecological services.
By region
Developing world
In December 2007, at the Climate Change Conference in Bali, the issue of deforestation in the developing world in particular was raised and discussed. The foundations of a new incentive mechanism for encouraging sustainable forest management measures were therefore laid, in hopes of reducing world deforestation rates. This mechanism was formalized and adopted as REDD in November 2010 at the Climate Change Conference in Cancun (UNFCCC COP 16). Developing countries that are signatories of the CBD were encouraged to take measures to implement REDD activities, in the hope of becoming more active contributors to global greenhouse gas mitigation efforts, as deforestation and forest degradation account for roughly 15% of total global greenhouse gas emissions. The REDD activities are formally tasked with "reducing emissions from deforestation and forest degradation; and the role of conservation, sustainable management of forests and enhancement of forest carbon stocks in developing countries". REDD+ works in 3 phases. The first phase consists of developing viable strategies, while the second phase begins work on technology development and technology transfer to the developing countries taking part in REDD+ activities. The last phase measures and reports on the implementation of the actions taken. In 2021 the LEAF Coalition was created, aiming to provide $1 billion to countries that protect their tropical and subtropical forests.
European Union
In 2022 the European Parliament approved a bill aiming to stop imports linked with deforestation. The bill may cause Brazil, for example, to stop deforestation for agricultural production and begin to "increase productivity on existing agricultural land". The legislation was adopted with some changes by the European Council in May 2023 and was expected to enter into force several weeks later. The bill requires companies that want to import certain types of products into the European Union to prove that the production of those commodities is not linked to areas deforested after 31 December 2020. It also prohibits the import of products linked with human rights abuses. The list of products includes palm oil, cattle, wood, coffee, cocoa, rubber and soy. Some derivatives of those products are also included: chocolate, furniture, printed paper and several palm oil-based derivatives.
Great Britain
The Forestry Commission was founded in 1919 to restore forests to Great Britain after World War I. The Commission regulates both private and public forests, as well as managing private forests. Agricultural land was bought and transformed, and at one point the Commission's holdings amounted to 35% of the British woodland area.
America
Canada
Canada contributes significantly to global sustainable forest management: 166 million hectares of its forest land are independently certified as sustainably managed, representing 40% of the world's certified forests, more than any other country. Approximately 94% of Canada's forest land is publicly owned. Sustainable forest management strategies aim to reconcile various immediate demands while ensuring that forests continue to provide benefits for future generations.
The province of Ontario has its own sustainable forest management measures in place. A little less than half of all the publicly owned forests of Ontario are managed forests, required by the Crown Forest Sustainability Act to be managed sustainably. Sustainable management is often carried out by forest companies that are granted Sustainable Forest Licences, valid for 20 years. The main goal of Ontario's sustainable forest management measures is to ensure that the forests are kept healthy and productive and that biodiversity is conserved, all while supporting communities and forest industry jobs. All management strategies and plans are highly regulated, arranged to last for a 10-year period, and follow the strict guidelines of the Forest Management Planning Manual. Alongside public sustainable forest management, the government of Ontario also encourages sustainable forest management of Ontario's private forests through incentives. So far, 44% of Ontario's Crown forests are managed.
In order for logging to begin, forestry companies must present a plan to the government, which then communicates with the public, First Nations and other industries in order to protect forest values. The plan must include strategies on how forest values will be protected, an assessment of the state of the forest and whether it is capable of recovering from human activity, and strategies for regeneration. After the harvest begins, the government monitors whether the company is complying with the planned restrictions and also monitors the health of the ecosystem (soil depletion and erosion, water contamination, wildlife, etc.). Failure to comply may result in fines, suspensions, removal of harvesting rights, confiscation of harvested timber and possible imprisonment.
United States
At the beginning of 2020, the Save the Redwoods League, after a successful crowdfunding campaign, bought Alder Creek, a 583-acre piece of land with 483 large sequoia trees, including the fifth-largest tree in the world. The organization plans to carry out forest thinning there, which is a controversial operation.
Asia
Russia
In 2019, after severe wildfires and public pressure, the Russian government decided to take a number of measures for more effective forest management, which is considered a big victory for the environmental movement.
Indonesia
In August 2019, a court in Indonesia stopped the construction of a dam that could have heavily hurt forests and villagers in the area.
In 2020 the rate of deforestation in Indonesia was the slowest since 1990, 75% lower than in 2019. This was because the government stopped issuing new licences to cut forests, including for palm oil plantations; the falling price of palm oil made this easier. Very wet weather reduced wildfires, which also contributed to the achievement.
Africa
Congo
In August 2021 UNESCO removed the Salonga National Park from its list of threatened sites. The prohibition of oil drilling and a reduction in poaching played a crucial role in this achievement. The event is considered a big win for the Democratic Republic of the Congo, as the Salonga forest is the biggest protected rainforest in Africa.
Kenya
Article 10 of the Kenyan Constitution mandates the incorporation of sustainable development into all laws and decisions regarding public policy, including forest conservation and management. In accordance with this, Kenya is taking action in response to continued deforestation, forest degradation, and forest encroachment, which result in the conversion of forest land to settlement and agriculture.
See also
Conservation biology
Coppicing
Environmental protection
Forest farming
Forest inventory
Forest plans
Outline of forestry
Sustainable land management
:Category:Forest conservation
References
Sources
External links
Why does this forest look like a fingerprint? We set out to solve why a forest in the middle of Uruguay looked like that — and wound up discovering something much bigger. Vox – explores issues surrounding commercial monoculture forest management and its impact upon the economy, previously existing habitats, wildlife, and people.
California fires are so severe some forests might vanish forever LA Times – a serious forest management issue, in which industrial forest management practices are increasing high-severity fire risk.
The checkerboard effect OSU Press – explores how the effects of forest management decisions can last for centuries, and the value of forest history in understanding current forest management problems.
Proforestation by Project Regeneration
Ecological processes
Habitat management equipment and methods
Forest certification
Forest governance
Forest conservation | Forest management | Physics | 9,740 |
1,750,677 | https://en.wikipedia.org/wiki/Durable%20good | In economics, a durable good or a hard good or consumer durable is a good that does not quickly wear out or, more specifically, one that yields utility over time rather than being completely consumed in one use. Items like bricks could be considered perfectly durable goods because they should theoretically never wear out. Highly durable goods such as refrigerators or cars usually continue to be useful for several years of use, so durable goods are typically characterized by long periods between successive purchases.
Durable goods form an important part of economic production. For example, personal expenditures on durables exceeded $800 billion in 2000, and in that same year durable goods production made up approximately 60 percent of aggregate production within the manufacturing sector in the United States.
Examples of consumer durable goods include vehicles, books, household goods (home appliances, consumer electronics, furniture, musical instruments, tools, etc.), sports equipment, jewelry, medical equipment, and toys.
Nondurable goods or soft goods (consumables) are the opposite of durable goods. They may be defined either as goods that are immediately consumed in one use or ones that have a lifespan of less than three years. Examples of nondurable goods include fast-moving consumer goods such as food, cosmetics, cleaning products, medication, clothing, packaging and fuel. While durable goods can usually be rented as well as bought, nondurable goods generally are not rented.
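As a trivial illustration of the three-year rule of thumb described above (the goods and life spans here are invented examples):

```python
# Classify goods by expected lifespan using the three-year rule of thumb
# described above. Goods and lifespans are invented examples.

def classify(expected_lifespan_years):
    return "durable" if expected_lifespan_years >= 3 else "nondurable"

goods = {"refrigerator": 14, "car": 12, "shampoo": 0.2, "jacket": 2}
for good, years in goods.items():
    print(f"{good}: {classify(years)}")
```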
Durability
According to Cooper (1994, p5) "durability is the ability of a product to perform its required function over a lengthy period under normal conditions of use without excessive expenditure on maintenance or repair". Several units may be used to measure the durability of a product according to its field of application such as years of existence, hours of use and operational cycles.
Product life spans and sustainable consumption
The life span of household goods is significant for sustainable consumption. Longer product life spans can contribute to eco-efficiency and sufficiency, thus slowing consumption in order to progress towards sustainable consumption. Cooper (2005) proposed a model to demonstrate the crucial role of product life spans for sustainable production and consumption.
Durability, as a characteristic relating to the quality of goods that consumers can demand, was not clearly established until a 1994 amendment to the law relating to quality standards for supplied goods.
The condition of the economy is one of the biggest factors, as is the philosophy of money. Consumers want to use their money effectively, essentially getting what they paid for and, in the best case, more than what they paid for. In the pursuit of durable goods across the life spans and consumption of those products, money and price are two of the biggest factors besides supply and demand. "At some point, people will realize that they can trade more easily if they use some intermediate good—money. This intermediate good should ideally be easy to handle, store and transport (function i). It should be easy to measure and divide to facilitate calculations (function ii). And it should be difficult to destroy so that it lasts over time (function iii)" (de Bruin 2023). Durable goods fall into this category, since ease of commerce and convenience are key factors in making them good products to buy.
See also
Coase conjecture
Disposable product
Eco-action
Industrial organization
Pacman conjecture
Planned obsolescence
Putty-putty
Quality assurance
Source reduction
Waste minimisation
References
Goods (economics)
Environmentalism
Industrial ecology
Sustainability
Waste management concepts
Waste minimisation | Durable good | Physics,Chemistry,Engineering | 724 |
55,353,295 | https://en.wikipedia.org/wiki/Paper%20spray%20ionization | Paper spray ionization is a technique used in mass spectrometry to produce ions from a sample to be analyzed. It is a variant of electrospray ionization. The sample (for instance a few microlitres of blood or urine) is applied to a piece of paper and solvent is added. Then a high voltage is applied, which creates the ions to be analyzed with a mass spectrometer. The method, first described in 2010, is relatively easy to use and can detect and measure the presence of various substances in the sample. This technique shows great potential for point-of-care clinical applications, in that important tests may be run and results obtained within a reasonable amount of time in proximity to the patient in a single visit. In 2017 it was reported that a test based on paper spray ionization mass spectrometry can detect cocaine use from a subject's fingerprint. It was also used to detect pesticides from the surfaces of fruits.
More recently, an advanced form of Paper Spray, termed Paper Arrow, was developed. This universal approach seamlessly hyphenates Paper Chromatography and Mass Spectrometry, facilitated by on-paper ionization without requiring visual indicators. The entire process of Paper Arrow was shown to be simple and fast, requiring only 2 μL of raw biological sample. Its analytical performance is in accordance with stringent clinical guidelines, and it demonstrated superior figures of merit compared to LC-MS. Paper Arrow is one of the few ambient ionization sources that has been clinically validated. In a study with 17 volunteers, blood and saliva samples were collected before and at 15, 30, 60 and 240 min after ingesting 1 g of paracetamol. Detection from stimulated saliva and plasma with PA-MS provided a reliable result that can aid in making timely treatment decisions. Moreover, participants’ views of blood and saliva sampling procedures were assessed qualitatively, showing a preference for non-invasive sampling.
References
Ion source | Paper spray ionization | Physics | 404 |
367,077 | https://en.wikipedia.org/wiki/Alu%20element | An Alu element is a short stretch of DNA originally characterized by the action of the Arthrobacter luteus (Alu) restriction endonuclease. Alu elements are the most abundant transposable elements in the human genome, present in excess of one million copies. Alu elements were thought to be selfish or parasitic DNA, because their sole known function is self reproduction. However, they are likely to play a role in evolution and have been used as genetic markers. They are derived from the small cytoplasmic 7SL RNA, a component of the signal recognition particle. Alu elements are highly conserved within primate genomes and originated in the genome of an ancestor of Supraprimates.
Alu insertions have been implicated in several inherited human diseases and in various forms of cancer.
The study of Alu elements has also been important in elucidating human population genetics and the evolution of primates, including the evolution of humans.
Alu family
The Alu family is a family of repetitive elements in primate genomes, including the human genome. Modern Alu elements are about 300 base pairs long and are therefore classified as short interspersed nuclear elements (SINEs) among the class of repetitive RNA elements. The typical structure is 5' - Part A - A₅TACA₆ - Part B - polyA tail - 3', where Part A and Part B (also known as "left arm" and "right arm") are similar nucleotide sequences. Expressed another way, it is believed modern Alu elements emerged from a head-to-tail fusion of two distinct FAMs (fossil Alu monomers) over 100 million years ago, hence the dimeric structure of two similar, but distinct, monomers (left and right arms) joined by an A-rich linker. Both monomers are thought to have evolved from 7SL, also known as SRP RNA. The length of the polyA tail varies between Alu families.
There are over one million Alu elements interspersed throughout the human genome, and it is estimated that about 10.7% of the human genome consists of Alu sequences. However, less than 0.5% are polymorphic (i.e., occurring in more than one form or morph). In 1988, Jerzy Jurka and Temple Smith discovered that Alu elements were split in two major subfamilies known as AluJ (named after Jurka) and AluS (named after Smith), and other Alu subfamilies were also independently discovered by several groups. Later on, a sub-subfamily of AluS which included active Alu elements was given the separate name AluY. Dating back 65 million years, the AluJ lineage is the oldest and least active in the human genome. The younger AluS lineage is about 30 million years old and still contains some active elements. Finally, the AluY elements are the youngest of the three and have the greatest disposition to move along the human genome. The discovery of Alu subfamilies led to the hypothesis of master/source genes, and provided the definitive link between transposable elements (active elements) and interspersed repetitive DNA (mutated copies of active elements).
Related elements
B1 elements in rats and mice are similar to Alus in that they also evolved from 7SL RNA, but they have only one left monomer arm. 95 percent of human Alus are also found in chimpanzees, and 50 percent of B elements in mice are also found in rats. These elements are mostly found in introns and upstream regulatory elements of genes.
The ancestral form of Alu and B1 is the fossil Alu monomer (FAM). Free-floating forms of the left and right arms exist, termed free left Alu monomers (FLAMs) and free right Alu monomers (FRAMs) respectively. A notable FLAM in primates is the BC200 lncRNA.
Sequence features
Two main promoter "boxes" are found in Alu: a 5' A box with the consensus , and a 3' B box with the consensus (IUPAC nucleic acid notation). tRNAs, which are transcribed by RNA polymerase III, have a similar but stronger promoter structure. Both boxes are located in the left arm.
Alu elements contain four or fewer retinoic acid response element hexamer sites in their internal promoter, with the last one overlapping the "B box"; in the 7SL (SRP) RNA sequence, the third of these hexamers is non-functional.
The recognition sequence of the Alu I endonuclease is 5' ag/ct 3'; that is, the enzyme cuts the DNA segment between the guanine and cytosine residues (in lowercase above).
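As a short illustration, the following sketch locates AluI recognition sites and the resulting blunt-end cut positions in a DNA string; the example sequence is arbitrary, not a real Alu element:

```python
# Find AluI recognition sites (AGCT) in a DNA sequence and report cut
# positions; AluI cleaves between the G and C, leaving blunt ends.

def alui_cut_sites(seq):
    seq = seq.upper()
    sites = []
    start = 0
    while (i := seq.find("AGCT", start)) != -1:
        # Cut falls between positions i+1 (G) and i+2 (C), 0-based.
        sites.append(i + 2)
        start = i + 1
    return sites

# Arbitrary example sequence (not a real Alu element).
dna = "GGAGCTTACCAGCTGGAAAGCTA"
cuts = alui_cut_sites(dna)
print("cut positions:", cuts)  # [4, 12, 20]

# Digest the sequence into blunt-ended fragments at those positions.
fragments = [dna[a:b] for a, b in zip([0] + cuts, cuts + [len(dna)])]
print("fragments:", fragments)  # ['GGAG', 'CTTACCAG', 'CTGGAAAG', 'CTA']
```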
Alu elements
Alu elements are responsible for regulation of tissue-specific genes. They are also involved in the transcription of nearby genes and can sometimes change the way a gene is expressed.
Alu elements are retrotransposons and resemble DNA copies made from RNA polymerase III-encoded RNAs. Alu elements do not encode protein products. They are replicated like any other DNA sequence, but depend on LINE retrotransposons for the generation of new elements.
Alu element replication and mobilization begins by interactions with signal recognition particles (SRPs), which aid newly translated proteins to reach their final destinations. Alu RNA forms a specific RNA:protein complex with a protein heterodimer consisting of SRP9 and SRP14. SRP9/14 facilitates Alu's attachment to ribosomes that capture nascent L1 proteins. Thus, an Alu element can take control of the L1 protein's reverse transcriptase, ensuring that the Alu's RNA sequence gets copied into the genome rather than the L1's mRNA.
Alu elements in primates form a fossil record that is relatively easy to decipher because Alu element insertion events have a characteristic signature that is both easy to read and faithfully recorded in the genome from generation to generation. The study of AluY elements (the most recently evolved) thus reveals details of ancestry, because individuals will most likely only share a particular Alu element insertion if they have a common ancestor. This is because insertion of an Alu element occurs only 100–200 times per million years, and no known mechanism of deletion has been found. Therefore, individuals with an element likely descended from an ancestor with one, and vice versa for those without. In genetics, the presence or absence of a recently inserted Alu element can be a useful property to consider when studying human evolution. Most human Alu element insertions can be found in the corresponding positions in the genomes of other primates, but about 7,000 Alu insertions are unique to humans.
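Because shared insertions almost always imply a shared ancestor, Alu presence/absence data can be scored as simple binary characters. A minimal sketch follows; the loci and genotypes are invented for illustration, although the locus names echo real AluY subfamily names:

```python
# Scoring Alu insertion presence/absence as binary characters.
# Loci and individuals below are invented for illustration only.

individuals = {
    "A": {"AluYa5_locus1": 1, "AluYb8_locus2": 1, "AluY_locus3": 0},
    "B": {"AluYa5_locus1": 1, "AluYb8_locus2": 0, "AluY_locus3": 0},
    "C": {"AluYa5_locus1": 0, "AluYb8_locus2": 0, "AluY_locus3": 1},
}

def shared_insertions(x, y):
    """Count loci where both individuals carry the insertion (state 1).
    Because independent insertion at the same locus is exceedingly rare
    and deletions are essentially unknown, shared presence points to a
    common ancestor that carried the insertion."""
    return sum(1 for locus in x if x[locus] == 1 and y[locus] == 1)

names = list(individuals)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        n = shared_insertions(individuals[a], individuals[b])
        print(f"{a} vs {b}: {n} shared insertion(s)")
# A and B share locus1, suggesting a more recent common ancestor than with C.
```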
Impact in humans
Alu elements have been proposed to affect gene expression and have been found to contain functional promoter regions for steroid hormone receptors. Due to the abundance of CpG dinucleotides found in Alu elements, these regions serve as sites of methylation, contributing up to 30% of the methylation sites in the human genome. Alu elements are also a common source of mutations in humans; however, such mutations are often confined to non-coding regions of pre-mRNA (introns), where they have little discernible impact on the bearer. Mutations in introns have little or no effect on the phenotype of an individual if the coding portion of the individual's genome does not contain mutations. The Alu insertions that can be detrimental are those inserted into coding regions (exons) or into mRNA after the process of splicing.
However, the variation generated can be used in studies of the movement and ancestry of human populations, and the mutagenic effect of Alu and retrotransposons in general has played a major role in the evolution of the human genome. There are also a number of cases where Alu insertions or deletions are associated with specific effects in humans:
Associations with human disease
Alu insertions are sometimes disruptive and can result in inherited disorders. However, most Alu variation acts as markers that segregate with the disease, so the presence of a particular Alu allele does not mean that the carrier will definitely get the disease. The first report of Alu-mediated recombination causing a prevalent inherited predisposition to cancer was a 1995 report about hereditary nonpolyposis colorectal cancer. In the human genome, the most recently active subfamilies have been 22 AluY and 6 AluS transposable element subfamilies, whose inherited activity can cause various cancers. Because of this heritable damage, it is important to understand the factors that affect their transpositional activity.
The following human diseases have been linked with Alu insertions:
Alport syndrome
Breast cancer
chorioretinal degeneration
Diabetes mellitus type II
Ewing's sarcoma
Familial hypercholesterolemia
Hemophilia
Leigh syndrome
mucopolysaccharidosis VII
Neurofibromatosis
Macular degeneration
And the following diseases have been associated with single-nucleotide DNA variations in Alu elements affecting transcription levels:
Alzheimer's disease
Lung cancer
Gastric cancer
The following disease has been associated with expansion of an AAGGG pentamer repeat in an Alu element:
RFC1 mutation responsible for CANVAS (cerebellar ataxia, neuropathy and vestibular areflexia syndrome)
Associated human mutations
The ACE gene, encoding angiotensin-converting enzyme, has 2 common variants, one with an Alu insertion (ACE-I) and one with the Alu deleted (ACE-D). This variation has been linked to changes in sporting ability: the presence of the Alu element is associated with better performance in endurance-oriented events (e.g. triathlons), whereas its absence is associated with strength- and power-oriented performance.
The opsin gene duplication which resulted in the re-gaining of trichromacy in Old World primates (including humans) is flanked by an Alu element, implicating the role of Alu in the evolution of three colour vision.
References
External links
Repetitive DNA sequences
Human genetics | Alu element | Biology | 2,146 |
2,852,949 | https://en.wikipedia.org/wiki/Envelope%20glycoprotein%20GP120 | Envelope glycoprotein GP120 (or gp120) is a glycoprotein exposed on the surface of the HIV envelope. It was discovered by Professors Tun-Hou Lee and Myron "Max" Essex of the Harvard School of Public Health in 1984. The 120 in its name comes from its molecular weight of 120 kDa. Gp120 is essential for virus entry into cells as it plays a vital role in attachment to specific cell surface receptors. These receptors are DC-SIGN, Heparan Sulfate Proteoglycan and a specific interaction with the CD4 receptor, particularly on helper T-cells. Binding to CD4 induces the start of a cascade of conformational changes in gp120 and gp41 that lead to the fusion of the viral membrane with the host cell membrane. Binding to CD4 is mainly electrostatic although there are van der Waals interactions and hydrogen bonds.
Gp120 is coded by the HIV env gene, which is around 2.5 kb long and codes for around 850 amino acids. The primary env product is the protein gp160, which is cleaved into gp120 (~480 amino acids) and gp41 (~345 amino acids) in the endoplasmic reticulum by the cellular protease furin. The crystal structure of core gp120 shows an organization with an outer domain, an inner domain with respect to its termini, and a bridging sheet. Gp120 is anchored to the viral membrane, or envelope, via non-covalent bonds with the transmembrane glycoprotein gp41. Three gp120s and gp41s combine in a trimer of heterodimers to form the envelope spike, which mediates attachment to and entry into the host cell.
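A rough mass accounting shows why a ~480-residue polypeptide carries a 120 kDa name: the protein backbone accounts for only about half of the apparent mass, the remainder coming from extensive N-linked glycosylation. The following is a back-of-the-envelope sketch assuming a rule-of-thumb average residue mass of ~110 Da:

```python
# Back-of-the-envelope mass accounting for gp120 (assumes an average
# residue mass of ~110 Da; figures are rough estimates, not measurements).

residues = 480              # approximate length of the gp120 polypeptide
avg_residue_mass = 110      # Da, a common rule-of-thumb average
polypeptide_kda = residues * avg_residue_mass / 1000
print(f"polypeptide alone: ~{polypeptide_kda:.0f} kDa")  # ~53 kDa

apparent_kda = 120          # apparent mass that gives gp120 its name
glycan_kda = apparent_kda - polypeptide_kda
print(f"mass attributable to glycans: ~{glycan_kda:.0f} kDa "
      f"({glycan_kda / apparent_kda:.0%} of the total)")
# ~56%, consistent with gp120 being roughly half carbohydrate by mass
```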
Variability
Since gp120 plays a vital role in the ability of HIV-1 to enter CD4+ cells, its evolution is of particular interest. Many neutralizing antibodies bind to sites located in variable regions of gp120, so mutations in these regions will be selected for strongly. The diversity of env has been shown to increase by 1-2% per year in HIV-1 group M and the variable units are notable for rapid changes in amino acid sequence length. Increases in gp120 variability result in significantly elevated levels of viral replication, indicating an increase in viral fitness in individuals infected by diverse HIV-1 variants. Further studies have shown that variability in potential N-linked glycosylation sites (PNGSs) also result in increased viral fitness. PNGSs allow for the binding of long-chain carbohydrates to the high variability regions of gp120, so the authors hypothesize that the number of PNGSs in env might affect the fitness of the virus by providing more or less sensitivity to neutralizing antibodies. The presence of large carbohydrate chains extending from gp120 might obscure possible antibody binding sites.
The boundaries of the potential to add and eliminate PNGSs are naively explored by growing viral populations following each new infection. While the transmitting host has developed a neutralizing antibody response to gp120, the newly infected host lacks immune recognition of the virus. Sequence data shows that initial viral variants in an immunologically naïve host have few glycosylation sites and shorter exposed variable loops. This may facilitate viral ability to bind host cell receptors. As the host immune system develops antibodies against gp120, immune pressures seem to select for increased glycosylation, particularly on the exposed variable loops of gp120. Consequently, insertions in env, which confer more PNGSs on gp120 may be more tolerated by the virus as higher glycan density promotes the viral ability to evade antibodies and thus promotes higher viral fitness. In considering how much PNGS density could theoretically change, there may be an upper bound to PNGS number due to its inhibition of gp120 folding, but if the PNGS number decreases substantially, then the virus is too easily detected by neutralizing antibodies. Therefore, a stabilizing selection balance between low and high glycan densities is likely established. A lower number of bulky glycans improves viral replication efficiency and higher number on the exposed loops aids host immune evasion via disguise.
The relationship between gp120 and neutralizing antibodies is an example of Red Queen evolutionary dynamics. Continuing evolutionary adaptation is required for the viral envelope protein to maintain fitness relative to the continuing evolutionary adaptations of the host immune neutralizing antibodies, and vice versa, forming a coevolving system.
Vaccine target
Since CD4 receptor binding is the most obvious step in HIV infection, gp120 was among the first targets of HIV vaccine research. Efforts to develop HIV vaccines targeting gp120, however, have been hampered by the chemical and structural properties of gp120, which make it difficult for antibodies to bind to it. gp120 can also easily be shed from the surface of the virus and captured by T cells due to its loose binding with gp41. A conserved region in the gp120 glycoprotein that is involved in the metastable attachment of gp120 to CD4 has been identified, and targeting of this invariant region has been achieved with a broadly neutralising antibody, IgG1-b12.
NIH research published in Science reports the isolation of 3 antibodies that neutralize 90% of HIV-1 strains at the CD4bs region of gp120, potentially offering a therapeutic and vaccine strategy. However, most antibodies that bind the CD4bs region of gp120 do not neutralize HIV, and the rare ones that do, such as IgG1-b12, have unusual properties such as asymmetry of the Fab arms or in their positioning. Unless a gp120-based vaccine can be designed to elicit antibodies with strongly neutralizing antiviral properties, there is concern that breakthrough infection could lead to humoral production of high levels of non-neutralizing antibodies targeting the CD4 binding site of gp120, which is associated with faster disease progression to AIDS.
Competition
The protein gp120 is necessary during the initial binding of HIV to its target cell. Consequently, anything which binds to gp120 or its targets can physically block gp120 from binding to a cell. Only one such agent, Maraviroc, which binds the co-receptor CCR5, is currently licensed and in clinical use. No agent targeting gp120's primary cellular interaction partner, CD4, is currently licensed, since interfering with such a central molecule of the immune system can cause toxic side effects, as was seen with the anti-CD4 monoclonal antibody OKT4. Targeting gp120 itself has proven extremely difficult due to its high degree of variability and shielding. Fostemsavir (BMS-663068) is a methyl phosphate prodrug of the small molecule inhibitor BMS-626529, which prevents viral entry by binding to the viral envelope gp120 and interfering with virus attachment to the host CD4 receptor.
HIV dementia
The HIV viral protein gp120 induces apoptosis of neuronal cells by inhibiting levels of furin and tissue plasminogen activator, the enzymes responsible for converting pBDNF to mBDNF. Gp120 also induces mitochondrial death proteins such as caspases, which may influence the upregulation of the death receptor Fas, leading to apoptosis of neuronal cells; it induces oxidative stress in neuronal cells; and it is known to activate STAT1 and induce secretion of the interleukins IL-6 and IL-8 in neuronal cells.
See also
HIV envelope gene
HIV entry to the cell
gp41
CD4
CCR5
Entry inhibitor
Structure and genome of HIV
References
Further reading
Human Immunodeficiency Virus Glycoprotein 120
External links
https://web.archive.org/web/20060219135317/http://www.aidsmap.com/en/docs/4406022B-85D7-4A9B-B700-91336CBB6B18.asp
http://www.mcld.co.uk/hiv/?q=gp120
http://www.ebi.ac.uk/interpro/IEntry?ac=IPR000777
Glycoproteins
HIV/AIDS
Viral structural proteins | Envelope glycoprotein GP120 | Chemistry | 1,753 |
14,551,977 | https://en.wikipedia.org/wiki/GPRC5C | G-protein coupled receptor family C group 5 member C is a protein that in humans is encoded by the GPRC5C gene.
Function
The protein encoded by this gene is a member of the type 3 G protein-coupled receptor family. Members of this superfamily are characterized by a signature 7-transmembrane domain motif. The specific function of this protein is unknown; however, this protein may mediate the cellular effects of retinoic acid on the G protein signal transduction cascade. Two transcript variants encoding different isoforms have been found for this gene.
See also
Retinoic acid-inducible orphan G protein-coupled receptor
References
Further reading
G protein-coupled receptors | GPRC5C | Chemistry | 138 |
10,854,684 | https://en.wikipedia.org/wiki/Karnaugh%20map | A Karnaugh map (KM or K-map) is a diagram that can be used to simplify a Boolean algebra expression. Maurice Karnaugh introduced it in 1953 as a refinement of Edward W. Veitch's 1952 Veitch chart, which itself was a rediscovery of Allan Marquand's 1881 logical diagram (aka. Marquand diagram). It is also useful for understanding logic circuits. Karnaugh maps are also known as Marquand–Veitch diagrams, Svoboda charts (albeit only rarely) and Karnaugh–Veitch maps (KV maps).
Definition
A Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability. It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code, and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the Boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table. These terms can be used to write a minimal Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using the minimal number of logic gates. A sum-of-products expression (SOP) can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression (POS) leads to OR gates feeding an AND gate. The POS expression gives a complement of the function (if F is the function, then its complement will be F'). Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.
Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table.
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D, and their inverses.
f(A, B, C, D) = ∑m(6, 8, 9, 10, 11, 12, 13, 14), where the m() are the minterms to map (i.e., rows that have output 1 in the truth table).
f(A, B, C, D) = ∏M(0, 1, 2, 3, 4, 5, 7, 15), where the M() are the maxterms to map (i.e., rows that have output 0 in the truth table).
Construction
In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.
The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
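This ordering is easy to generate programmatically. The following sketch, in JavaScript with an illustrative function name, uses the standard identity that the i-th n-bit Gray code is i XOR (i >> 1):

// n-bit Gray code: entry i is i ^ (i >> 1), so consecutive entries
// (including the wrap-around from the last entry back to the first)
// differ in exactly one bit.
function grayCode(n) {
  var codes = [];
  for (var i = 0; i < Math.pow(2, n); i++) {
    codes.push(i ^ (i >> 1));
  }
  return codes;
}

// The 2-bit sequence used for the row and column headers of a 4-variable map:
// grayCode(2) -> [0, 1, 3, 2], i.e. 00, 01, 11, 10.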
Grouping
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms — a canonical form — for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2x2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, AD' would mean the cells where A is true and D is false (that is, D' is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore, AD' can be a valid term—it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14—as is B'D', which includes the four corners.
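The wrap-around adjacency can be verified mechanically. A minimal sketch, assuming rows encode AB and columns encode CD, both in the Gray order above:

// Rows encode AB and columns encode CD, both in Gray order.
var gray = [0, 1, 3, 2]; // 00, 01, 11, 10
function cellIndex(row, col) {
  return (gray[row] << 2) | gray[col];
}

// Count differing bits between two cell indices.
function bitsDiffering(x, y) {
  var diff = x ^ y, count = 0;
  while (diff) { count += diff & 1; diff >>= 1; }
  return count;
}

for (var i = 0; i < 4; i++) {
  // Leftmost and rightmost cells of a row are adjacent on the torus,
  // and so are the top and bottom cells of a column.
  console.assert(bitsDiffering(cellIndex(i, 0), cellIndex(i, 3)) === 1);
  console.assert(bitsDiffering(cellIndex(0, i), cellIndex(3, i)) === 1);
}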
Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:
A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation of the red minterm.
B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, C' should be included.
D changes, so it is excluded.
Thus the first minterm in the Boolean sum-of-products expression is AC'.
For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore AB'. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term BCD'.
The solutions of each grouping are combined: the normal form of the circuit is AC' + AB' + BCD'.
Thus the Karnaugh map has guided a simplification of the eight-minterm canonical sum into the three-term expression AC' + AB' + BCD'.
It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
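The claimed simplification can also be checked by brute force. A minimal sketch, assuming the minterm set {6, 8, 9, 10, 11, 12, 13, 14} given above:

// Rows of the truth table with output 1 (the minterms of the example).
var minterms = [6, 8, 9, 10, 11, 12, 13, 14];

// Simplified sum-of-products read off the map: F = AC' + AB' + BCD'
function fSimplified(a, b, c, d) {
  return (a && !c) || (a && !b) || (b && c && !d);
}

// Exhaustively compare against the truth table for all 16 input rows.
for (var i = 0; i < 16; i++) {
  var a = !!(i & 8), b = !!(i & 4), c = !!(i & 2), d = !!(i & 1);
  var expected = minterms.indexOf(i) !== -1;
  console.assert(fSimplified(a, b, c, d) === expected, "mismatch at row " + i);
}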
Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.
The three terms to cover the inverse are all shown with grey boxes with different colored borders:
A'B'
A'C'
BCD
This yields the inverse:
F' = A'B' + A'C' + BCD
Through the use of De Morgan's laws, the product of sums can be determined:
F = (A + B)(A + C)(B' + C' + D')
Don't cares
Karnaugh maps also allow easier minimizations of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:
F = A + BCD'
Note that the first term is just A, not AC'. In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
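The effect of the don't care can be checked the same way as before, skipping the unconstrained row 15:

// With f(1,1,1,1) made a "don't care", the simplified form is F = A + BCD'.
var ones = [6, 8, 9, 10, 11, 12, 13, 14];
var dontCares = [15];

function fDontCare(a, b, c, d) {
  return a || (b && c && !d);
}

for (var i = 0; i < 16; i++) {
  if (dontCares.indexOf(i) !== -1) continue; // output is unconstrained here
  var a = !!(i & 8), b = !!(i & 4), c = !!(i & 2), d = !!(i & 1);
  console.assert(fDontCare(a, b, c, d) === (ones.indexOf(i) !== -1),
    "mismatch at row " + i);
}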
The inverse case is simplified as follows:
Through the use of De Morgan's laws, the product of sums can be determined:
Race hazards
Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained above – we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.
In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1 to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.
There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch wraps around from the top of the map to the bottom.
Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of AD' would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus, terms are often needed to assure race-free dynamic performance.
Similarly, an additional term of A'D must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product of sums expression for F, but with a new factor of (A + D').
2-variable map examples
The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of ∑m() and the race hazard free (see previous section) minimum equation. A minterm is defined as an expression that gives the most minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must be of the size of the powers of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-Maps for four variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.
Related graphical methods
Related graphical minimization methods include:
Marquand diagram (1881) by Allan Marquand (1853–1924)
Veitch chart (1952) by Edward W. Veitch (1924–2013)
Svoboda chart (1956) by Antonín Svoboda (1907–1980)
Mahoney map (M-map, designation numbers, 1963) by Matthew V. Mahoney (a reflection-symmetrical extension of Karnaugh maps for larger numbers of inputs)
Reduced Karnaugh map (RKM) techniques (from 1969) like infrequent variables, map-entered variables (MEV), variable-entered map (VEM) or variable-entered Karnaugh map (VEKM) by G. W. Schultz, Thomas E. Osborne, Christopher R. Clare, J. Robert Burgoon, Larry L. Dornhoff, William I. Fletcher, Ali M. Rushdi and others (several successive Karnaugh map extensions based on variable inputs for larger numbers of inputs)
Minterm-ring map (MRM, 1990) by Thomas R. McCalla (a three-dimensional extension of Karnaugh maps for larger numbers of inputs)
See also
Algebraic normal form (ANF)
Binary decision diagram (BDD), a data structure that is a compressed representation of a Boolean function
Espresso heuristic logic minimizer
List of Boolean algebra topics
Logic optimization
Punnett square (1905), a similar diagram in biology
Quine–McCluskey algorithm
Reed–Muller expansion
Venn diagram (1880)
Zhegalkin polynomial
Notes
References
Further reading
External links
Detect Overlapping Rectangles, by Herbert Glarner.
Using Karnaugh maps in practical applications, Circuit design project to control traffic lights.
K-Map Tutorial for 2,3,4 and 5 variables
POCKET–PC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka — George E. Antoniou
K-Map troubleshoot
Boolean algebra
Diagrams
Electronics optimization
Logic in computer science | Karnaugh map | Mathematics | 2,844 |
1,507,852 | https://en.wikipedia.org/wiki/DOM%20event | DOM (Document Object Model) Events are a signal that something has occurred, or is occurring, and can be triggered by user interactions or by the browser. Client-side scripting languages like JavaScript, JScript, VBScript, and Java can register various event handlers or listeners on the element nodes inside a DOM tree, such as in HTML, XHTML, XUL, and SVG documents.
Examples of DOM Events:
When a user clicks the mouse
When a web page has loaded
When an image has been loaded
When the mouse moves over an element
When an input field is changed
When an HTML form is submitted
When a user presses a key
Historically, like DOM, the event models used by various web browsers had some significant differences which caused compatibility problems. To combat this, the event model was standardized by the World Wide Web Consortium (W3C) in DOM Level 2.
Events
HTML events
Common events
There is a huge collection of events that can be generated by most element nodes:
Mouse events.
Keyboard events.
HTML frame/object events.
HTML form events.
User interface events.
Mutation events (notification of any changes to the structure of a document).
Progress events (used by XMLHttpRequest and File API).
Note that the event classification above is not exactly the same as W3C's classification.
Note that the events whose names start with "DOM" are currently not well supported, and for this and other performance reasons are deprecated by the W3C in DOM Level 3. Mozilla and Opera support DOMAttrModified, DOMNodeInserted, DOMNodeRemoved and DOMCharacterDataModified. Chrome and Safari support these events, except for DOMAttrModified.
Touch events
Web browsers running on touch-enabled devices, such as Apple's iOS and Google's Android, generate additional events.
In the W3C draft recommendation, a TouchEvent delivers a TouchList of Touch locations, the modifier keys that were active, a TouchList of Touch locations within the targeted DOM element, and a TouchList of Touch locations that have changed since the previous TouchEvent.
Apple didn't join this working group, and delayed W3C recommendation of its Touch Events Specification by disclosing patents late in the recommendation process.
Pointer events
Web browsers on devices with various types of input devices including mouse, touch panel, and pen may generate integrated input events. Scripts can determine what type of input device was used, which button was pressed on that device, and, in the case of a stylus pen, how strongly the button was pressed. As of October 2013, this event is only supported by Internet Explorer 10 and 11.
Indie UI events
Although not yet widely implemented, the Indie UI working group wanted to help web application developers support standard user interaction events without having to handle the different platform-specific technical events that could correspond to them.
Scripting usable interfaces can be difficult, especially when one considers that user interface design patterns differ across software platforms, hardware, and locales, and that those interactions can be further customized based on personal preference. Individuals are accustomed to the way the interface works on their own system, and their preferred interface frequently differs from that of the web application author's preferred interface.
For example, web application authors, wishing to intercept a user's intent to undo the last action, need to "listen" for all the following events:
Control+Z on Windows and Linux.
Command+Z on Mac OS X.
Shake events on some mobile devices.
It would be simpler to listen for a single, normalized request to "undo" the previous action.
Internet Explorer-specific events
In addition to the common (W3C) events, two major types of events are added by Internet Explorer. Some of the events have been implemented as de facto standards by other browsers.
Clipboard events.
Data binding events.
Note that Mozilla, Safari and Opera also support the readystatechange event for the XMLHttpRequest object. Mozilla also supports the beforeunload event using the traditional event registration method (DOM Level 0). Mozilla and Safari also support contextmenu, but Internet Explorer for Mac does not.
Note that Firefox 6 and later support the beforeprint and afterprint events.
XUL events
In addition to the common (W3C) events, Mozilla defined a set of events that work only with XUL elements.
Other events
For Mozilla and Opera 9, there are also undocumented events known as DOMContentLoaded and DOMFrameContentLoaded which fire when the DOM content is loaded. These are different from "load" as they fire before the loading of related files (e.g., images). However, DOMContentLoaded has been added to the HTML 5 specification.
The DOMContentLoaded event was also implemented in the Webkit rendering engine build 500+. This correlates to all versions of Google Chrome and Safari 3.1+. DOMContentLoaded is also implemented in Internet Explorer 9.
Opera 9 also supports the Web Forms 2.0 events DOMControlValueChanged, invalid, forminput and formchange.
Event flow
Consider the situation when two event targets participate in a tree. Both have event listeners registered on the same event type, say "click". When the user clicks on the inner element, there are two possible ways to handle it:
Trigger the elements from outer to inner (event capturing). This model is implemented in Netscape Navigator.
Trigger the elements from inner to outer (event bubbling). This model is implemented in Internet Explorer and other browsers.
W3C takes a middle position in this struggle.
According to the W3C, events go through three phases when an event target participates in a tree:
The capture phase: the event travels down from the root event target to the target of an event
The target phase: the event travels through the event target
The bubble phase (optional): the event travels back up from the target of an event to the root event target. The bubble phase will only occur for events that bubble (where event.bubbles == true)
You can find a visualization of this event flow at https://domevents.dev
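The three phases can also be observed directly by registering listeners on nested elements. A minimal sketch (the element ids outer and inner are hypothetical):

// Assumes markup like <div id="outer"><div id="inner"></div></div>.
var outer = document.getElementById("outer");
var inner = document.getElementById("inner");

// Third argument true registers for the capture phase;
// false (the default) registers for the bubble phase.
outer.addEventListener("click", function () { console.log("outer (capture phase)"); }, true);
inner.addEventListener("click", function () { console.log("inner (target phase)"); });
outer.addEventListener("click", function () { console.log("outer (bubble phase)"); }, false);

// Clicking the inner element logs, in order:
// outer (capture phase), inner (target phase), outer (bubble phase)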
Stopping events
While an event is travelling through event listeners, the event can be stopped with event.stopPropagation() or event.stopImmediatePropagation():
event.stopPropagation(): the event is stopped after all event listeners attached to the current event target in the current event phase are finished
event.stopImmediatePropagation(): the event is stopped immediately and no further event listeners are executed
When an event is stopped it will no longer travel along the event path. Stopping an event does not cancel an event.
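The difference between the two methods shows when several listeners share one target. A minimal sketch (the button element is hypothetical):

// Two listeners registered in order on the same button.
var button = document.querySelector("button");

button.addEventListener("click", function (event) {
  event.stopImmediatePropagation(); // halt the event right here
  console.log("first listener runs");
});

button.addEventListener("click", function () {
  console.log("never logged"); // skipped: the event was stopped immediately
});

// With event.stopPropagation() instead, both listeners on the button would
// still run, but listeners on ancestor elements would not.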
Legacy mechanisms to stop an event
Set the event.cancelBubble property to true (Internet Explorer)
Set the event.returnValue property to false
Canceling events
A cancelable event can be canceled by calling event.preventDefault(). Canceling an event will opt out of the default browser behaviour for that event. When an event is canceled, the event.defaultPrevented property will be set to true. Canceling an event will not stop the event from traveling along the event path.
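For example, a link's default navigation can be cancelled while the event continues to propagate. A minimal sketch (the anchor element is hypothetical):

// Cancel the default action (following the hyperlink) without stopping
// the event's travel along the propagation path.
document.querySelector("a").addEventListener("click", function (event) {
  event.preventDefault();
  console.log(event.defaultPrevented); // true
});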
Event object
The Event object provides a lot of information about a particular event, including information about target element, key pressed, mouse button pressed, mouse position, etc. Unfortunately, there are very serious browser incompatibilities in this area. Hence only the W3C Event object is discussed in this article.
Event handling models
DOM Level 0
This event handling model was introduced by Netscape Navigator, and remains the most widely supported cross-browser model. There are two model types: the inline model and the traditional model.
Inline model
In the inline model, event handlers are added as attributes of elements. In the example below, an alert dialog box with the message "Hey Joe" appears after the hyperlink is clicked. The default click action is cancelled by returning false in the event handler.
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Inline Event Handling</title>
</head>
<body>
<h1>Inline Event Handling</h1>
<p>Hey <a href="http://www.example.com" onclick="triggerAlert('Joe'); return false;">Joe</a>!</p>
<script>
function triggerAlert(name) {
window.alert("Hey " + name);
}
</script>
</body>
</html>
One common misconception with the inline model is the belief that it allows the registration of event handlers with custom arguments, e.g. name in the triggerAlert function. While it may seem like that is the case in the example above, what is really happening is that the JavaScript engine of the browser creates an anonymous function containing the statements in the onclick attribute. The onclick handler of the element would be bound to the following anonymous function:
function () {
triggerAlert('Joe');
return false;
}
This limitation of the JavaScript event model is usually overcome by assigning attributes to the function object of the event handler or by using closures.
Traditional model
In the traditional model, event handlers can be added or removed by scripts. Like the inline model, each event can only have one event handler registered. The event is added by assigning the handler name to the event property of the element object. To remove an event handler, simply set the property to null:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Traditional Event Handling</title>
</head>
<body>
<h1>Traditional Event Handling</h1>
<p>Hey Joe!</p>
<script>
var triggerAlert = function () {
window.alert("Hey Joe");
};
// Assign an event handler
document.onclick = triggerAlert;
// Assign another event handler
window.onload = triggerAlert;
// Remove the event handler that was just assigned
window.onload = null;
</script>
</body>
</html>
To add parameters:
var name = 'Joe';
document.onclick = (function (name) {
return function () {
alert('Hey ' + name + '!');
};
}(name));
Inner functions preserve the scope in which they were created, so the name parameter remains available when the handler later runs.
DOM Level 2
The W3C designed a more flexible event handling model in DOM Level 2.
Some useful things to know:
To prevent an event from bubbling, developers must call the stopPropagation() method of the event object.
To prevent the default action of the event from being performed, developers must call the preventDefault() method of the event object.
The main difference from the traditional model is that multiple event handlers can be registered for the same event. The useCapture option can also be used to specify that the handler should be called in the capture phase instead of the bubbling phase. This model is supported by Mozilla, Opera, Safari, Chrome and Konqueror.
A rewrite of the example used in the traditional model
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>DOM Level 2</title>
</head>
<body>
<h1>DOM Level 2</h1>
<p>Hey Joe!</p>
<script>
var heyJoe = function () {
window.alert("Hey Joe!");
}
// Add an event handler
document.addEventListener( "click", heyJoe, true ); // capture phase
// Add another event handler
window.addEventListener( "load", heyJoe, false ); // bubbling phase
// Remove the event handler just added
window.removeEventListener( "load", heyJoe, false );
</script>
</body>
</html>
Internet Explorer-specific model
Microsoft Internet Explorer prior to version 8 does not follow the W3C model, as its own model was created prior to the ratification of the W3C standard. Internet Explorer 9 follows DOM Level 3 events, and Internet Explorer 11 removed its support for the Microsoft-specific model.
Some useful things to know:
To prevent an event from bubbling, developers must set the event's cancelBubble property.
To prevent the default action of the event from being performed, developers must set the event's returnValue property.
The this keyword refers to the global window object.
Again, this model differs from the traditional model in that multiple event handlers can be registered for the same event. However, the useCapture option cannot be used to specify that the handler should be called in the capture phase. This model is supported by Microsoft Internet Explorer and Trident based browsers (e.g. Maxthon, Avant Browser).
A rewrite of the example used in the old Internet Explorer-specific model
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Internet Explorer-specific model</title>
</head>
<body>
<h1>Internet Explorer-specific model</h1>
<p>Hey Joe!</p>
<script>
var heyJoe = function () {
window.alert("Hey Joe!");
}
// Add an event handler
document.attachEvent("onclick", heyJoe);
// Add another event handler
window.attachEvent("onload", heyJoe);
// Remove the event handler just added
window.detachEvent("onload", heyJoe);
</script>
</body>
</html>
References
Further reading
Deitel, Harvey. (2002). Internet and World Wide Web: how to program (Second Edition).
The Mozilla Organization. (2009). DOM Event Reference. Retrieved August 25, 2009.
Quirksmode (2008). Event compatibility tables. Retrieved November 27, 2008.
http://www.sitepen.com/blog/2008/07/10/touching-and-gesturing-on-the-iphone/
External links
Document Object Model (DOM) Level 2 Events Specification
Document Object Model (DOM) Level 3 Events Working Draft
DOM4: Events (Editor's Draft)
UI Events Working Draft
Pointer Events W3C Candidate Recommendation
MSDN PointerEvent
domevents.dev - A visualizer to learn about DOM Events through exploration
JS fiddle for Event Bubbling and Capturing
World Wide Web Consortium standards
Application programming interfaces
Events (computing) | DOM event | Technology | 3,135 |
421,129 | https://en.wikipedia.org/wiki/Cob%20%28material%29 | Cob, cobb, or clom (in Wales) is a natural building material made from subsoil, water, fibrous organic material (typically straw), and sometimes lime. The contents of subsoil vary, and if it does not contain the right mixture, it can be modified with sand or clay. Cob is fireproof, termite proof, resistant to seismic activity, and uses low-cost materials, although it is very labour intensive. It can be used to create artistic and sculptural forms, and its use has been revived in recent years by the natural building and sustainability movements.
In technical building and engineering documents, such as the Uniform Building Code of the western USA, cob may be referred to as "unburned clay masonry," when used in a structural context. It may also be referred to as "aggregate" in non-structural contexts, such as "clay and sand aggregate," or more simply "organic aggregate," such as where cob is a filler between post and beam construction.
History and usage
Cob is an English term attested to around the year 1600 for an ancient building material that has been used for building since prehistoric times. The use of this material in Iran is more than 4000 years old. The etymology of cob and cobbing is unclear, but in several senses means to beat or strike, which is how cob material is applied to a wall.
Many similar materials and methods of earthen building are used around the world, such as adobe, lump clay, puddled clay, chalk mud, wychert, clay daubins, swish (Asante Twi), torchis (French), bauge (French), bousille (French mud with moss), beaten clay-pahsa (Central Asia), and cat and clay.
Cob structures can be found in a variety of climates across the globe. European examples include:
in England, notably in the counties of Devon and Cornwall in the West Country, and in East Anglia (where it is referred to as clay lump)
in Wales, notably in rural Anglesey
in Donegal Bay in Ulster and in Munster, South-West Ireland
in Finisterre and Ille-et-Vilaine in Brittany, where many homes have survived over 500 years and are still inhabited
Some of the oldest human-made structures in Afghanistan are composed of rammed earth and cob. Cobwork (tabya) was used in the Maghreb and al-Andalus in the 11th and 12th centuries, and was described in detail by Ibn Khaldun in the 14th century.
Many old cob buildings can be found in Africa, the Middle East, and the southwestern United States like the Taos Pueblo. A number of cob cottages survive from mid-19th-century New Zealand.
Traditionally, English cob was made by mixing the clay-based subsoil with sand, straw and water using oxen to trample it. English soils contain varying amounts of chalk, and cob made with significant amounts of chalk are called chalk cob or wychert. The earthen mixture was then ladled onto a stone foundation in courses and trodden onto the wall by workers in a process known as cobbing. The construction would progress according to the time required for the prior course to dry. After drying, the walls would be trimmed and the next course built, with lintels for later openings such as doors and windows being placed as the wall takes shape.
The walls of a cob house are generally about thick, and windows were correspondingly deep-set, giving the homes a characteristic internal appearance. The thick walls provided excellent thermal mass which was easy to keep warm in winter and cool in summer. Walls with a high thermal mass value act as a thermal buffer inside the home. The material has a long life-span even in rainy or humid climates, provided a tall foundation and large roof overhang are present.
Cob is fireproof, while "fire cob" (cob without straw or fiber) is a refractory material (the same material, essentially, as unfired common red brick), and historically, has been used to make chimneys, fireplaces, forges and crucibles. Without fiber, however, cob loses most of its tensile strength.
Modern cob buildings
When Kevin McCabe constructed a two-story, four bedroom cob house in England, UK in 1994, it was reputedly the first cob residence built in the country in 70 years. His techniques remained very traditional; the only innovations he made were using a tractor to mix the cob and adding sand or shillet, a gravel of crushed shale, to reduce shrinkage.
From 2002 to 2004, sustainability enthusiast Rob Hopkins initiated the construction of a cob house for his family, the first new one in Ireland in about one hundred years. It was a community project, but an unidentified arsonist destroyed it shortly before completion. The house, located at The Hollies Centre for Practical Sustainability in County Cork, was being rebuilt as of 2010. There are a number of other completed modern cob houses and more are planned, including a public education centre.
In 2000-01, a modern, four bedroom cob house in Worcestershire, England, UK, designed by Associated Architects, was sold for £999,000. Cobtun House was erected in 2001 and won the Royal Institute of British Architects' Sustainable Building of the Year award in 2005. The total construction cost was £300,000, but the metre (yard) thick outer cob wall cost only £20,000.
In the Pacific Northwest of the United States there has been a resurgence of cob construction, both as an alternative building practice and one desired for its form, function, and cost effectiveness. Pat Hennebery, Tracy Calvert, Elke Cole, and the Cobworks workshops erected more than ten cob houses in the Southern Gulf Islands of British Columbia, Canada.
In 2010, Sota Construction Services in Pittsburgh, Pennsylvania, United States, completed construction on its new 7,500 square foot corporate headquarters, which featured exterior cob walls along with other energy saving features like radiant heat flooring, a rooftop solar panel array, and daylighting. The cob walls, in conjunction with the other sustainable features, enabled the edifice to earn a LEED Platinum rating in 2012, and it also received one of the highest scores by percentage of total points earned in any LEED category.
In 2007, Ann and Gord Baird began constructing a two-storey cob house in Victoria, British Columbia, Canada, for an estimated $210,000 CDN. The home of 2,150 square feet includes heated floors, solar panels, and a southern exposure to enable passive solar heating.
Welsh architect Ianto Evans and researcher Linda Smiley refined the construction technique known as "Oregon Cob" in the 1980s and 1990s. Oregon Cob integrates the variation of wall layup technique which uses loaves of mud mixed with sand and straw with a rounded architectural stylism. They are experimenting with a mixture of cob and straw bale denominated "balecob".
Cob building code
In 2019 an appendix for the International Residential Code (IRC) was approved by a vote in the public comment hearings. Appendix U of the IRC governs use of cob in load-bearing walls of single story residential structures. Based on currently available test data, the appendix limits the conditions under which cob may be used without engineering approval, such as seismic activity.
See also
, a German Research-Institute for Cob-buildings
(a variant of cob used in southern Romania)
, the earliest human-made composite materials were straw, combined with mud, to make bricks and walls.
, a typical Devon cob building
References
Further reading
Building With Cob, A Step by Step Guide by Adam Weismann and Katy Bryce. Published by Green Books ; 2006, .
The Hand-Sculpted House: A Philosophical and Practical Guide to Building a Cob Cottage (The Real Goods Solar Living Book) by Ianto Evans, Michael G. Smith, Linda Smiley, Deanne Bednar (Illustrator), Chelsea Green Publishing Company; (June 2002), .
The Cob Builders Handbook: You Can Hand-Sculpt Your Own Home by Becky Bee, Groundworks, 1997
Essential Cob Construction: A Guide to Design, Engineering, and Building by Anthony Dente PE, Michael Smith, and Massey Burke, New Publishers Society; 2024, ISBN 978-0865719682.
External links
The Cob Builders Handbook
How to Build a Traditional Cob Oven
Sustainable building
Appropriate technology
Natural materials
Rammed earth
Soil-based building materials
Sustainable products
Clay | Cob (material) | Physics,Engineering | 1,784 |
642,006 | https://en.wikipedia.org/wiki/Laplace%27s%20method | In mathematics, Laplace's method, named after Pierre-Simon Laplace, is a technique used to approximate integrals of the form
∫_a^b e^{M f(x)} dx,
where f(x) is a twice-differentiable function, M is a large number, and the endpoints a and b could be infinite. This technique was originally presented in the book by Laplace (1774).
In Bayesian statistics, Laplace's approximation can refer to either approximating the posterior normalizing constant with Laplace's method or approximating the posterior distribution with a Gaussian centered at the maximum a posteriori estimate. Laplace approximations are used in the integrated nested Laplace approximations method for fast approximations of Bayesian inference.
Concept
Let the function f(x) have a unique global maximum at x0. M is a constant here. The following two functions are considered:
g(x) = M f(x) and h(x) = e^{M f(x)}.
Then, x0 is the global maximum of g and h as well. Hence:
g(x0)/g(x) = (M f(x0))/(M f(x)) = f(x0)/f(x), which does not depend on M, while h(x0)/h(x) = e^{M (f(x0) − f(x))}.
As M increases, the ratio for h will grow exponentially, while the ratio for g does not change. Thus, significant contributions to the integral of this function will come only from points x in a neighborhood of x0, which can then be estimated.
General theory
To state and motivate the method, one must make several assumptions. It is assumed that x0 is not an endpoint of the interval of integration and that the values f(x) cannot be very close to f(x0) unless x is close to x0.
f(x) can be expanded around x0 by Taylor's theorem,
f(x) = f(x0) + f'(x0)(x − x0) + (1/2) f''(x0)(x − x0)^2 + R,
where R = O((x − x0)^3) (see: big O notation).
Since f has a global maximum at x0, and x0 is not an endpoint, it is a stationary point, i.e. f'(x0) = 0. Therefore, the second-order Taylor polynomial approximating f(x) is
f(x) ≈ f(x0) + (1/2) f''(x0)(x − x0)^2.
Then, just one more step is needed to get a Gaussian distribution. Since x0 is a global maximum of the function f it can be stated, by definition of the second derivative, that f''(x0) ≤ 0, thus giving the relation
f(x) ≈ f(x0) − (1/2)|f''(x0)|(x − x0)^2
for x close to x0. The integral can then be approximated with:
∫_a^b e^{M f(x)} dx ≈ e^{M f(x0)} ∫_a^b e^{−(M/2)|f''(x0)|(x − x0)^2} dx.
If f''(x0) < 0, this latter integral becomes a Gaussian integral if we replace the limits of integration by −∞ and +∞; when M is large this creates only a small error because the exponential decays very fast away from x0. Computing this Gaussian integral we obtain:
∫_a^b e^{M f(x)} dx ≈ √(2π/(M|f''(x0)|)) e^{M f(x0)}.
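As a quick sanity check of this formula (an illustration, not part of the original derivation), take f(x) = −x^2, for which the approximation is exact for every M > 0 because f is exactly quadratic:

\int_{-\infty}^{\infty} e^{-M x^{2}}\, dx
  \approx \sqrt{\frac{2\pi}{M\,\lvert f''(0)\rvert}}\, e^{M f(0)}
  = \sqrt{\frac{2\pi}{2M}}
  = \sqrt{\frac{\pi}{M}},

using f(0) = 0 and f''(0) = −2; this agrees with the exactly known value of the Gaussian integral.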
A generalization of this method and extension to arbitrary precision is provided by the book .
Formal statement and proof
Suppose f(x) is a twice continuously differentiable function on [a, b], and there exists a unique point x0 ∈ (a, b) such that:
f(x0) = max over [a, b] of f(x), with f''(x0) < 0.
Then:
lim as n → ∞ of (∫_a^b e^{n f(x)} dx) / (e^{n f(x0)} √(2π/(n|f''(x0)|))) = 1.
Lower bound: Let . Since is continuous there exists such that if then By Taylor's Theorem, for any
Then we have the following lower bound:
where the last equality was obtained by a change of variables
Remember so we can take the square root of its negation.
If we divide both sides of the above inequality by
and take the limit we get:
since this is true for arbitrary we get the lower bound:
Note that this proof works also when or (or both).
Upper bound: The proof is similar to that of the lower bound but there are a few inconveniences. Again we start by picking an but in order for the proof to work we need small enough so that Then, as above, by continuity of and Taylor's Theorem we can find so that if , then
Lastly, by our assumptions (assuming are finite) there exists an such that if , then .
Then we can calculate the following upper bound:
If we divide both sides of the above inequality by
and take the limit we get:
Since is arbitrary we get the upper bound:
And combining this with the lower bound gives the result.
Note that the above proof obviously fails when or (or both). To deal with these cases, we need some extra assumptions. A sufficient (not necessary) assumption is that for
and that the number as above exists (note that this must be an assumption in the case when the interval is infinite). The proof proceeds otherwise as above, but with a slightly different approximation of integrals:
When we divide by
we get for this term
whose limit as is . The rest of the proof (the analysis of the interesting term) proceeds as above.
The given condition in the infinite interval case is, as said above, sufficient but not necessary. However, the condition is fulfilled in many, if not in most, applications: the condition simply says that the integral we are studying must be well-defined (not infinite) and that the maximum of the function at must be a "true" maximum (the number must exist). There is no need to demand that the integral is finite for but it is enough to demand that the integral is finite for some
This method relies on 4 basic concepts:
1. Relative error
The “approximation” in this method is related to the relative error and not the absolute error. Therefore, if we set
the integral can be written as
where is a small number when is a large number obviously and the relative error will be
Now, let us separate this integral into two parts: region and the rest.
2. around the stationary point when is large enough
Let’s look at the Taylor expansion of around x0 and translate x to y because we do the comparison in y-space, we will get
Note that because is a stationary point. From this equation you will find that the terms higher than second derivative in this Taylor expansion is suppressed as the order of so that will get closer to the Gaussian function as shown in figure. Besides,
3. The larger is, the smaller range of is related
Because we do the comparison in y-space, is fixed in which will cause ; however, is inversely proportional to , the chosen region of will be smaller when is increased.
4. If the integral in Laplace's method converges, the contribution of the region which is not around the stationary point of the integration of its relative error will tend to zero as grows.
Relying on the 3rd concept, even if we choose a very large Dy, sDy will finally be a very small number when is increased to a huge number. Then, how can we guarantee the integral of the rest will tend to 0 when is large enough?
The basic idea is to find a function such that and the integral of will tend to zero when grows. Because the exponential function of will be always larger than zero as long as is a real number, and this exponential function is proportional to the integral of will tend to zero. For simplicity, choose as a tangent through the point as shown in the figure:
If the interval of the integration of this method is finite, we will find that no matter is continue in the rest region, it will be always smaller than shown above when is large enough. By the way, it will be proved later that the integral of will tend to zero when is large enough.
If the interval of the integration of this method is infinite, and might always cross to each other. If so, we cannot guarantee that the integral of will tend to zero finally. For example, in the case of will always diverge. Therefore, we need to require that can converge for the infinite interval case. If so, this integral will tend to zero when is large enough and we can choose this as the cross of and
You might ask why not choose as a convergent integral? Let me use an example to show you the reason. Suppose the rest part of is then and its integral will diverge; however, when the integral of converges. So, the integral of some functions will diverge when is not a large number, but they will converge when is large enough.
Based on these four concepts, we can derive the relative error of this method.
Other formulations
Laplace's approximation is sometimes written as
∫_a^b h(x) e^{M g(x)} dx ≈ √(2π/(M|g''(x0)|)) h(x0) e^{M g(x0)} as M → ∞,
where h(x) is positive.
Importantly, the accuracy of the approximation depends on the variable of integration, that is, on what stays in g(x) and what goes into h(x).
First, use to denote the global maximum, which will simplify this derivation. We are interested in the relative error, written as ,
where
So, if we let
and , we can get
since .
For the upper bound, note that thus we can separate this integration into 5 parts with 3 different types (a), (b) and (c), respectively. Therefore,
where and are similar, let us just calculate and and are similar, too, I’ll just calculate .
For , after the translation of , we can get
This means that as long as is large enough, it will tend to zero.
For , we can get
where
and should have the same sign of during this region. Let us choose as the tangent across the point at , i.e. which is shown in the figure
From this figure you can find that when or gets smaller, the region satisfies the above inequality will get larger. Therefore, if we want to find a suitable to cover the whole during the interval of , will have an upper limit. Besides, because the integration of is simple, let me use it to estimate the relative error contributed by this .
Based on Taylor expansion, we can get
and
and then substitute them back into the calculation of ; however, you can find that the remainders of these two expansions are both inversely proportional to the square root of , let me drop them out to beautify the calculation. Keeping them is better, but it will make the formula uglier.
Therefore, it will tend to zero when gets larger, but don't forget that the upper bound of should be considered during this calculation.
About the integration near , we can also use Taylor's Theorem to calculate it. When
and you can find that it is inversely proportional to the square root of . In fact, will have the same behave when is a constant.
Conclusively, the integral near the stationary point will get smaller as gets larger, and the rest parts will tend to zero as long as is large enough; however, we need to remember that has an upper limit which is decided by whether the function is always larger than in the rest region. However, as long as we can find one satisfying this condition, the upper bound of can be chosen as directly proportional to since is a tangent across the point of at . So, the bigger is, the bigger can be.
In the multivariate case, where x is a d-dimensional vector and f(x) is a scalar function of x, Laplace's approximation is usually written as:
∫ e^{M f(x)} dx ≈ (2π/M)^{d/2} |−H(f)(x0)|^{−1/2} e^{M f(x0)} as M → ∞,
where H(f)(x0) is the Hessian matrix of f evaluated at x0 and where |·| denotes matrix determinant. Analogously to the univariate case, the Hessian is required to be negative-definite.
By the way, although x denotes a d-dimensional vector, the term dx denotes an infinitesimal volume here, i.e. dx = dx1 dx2 ⋯ dxd.
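As a worked check under the notation above (an illustration, not from the original text), take f(x) = −‖x‖^2/2, whose maximum is at x0 = 0 with Hessian H = −I:

\int_{\mathbb{R}^{d}} e^{-\frac{M}{2}\lVert \mathbf{x} \rVert^{2}}\, d\mathbf{x}
  \approx \left(\frac{2\pi}{M}\right)^{d/2} \lvert -H \rvert^{-1/2} e^{M f(\mathbf{0})}
  = \left(\frac{2\pi}{M}\right)^{d/2},

since det(−H) = det(I) = 1 and f(0) = 0; this matches the exact product of d one-dimensional Gaussian integrals.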
Steepest descent extension
In extensions of Laplace's method, complex analysis, and in particular Cauchy's integral formula, is used to find a contour of steepest descent for an (asymptotically with large M) equivalent integral, expressed as a line integral. In particular, if no point x0 where the derivative of vanishes exists on the real line, it may be necessary to deform the integration contour to an optimal one, where the above analysis will be possible. Again, the main idea is to reduce, at least asymptotically, the calculation of the given integral to that of a simpler integral that can be explicitly evaluated. See the book of Erdelyi (1956) for a simple discussion (where the method is termed steepest descents).
The appropriate formulation for the complex z-plane is
∫ e^{M f(z)} dz ≈ √(2π/(−M f''(z0))) e^{M f(z0)}
for a path passing through the saddle point at z0. Note the explicit appearance of a minus sign to indicate the direction of the second derivative: one must not take the modulus. Also note that if the integrand is meromorphic, one may have to add residues corresponding to poles traversed while deforming the contour (see for example section 3 of Okounkov's paper Symmetric functions and random partitions).
Further generalizations
An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems.
Given a contour C in the complex sphere, a function defined on that contour and a special point, such as infinity, a holomorphic function M is sought away from C, with prescribed jump across C, and with a given normalization at infinity. If and hence M are matrices rather than scalars this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour.
The nonlinear stationary phase was introduced by Deift and Zhou in 1993, based on earlier work of Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, "steepest descent contours" solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context back in the 80s by Stahl, Gonchar and Rakhmanov).
The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics.
Median-point approximation generalization
In the generalization, evaluation of the integral is considered equivalent to finding the norm of the distribution with density
Denoting the cumulative distribution , if there is a diffeomorphic Gaussian distribution with density
the norm is given by
and the corresponding diffeomorphism is
where denotes cumulative standard normal distribution function.
In general, any distribution diffeomorphic to the Gaussian distribution has density
and the median-point is mapped to the median of the Gaussian distribution. Matching the logarithm of the density functions and their derivatives at the median point up to a given order yields a system of equations that determine the approximate values of and .
The approximation was introduced in 2019 by D. Makogon and C. Morais Smith primarily in the context of partition function evaluation for a system of interacting fermions.
Complex integrals
For complex integrals in the form:
(1/(2πi)) ∫_{c−i∞}^{c+i∞} g(s) e^{st} ds,
we make the substitution t = iu and the change of variable s = c + ix to get the bilateral Laplace transform:
We then split g(c + ix) in its real and complex part, after which we recover u = t/i. This is useful for inverse Laplace transforms, the Perron formula and complex integration.
Example: Stirling's approximation
Laplace's method can be used to derive Stirling's approximation
N! ≈ √(2πN) N^N e^{−N}
for a large integer N. From the definition of the Gamma function, we have
N! = Γ(N + 1) = ∫_0^∞ e^{−x} x^N dx.
Now we change variables, letting x = Nz so that dx = N dz. Plug these values back in to obtain
N! = ∫_0^∞ e^{−Nz} (Nz)^N N dz = N^{N+1} ∫_0^∞ e^{−Nz} z^N dz = N^{N+1} ∫_0^∞ e^{N(ln z − z)} dz.
This integral has the form necessary for Laplace's method with
f(z) = ln z − z,
which is twice-differentiable:
f'(z) = 1/z − 1, f''(z) = −1/z^2.
The maximum of f lies at z0 = 1, and the second derivative of f has the value −1 at this point. Therefore, we obtain
N! ≈ N^{N+1} √(2π/N) e^{−N} = √(2πN) N^N e^{−N}.
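The rate at which the approximation improves is easy to probe numerically. A minimal sketch (function names are illustrative):

// Compare N! with Laplace's (Stirling's) approximation sqrt(2*pi*N) * (N/e)^N.
function factorial(n) {
  var result = 1;
  for (var k = 2; k <= n; k++) result *= k;
  return result;
}

var ns = [5, 10, 20];
for (var j = 0; j < ns.length; j++) {
  var n = ns[j];
  var stirling = Math.sqrt(2 * Math.PI * n) * Math.pow(n / Math.E, n);
  console.log(n, (stirling / factorial(n)).toFixed(4));
}
// Prints ratios 0.9835, 0.9917, 0.9958: the relative error shrinks roughly
// like 1/(12N), so the approximation tightens as N grows.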
See also
Method of stationary phase
Method of steepest descent
Large deviations theory
Laplace principle (large deviations theory)
Laplace's approximation
Notes
References
.
.
.
.
Asymptotic analysis
Perturbation theory
Integral calculus | Laplace's method | Physics,Mathematics | 3,145 |
38,544,280 | https://en.wikipedia.org/wiki/Ballast | Ballast is dense material used as a weight to provide stability to a vehicle or structure. Ballast, other than cargo, may be placed in a vehicle, often a ship or the gondola of a balloon or airship, to provide stability. A compartment within a boat, ship, submarine, or other floating structure that holds water is called a ballast tank. Water should be moved in and out from the ballast tank to balance the ship. In a vessel that travels on the water, the ballast will be kept below the water level, to counteract the effects of weight above the water level. The ballast may be redistributed in the vessel or disposed of altogether to change its effects on the movement of the vessel.
History
The basic concept behind the ballast tank can be seen in many forms of aquatic life, such as the blowfish or members of the argonaut group of octopus. The concept has been invented and reinvented many times by humans to serve a variety of purposes.
In the fifteenth and sixteenth century, the ballast "did not consist entirely of leakage, but of urine, vomit, and various foul food leavings that lazy sailors discharged into the ballast contrary to orders, in the belief that the pumps would take care of it." In the nineteenth century, cargo boats returning from Europe to North America would carry quarried stone as ballast, contributing to the architectural heritage of some east coast cities (for example Montreal), where this stone was used in building.
During World War II, ships returning from Great Britain to the United States used rubble as ballast. The ballast would be dumped in New York and used for construction projects such as FDR Drive and an outcrop colloquially named Bristol Basin, since it was made from rubble from bombed-out Bristol.
Uses
Ballast takes many forms, for example:
Sailing ballast, or ship's ballast, used to lower the centre of gravity of a ship to increase stability
Ballast tank, a device used on ships and submarines and other submersibles to control buoyancy and stability
Ballast (car racing), metallic plates used to bring auto racing vehicles up to the minimum mandated weight
In underwater diving, a diver weighting system comprising blocks of heavy material, usually lead, used to compensate for excess buoyancy of the diver and their equipment
In gliding, weights added to maximise the average speed in cross-country competition, especially when thermal convection is strong
In a balloon, as part of a buoyancy compensator
Sailing ballast is used in sailboats to provide righting moment to resist the overturning moment generated by lateral forces on the sail. Insufficiently ballasted boats will tend to tip, or heel, excessively in high winds. Too much heel may result in the boat capsizing. If a sailing vessel should need to voyage without cargo then ballast of little or no value would be loaded to keep the vessel upright. Some or all of this ballast would then be discarded when cargo was loaded.
Ballast weight is also added to a race car to alter its performance. In most racing series, cars have a minimum allowable weight. Often, the actual weight of the car is lower, so ballast is used to bring it up to the minimum. The advantage is that the ballast can be positioned to affect the car's handling by changing its load distribution; this practice is near-universal in Formula 1. In many racing series, ballast may only be located in certain positions on the car. In some racing series, for example the British Touring Car Championship, ballast is used as a handicap, the leading drivers at the end of one race being given more ballast for the next race.
Ballast may also be carried aboard an aircraft. For example, in gliding it may be used to increase speed and/or adjust the aircraft's center of gravity, or in a balloon as a buoyancy compensator.
References
Sources
Mechanisms (engineering)
Weights | Ballast | Physics,Engineering | 796 |
60,423,671 | https://en.wikipedia.org/wiki/Normally%20flat%20ring | In algebraic geometry, a normally flat ring along a proper ideal I is a local ring A such that I^n/I^(n+1) is flat over A/I for each integer n ≥ 0.
The notion was introduced by Hironaka in his proof of the resolution of singularities as a refinement of equimultiplicity and was later generalized by Alexander Grothendieck and others.
References
Herrmann, M., S. Ikeda, and U. Orbanz: Equimultiplicity and Blowing Up. An Algebraic Study with an Appendix by B. Moonen. Springer Verlag, Berlin Heidelberg New-York, 1988.
Algebraic geometry | Normally flat ring | Mathematics | 128 |
12,592,816 | https://en.wikipedia.org/wiki/Gold%28III%29%20oxide | Gold(III) oxide (Au2O3) is an inorganic compound of gold and oxygen with the formula Au2O3. It is a red-brown solid that decomposes at 298 °C.
According to X-ray crystallography, Au2O3 features square planar gold centers with both 2- and 3-coordinated oxides. The four Au–O bond distances range from 193 to 207 picometers. The crystals can be prepared by heating amorphous hydrated gold(III) oxide with perchloric acid and an alkali metal perchlorate in a sealed quartz tube at a temperature of around 250 °C and a pressure of around 30 MPa.
References
External links
Gold(III) compounds
Sesquioxides
Transition metal oxides
Crystals in space group 43 | Gold(III) oxide | Chemistry | 161 |
70,491,447 | https://en.wikipedia.org/wiki/HD1 | HD1 is a proposed high-redshift galaxy, which is considered (as of April 2022) to be one of the earliest and most distant known galaxies yet identified in the observable universe. The galaxy, with an estimated redshift of approximately z = 13.27, is seen as it was about 324 million years after the Big Bang, which occurred about 13.787 billion years ago. It has a light-travel distance (lookback time) of 13.463 billion light-years from Earth, and, due to the expansion of the universe, a present proper distance of 33.288 billion light-years.
According to a more recent spectroscopic study (https://arxiv.org/abs/2406.18352, 2024), the revised redshift of HD1 is z = 4.0.
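The quoted times and distances are not observed directly; they follow from a redshift together with an assumed cosmological model. A minimal sketch using the astropy library (the figures correspond to the original estimate z ≈ 13.27 and the Planck 2018 parameters):
```python
from astropy.cosmology import Planck18

z = 13.27  # original photometric redshift estimate for HD1
print(Planck18.lookback_time(z))      # light-travel time, ~13.5 Gyr
print(Planck18.comoving_distance(z))  # present proper distance, ~10.2 Gpc (~33 Gly)
print(Planck18.age(z))                # age of the universe at emission, ~0.3 Gyr
```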
Discovery
The discovery of the proposed high-redshift galaxy HD1 (RA:10:01:51.31 DEC:+02:32:50.0) in the Sextans constellation, along with another high-redshift galaxy, HD2 (RA:02:18:52.44 DEC:-05:08:36.1) in the Cetus constellation, was reported by astronomers at the University of Tokyo on 7 April 2022. These two galaxies were found in two patches of sky surveyed by the Cosmic Evolution Survey and by the Subaru Telescope in the Subaru/XMM-Newton Deep Survey Field respectively. They were found by looking for objects that are much brighter in the so-called K band of infrared than in the H band (around 1.6 microns), which could indicate a Lyman-break galaxy with a redshift of around 13. For this reason they were named "HD 1" and "HD 2" (for "H-band dropout"; not to be confused with the stars HD 1 and HD 2 in the Henry Draper Catalogue).
Physical properties
HD1 is one of the earliest and most distant known galaxies yet identified in the observable universe, with an estimated redshift of z ≈ 13.27, meaning that the light from the galaxy travelled for about 13.5 billion years on its way to Earth, which, due to the expansion of the universe, corresponds to a present proper distance of approximately 33.3 billion light-years. HD1 is thus observed as it was about 330 million years after the Big Bang. Another similar high-redshift galaxy, HD2, was determined to be nearly as far away as HD1.
HD1's unusually high brightness has been an open question for its discoverers; it has a significantly more luminous ultraviolet emission than similar galaxies at its redshift range. Possible explanations have been proposed, one being that it is an active Lyman-break galaxy, or a rather extreme starburst galaxy producing stars at a rate far higher than any previously observed. It is also considered that it may have a significant population of Population III stars that are far more massive and luminous than present-day stars. Another scenario is that it may be a quasar hosting a supermassive black hole; such a scenario would put constraints on models of black hole growth in such an early stage of the universe. A resolution to the true nature of the galaxy would likely await confirmations from the James Webb Space Telescope.
The previous farthest known galaxy, GN-z11, discovered in 2015, had a redshift of 11, suggesting that the observed position of the galaxy is about 420 million years after the Big Bang.
Future considerations
According to the discoverers of HD1 and HD2, "If spectroscopically confirmed, these two sources [i.e., HD1 and HD2] will represent a remarkable laboratory to study the Universe at previously inaccessible redshifts." The researchers expect even further clarification of the astronomical objects, including better identifying them as galaxies or possibly as quasars or black holes, when carefully examined by the James Webb Space Telescope, Nancy Grace Roman Space Telescope, and GREX-PLUS space missions. HD1, on close examination, may also reveal the first visible Population III stars, due to its very early age. In addition, the researchers claim that the use of the new upcoming space telescopes could help discover over 10,000 galaxies at this early epoch of the Universe.
See also
CEERS-93316
Earliest galaxies
GLASS-z12
List of galaxies
List of the most distant astronomical objects
References
External links
Astronomical objects discovered in 2022
Galaxies
Cetus
Sextans
Galaxies discovered in 2022 | HD1 | Astronomy | 934 |
21,281,702 | https://en.wikipedia.org/wiki/Jin%20Au%20Kong | Jin Au Kong (Traditional Chinese: 孔金甌; Simplified Chinese: 孔金瓯; 27 December 1942 – 13 March 2008) was a Taiwanese-American electrical engineer. He was an expert in applied electromagnetics. He was a 74th-generation lineal descendant of the famous Chinese philosopher Confucius (551 BC – 479 BC).
Biography
Kong was born in Gaochun, Jiangsu Province. He received his Bachelor of Science from the National Taiwan University in 1962, his M.S. from the National Chiao Tung University in 1965, and his Ph.D. from Syracuse University in 1968. His PhD thesis supervisor was David K. Cheng. Kong did his postdoctoral research at Syracuse University as well from 1968 to 1969. From 1969 to 1971, he was the Vinton Hayes Postdoctoral Fellow of Engineering.
Kong then moved to MIT, where he remained for the rest of his academic career, as assistant professor from 1969 to 1973, associate professor from 1973 to 1980, and full professor from 1980. From 1977 until his death in 2008, Kong served as a United Nations high-level consultant to the undersecretary-general, as well as an interregional advisor on remote sensing technology for the United Nations Department of Technical Cooperation for Development. At MIT and later Zhejiang University, Kong supervised about 50 PhD theses and 90 Master theses. Among Kong's PhD graduates were Leung Tsang, Weng Chew, Tarek Habashy, Shun-Lien Chuang, Apo Sezginer, Robert Shin, Jay K Lee, Eni Njoku, Michael Zuniga, Jean-Fu Kiang, Maurice Borgeaud, Soon Poh, Simon Yueh, Son Nghiem, Yaqiu Jin, Eric Yang, William Au, Joel Johnson, Chi On Ao, Henning Braunisch, Bae Ian Wu, Xudong Chen, Baile Zhang, and Hongsheng Chen, among others. From 1984 to 2003, he was the chairman of Area IV on Energy and Electromagnetic Systems at MIT. From 1989 until 2008, Kong was director of the Center for Electromagnetic Theory and Applications in the Research Laboratory of Electronics at MIT.
Kong was the founding president of The Electromagnetics Academy from 1989 to 2008. He also founded the academy's China branch at Zhejiang University in Hangzhou, known as The Electromagnetics Academy at Zhejiang University, serving as its dean from 2003 to 2008.
Kong was also the founding chair of the Progress In Electromagnetics Research Symposium (PIERS), from 1989 to 2008. From 1987 to 2008, he was the editor-in-chief of the Journal of Electromagnetic Waves and Applications. He was the founding chief editor for Progress in Electromagnetics Research (PIER) series (1989–2008), chief editor for Progress In Electromagnetics Research (PIER) Letters, B, M, C in 2008, and chief editor for PIERS Online from 2005 to 2008.
Honors and awards
Kong was rewarded with many honors and awards during his life, including:
Fellow, The Electromagnetics Academy
Fellow, Institute of Electrical and Electronics Engineers
Fellow, Optical Society of America
Distinguished Achievement Award, from the IEEE Geoscience and Remote Sensing Society, 2000
IEEE Electromagnetics Award, 2004
Honorary doctorate from Paris X University Nanterre, 2006
Books
Electromagnetic Wave Theory, J. A. Kong, EMW Publishing, 1016 pg, 2008 (Previous editions by Wiley-Interscience: 1975, 1986 and 1990 and EMW Publishing: 1998, 2000 and 2005)
Maxwell Equations, J. A. Kong, EMW Publishing, 398 pg, 2002
Theory of Microwave Remote Sensing, L. Tsang, J. A. Kong and R.T. Shin, Wiley-Interscience, 613 pages, 1985
Scattering of Electromagnetic Waves: Theories and Applications, L. Tsang, J. A. Kong and K. H. Ding, Wiley-Interscience, 426 pg, 2000
Scattering of Electromagnetic Waves: Numerical Simulations, L. Tsang, J. A. Kong, K. H. Ding and C. Ao, Wiley-Interscience, 705 pg, 2001
Scattering of Electromagnetic Waves: Advanced Topics, L. Tsang and J. A. Kong, Wiley-Interscience, 413 pg, 2001
Applied Electromagnetism, L.C. Shen and J. A. Kong, PWS, 1987
Electromagnetic Waves, David H. Staelin, Ann W. Morgenthaler and Jin Au Kong, 1993
See also
List of textbooks in electromagnetism
References
External links
Jin Au Kong's homepage at MIT
MIT News: Jin Au Kong, long-serving EECS professor, dies aged 65
Jin Au Kong at IEEE
1942 births
2008 deaths
Fellows of the IEEE
Massachusetts Institute of Technology faculty
Academic staff of Zhejiang University
Fellows of Optica (society)
Scientists from Nanjing
Chinese electrical engineers
Electrical engineering academics
National Chiao Tung University alumni
National Taiwan University alumni
Syracuse University alumni
Chinese non-fiction writers
Chinese expatriates in the United States
Microwave engineers
20th-century non-fiction writers
Metamaterials scientists
Descendants of Confucius
American people of Taiwanese descent
Taiwanese expatriates in the United States | Jin Au Kong | Materials_science | 1,048 |
35,896,259 | https://en.wikipedia.org/wiki/V602%20Carinae | V602 Carinae (V602 Car, HD 97671) is a red supergiant and variable star of spectral type M3 in the constellation Carina. It is considered to be one of the largest known stars, being around 1,000 times larger than the Sun.
In 2005, V602 Car was calculated to have a bolometric luminosity below and a radius around based on the assumption of an effective temperature of . A 2015 study derived a slightly higher bolometric luminosity of based on the measured flux and an assumed distance, and a larger radius of based on the measured angular diameter and luminosity. An effective temperature of was then calculated from the luminosity and radius. A more recent measurement based on a Gaia Data Release 2 parallax of gives a luminosity at with a corresponding radius of based on the same effective temperature derived in 2005. The radius was measured again in 2024 at .
V602 Car has an estimated mass loss rate of per year. An excess of emission at long wavelengths from this star, as well as a small amount of silicate emission, suggests that it may be enclosed by an extensive cloud of dust.
V602 Car is a semiregular variable star whose brightness varies between magnitudes 7.6 and 9.1 with a period of 635 or 672 days. Despite the large amplitude of variation, it was only named as a variable star in 2006.
See also
RT Carinae
EV Carinae
References
M-type supergiants
Carina (constellation)
Carinae, V602
097671
Semiregular variable stars
CD-59 3623
J11132996-6005288
IRAS catalogue objects
Population I stars | V602 Carinae | Astronomy | 362 |
14,330,683 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20%28Norway%29 | The Royal Norwegian Ministry of Energy is a Norwegian ministry responsible for energy, including petroleum and natural gas production in the North Sea. It has been led by Minister of Energy Terje Aasland of the Labour Party since 2022. The department must report to the legislature, the Storting.
History
The ministry was originally established in 1978, when petroleum and energy affairs were transferred from the Ministry of Industry. It was merged back into the Ministry of Industry in 1993 to become the Ministry of Industry and Energy. In 1997, petroleum and energy affairs were once again transferred to the current ministry. It was renamed the Ministry of Energy in 2024.
Organisation
Political staff
As of June 2023, the political staff of the ministry is as follows:
Minister Terje Aasland (Labour Party)
State Secretary Andreas Bjelland Eriksen (Labour Party)
State Secretary Astrid Bergmål (Labour Party)
State Secretary Elisabeth Sæther (Labour Party)
Political Advisor Jorid Juliussen Nordmelan (Labour Party)
Departments
The ministry is divided into four departments and a communication unit.
Communication Unit
Technology and Industry Department
Energy and Water Resources Department
Department of Trade and Industrial Economics
Administration, Budgets and Accounting Department
Subsidiaries
Subordinate government agencies:
Norwegian Petroleum Directorate
Norwegian Water Resources and Energy Directorate
Gassnova
Statnett
Wholly owned limited companies:
Gassco
Petoro
Partially owned public limited companies:
Equinor (62% ownership)
References
External links
Official web site
Petroleum and Energy
Norway
Ministry of Petroleum and Energy
Petroleum politics
1978 establishments in Norway
Norway, Petroleum and Energy | Ministry of Energy (Norway) | Chemistry,Engineering | 318 |
34,856,180 | https://en.wikipedia.org/wiki/Street%20box%20camera | The street box camera or kamra-e-faoree is a handmade wooden camera. It is both a camera and a darkroom in one. The term Kamra-e-faoree comes from Dari where it means ‘instant camera’.
History
This type of camera was first used in small towns and villages where there were no photographic studios. These places were visited by travelling photographers who would improvise a studio on the spot.
After the First World War, street box cameras became increasingly popular in big cities. Most of these street photographers had a fixed spot outdoors near tourist hot spots, where they lured passers-by into having their portrait taken on the spot. The photos were developed right away and the picture was ready within a couple of minutes. That is why in Spanish these street photographers are called Minuteros.
Technical details
The cameras were often built by the photographers themselves, and this resulted in unique designs emerging in each country.
Generally speaking, the camera consists of a wooden box with a simple lens mounted in front. Often a complete folding camera is built into the front. At the back of the box, inside, a sheet of photographic paper is mounted for exposure. To process the picture, the photographer sticks his hand through an opening with a sleeve that prevents light from entering. Inside the box there is room for an improvised lab where the sheet of paper is developed and fixed in simple containers. The rinsing is done in a bucket of water. The pictures are taken on photographic paper and not on film, whose processing is more time-consuming and would require an indoor lab. After development you get a negative image. This negative image is put on a holder attached to the front of the camera and a new photo is taken of the same image. The complete in-camera process is repeated and the result is a regular, positive image.
Current use
Generations of people living in small towns and villages have had their portrait taken sitting in front of a kamra-e-faoree, but use of the camera has declined significantly since the start of the 21st century due to digital photography. Nowadays the street box camera is used only at tourist hot spots across the world.
References
External links & Publications
Publications
Afghan Box Camera, Lukas Birk & Sean Foley. Dewi Lewis Publishing, 2013.
Smudgers, Chris Wroblewski, Selfpublished, 2003.
Photographes de Rue. Street photographers. Minuteros, Ghnassia, Patrick, Zilmo de Freitas. Mialet, Katar press, 2003.
2 MISSISSIPPI, Hans Zeeldieb. Edition Le Mulet, 2019.
Los Ambulantes: The Itinerant Photographers of Guatemala, Avon Neal & Ann Parker, The MIT Press, 1984.
Cameras
Street photographers | Street box camera | Technology | 553 |
58,725 | https://en.wikipedia.org/wiki/Physical%20security | Physical security describes security measures that are designed to deny unauthorized access to facilities, equipment, and resources and to protect personnel and property from damage or harm (such as espionage, theft, or terrorist attacks). Physical security involves the use of multiple layers of interdependent systems that can include CCTV surveillance, security guards, protective barriers, locks, access control, perimeter intrusion detection, deterrent systems, fire protection, and other systems designed to protect persons and property.
Overview
Physical security systems for protected facilities can be intended to:
deter potential intruders (e.g. warning signs, security lighting);
detect intrusions, and identify, monitor and record intruders (e.g. security alarms, access control and CCTV systems);
trigger appropriate incident responses (e.g. by security guards and police);
delay or prevent hostile movements (e.g. door reinforcements, grilles);
protect the assets (e.g. safes).
It is up to security designers, architects and analysts to balance security controls against risks, taking into account the costs of specifying, developing, testing, implementing, using, managing, monitoring and maintaining the controls, along with broader issues such as aesthetics, human rights, health and safety, and societal norms or conventions. Physical access security measures that are appropriate for a high security prison or a military site may be inappropriate in an office, a home or a vehicle, although the principles are similar.
Elements and design
Deterrence
The goal of deterrence methods is to convince potential attackers that a successful attack is unlikely due to strong defenses.
The initial layer of security for a campus, building, office, or other physical space can use crime prevention through environmental design to deter threats. Some of the most common examples are also the most basic: warning signs or window stickers, fences, vehicle barriers, vehicle height-restrictors, restricted access points, security lighting and trenches.
Physical barriers
For example, tall fencing, topped with barbed wire, razor wire or metal spikes are often emplaced on the perimeter of a property, generally with some type of signage that warns people not to attempt entry. However, in some facilities imposing perimeter walls or fencing will not be possible (e.g. an urban office building that is directly adjacent to public sidewalks) or it may be aesthetically unacceptable (e.g. surrounding a shopping center with tall fences topped with razor wire); in this case, the outer security perimeter will be generally defined as the walls, windows and doors of the structure itself.
Security lighting
Security lighting is another effective form of deterrence. Intruders are less likely to enter well-lit areas for fear of being seen. Doors, gates, and other entrances, in particular, should be well lit to allow close observation of people entering and exiting. When lighting the grounds of a facility, widely distributed low-intensity lighting is generally superior to small patches of high-intensity lighting, because the latter can have a tendency to create blind spots for security personnel and CCTV cameras. It is important to place lighting in a manner that makes it difficult to tamper with (e.g. suspending lights from tall poles), and to ensure that there is a backup power supply so that security lights will not go out if the electricity is cut off. The introduction of low-voltage LED-based lighting products has enabled new security capabilities, such as instant-on or strobing, while substantially reducing electrical consumption.
Security lighting for nuclear power plants in the United States
For nuclear power plants in the United States (U.S.), per the U.S. Nuclear Regulatory Commission (NRC), 10 CFR Part 73, [security] lighting is mentioned four times. The most notable mention is contained in 10 CFR 73.55(i)(6) Illumination, which clearly identifies that licensees "-shall provide a minimum illumination level of 0.2 foot-candles, measured horizontally at ground level, in the isolation zones and appropriate exterior areas within the protected area-". This is also the minimum illumination level specified in Table H–2 Minimum Night Firing Criteria of 10 CFR 73 Appendix H, for night firing. Per 10 CFR 73.46(b)(7), "-Tactical Response Team members, armed response personnel, and guards shall qualify and requalify, at least every 12 months, for day and night firing with assigned weapons in accordance with Appendix H-"; therefore, on the respective shooting range [at night] per Appendix H, Table H–2, "-all courses [shall have] 0.2 foot-candles at center mass of target area-", applicable to handguns, shotguns, and rifles. One foot-candle is approximately 10.76 lux, so the minimum illumination requirements in the above sections correspond to about 2.15 lux.
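The unit conversion behind those figures is straightforward; a one-line sketch:
```python
LUX_PER_FOOT_CANDLE = 10.764  # 1 lumen/ft^2 expressed in lumen/m^2 (lux)

# The NRC minimum of 0.2 foot-candles converted to lux:
print(0.2 * LUX_PER_FOOT_CANDLE)  # ~2.15 lux
```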
Intrusion detection and electronic surveillance
Alarm systems and sensors
Security alarms can be installed to alert security personnel when unauthorized access is attempted. Alarm systems work in tandem with physical barriers, mechanical systems, and security guards, serving to trigger a response when these other forms of security have been breached. They consist of sensors including perimeter sensors, motion sensors, contact sensors, and glass break detectors.
However, alarms are only useful if there is a prompt response when they are triggered. In the reconnaissance phase prior to an actual attack, some intruders will test the response time of security personnel to a deliberately tripped alarm system. By measuring the length of time it takes for a security team to arrive (if they arrive at all), the attacker can determine if an attack could succeed before authorities arrive to neutralize the threat. Loud audible alarms can also act as a psychological deterrent, by notifying intruders that their presence has been detected.
In some U.S. jurisdictions, law enforcement will not respond to alarms from intrusion detection systems unless the activation has been verified by an eyewitness or video. Policies like this one have been created to combat the 94–99 percent rate of false alarm activation in the United States.
Video surveillance
Surveillance cameras can be a deterrent when placed in highly visible locations and are useful for incident assessment and historical analysis. For example, if alarms are being generated and there is a camera in place, security personnel assess the situation via the camera feed. In instances when an attack has already occurred and a camera is in place at the point of attack, the recorded video can be reviewed. Although the term closed-circuit television (CCTV) is common, it is quickly becoming outdated as more video systems lose the closed circuit for signal transmission and are instead transmitting on IP camera networks.
Video monitoring does not necessarily guarantee a human response. A human must be monitoring the situation in real time in order to respond in a timely manner; otherwise, video monitoring is simply a means to gather evidence for later analysis. However, technological advances like video analytics are reducing the amount of work required for video monitoring as security personnel can be automatically notified of potential security events.
Access control
Access control methods are used to monitor and control traffic through specific access points and areas of the secure facility. This is done using a variety of methods, including CCTV surveillance, identification cards, security guards, biometric readers, locks, doors, turnstiles and gates.
Mechanical access control systems
Mechanical access control systems include turnstiles, gates, doors, and locks. Key control of the locks becomes a problem with large user populations and any user turnover. Keys quickly become unmanageable, often forcing the adoption of electronic access control.
Electronic access control systems
Electronic access control systems provide secure access to buildings or facilities by controlling who can enter and exit. Some aspects of these systems can include:
Access credentials - Access cards, fobs, or badges are used to identify and authenticate authorized users. Information encoded on the credentials is read by card readers at entry points.
Access control panels - These control the system, make access decisions, and are usually located in a secure area. Access control software runs on the panels and interfaces with the card readers.
Readers - Installed at access points, these read credentials or other data, and send information to the access control panel. Readers can be proximity, magnetic stripe, smart card, biometrics, etc.
Door locking hardware - Electrified locks, electric strikes, or maglocks physically secure doors and release when valid credentials are presented. Integration allows doors to unlock when authorized.
Request to exit devices - These allow free egress through an access point without triggering an alarm. Buttons, motion detectors, and other sensors are commonly used.
Alarms - Unauthorized access attempts or held/forced doors can trigger audible alarms and alerts. Integration with camera systems also occurs.
Access levels - Software can limit access to specific users, groups, and times. For example, some employees may have 24/7 access to all areas while others are restricted.
Event logging - Systems record activity like access attempts, alarms, user tracking, etc. for security auditing and troubleshooting purposes.
Electronic access control uses credential readers, advanced software, and electrified locks to provide programmable, secure access management for facilities. Integration of cameras, alarms and other systems is also common.
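The decision logic these components implement can be sketched in a few lines. The following is a hypothetical minimal model (all names, credentials and rules are invented for illustration), combining access levels and event logging:
```python
from datetime import datetime, time

# Hypothetical credential database: each badge maps to permitted zones
# and an allowed time window (an "access level").
ACCESS_LEVELS = {
    "badge-1001": {"zones": {"lobby", "lab"}, "hours": (time(0, 0), time(23, 59))},
    "badge-1002": {"zones": {"lobby"}, "hours": (time(8, 0), time(18, 0))},
}
event_log = []  # every attempt is recorded for security auditing

def request_access(badge_id, zone, now=None):
    now = now or datetime.now()
    entry = ACCESS_LEVELS.get(badge_id)
    start, end = entry["hours"] if entry else (None, None)
    granted = bool(entry) and zone in entry["zones"] and start <= now.time() <= end
    event_log.append((now.isoformat(), badge_id, zone, granted))  # audit trail
    return granted  # a real panel would now release the door lock

print(request_access("badge-1002", "lab"))  # False: zone not permitted
```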
An additional sub-layer of mechanical/electronic access control protection is reached by integrating a key management system to manage the possession and usage of mechanical keys to locks or property within a building or campus.
Identification systems and access policies
Another form of access control (procedural) includes the use of policies, processes and procedures to manage the ingress into the restricted area. An example of this is the deployment of security personnel conducting checks for authorized entry at predetermined points of entry. This form of access control is usually supplemented by the earlier forms of access control (i.e. mechanical and electronic access control), or simple devices such as physical passes.
Security personnel
Security personnel play a central role in all layers of security. All of the technological systems that are employed to enhance physical security are useless without a security force that is trained in their use and maintenance, and which knows how to properly respond to breaches in security. Security personnel perform many functions: patrolling facilities, administering electronic access control, responding to alarms, and monitoring and analyzing video footage.
See also
Alarm management
Artificial intelligence for video surveillance
Biometric device
Biometrics
Computer security
Door security
Executive protection
Guard tour patrol system
Information security
Infrastructure security
Logical security
Nuclear security
Perimeter intrusion detection system
Physical Security Professional
Security alarm
Security company
Security convergence
Security engineering
Surveillance
High-voltage transformer fire barriers
References
External links
UK NPSA Tools, Catalogues and Standards
Physical security
Security
Crime prevention
Public safety
National security
Warning systems
Security engineering
Perimeter security | Physical security | Technology,Engineering | 2,150 |
23,154,241 | https://en.wikipedia.org/wiki/Methylamide | In biochemistry, an N-methylamide (NME) is a blocking group for the C-terminus end of peptides. When the carboxyl group of the C-terminus is replaced with a methylamide, further elongation of the peptide chain is prevented. C-Terminal modified peptides are also useful for the modulation of structure-activity relationships and for modifying conformational properties of peptides. N-Methylamides can be prepared directly from solid phase resin-bound peptides.
References
Carboxamides | Methylamide | Chemistry,Biology | 109 |
8,566,056 | https://en.wikipedia.org/wiki/Chain%20rule%20for%20Kolmogorov%20complexity | The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy, which states:
H(X, Y) = H(X) + H(Y | X).
That is, the combined randomness of two sequences X and Y is the sum of the randomness of X plus whatever randomness is left in Y once we know X.
This follows immediately from the definitions of conditional and joint entropy, and the fact from probability theory that the joint probability is the product of the marginal and conditional probability:
P(X, Y) = P(X) P(Y | X).
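Since Kolmogorov complexity itself is uncomputable, only the entropy analogue can be verified directly; a minimal numerical check on a toy joint distribution:
```python
import math

# Toy joint distribution p(x, y) over X, Y in {0, 1}, used to verify
# the entropy chain rule H(X, Y) = H(X) + H(Y | X).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def H(probs):
    return -sum(q * math.log2(q) for q in probs if q > 0)

H_xy = H(p.values())                             # joint entropy
px = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}  # marginal of X
# H(Y | X) = sum over x of p(x) * H(Y | X = x)
H_y_given_x = sum(
    px[x] * H([p[(x, y)] / px[x] for y in (0, 1)]) for x in (0, 1)
)
print(H_xy, H(px.values()) + H_y_given_x)  # both ~1.846: the rule holds
```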
The equivalent statement for Kolmogorov complexity does not hold exactly; it is true only up to a logarithmic term:
K(x, y) = K(x) + K(y | x) + O(log(K(x, y))).
(An exact version, KP(x, y) = KP(x) + KP(y | x*) + O(1), holds for the prefix complexity KP, where x* is a shortest program for x.)
It states that the shortest program printing x and y is obtained by concatenating a shortest program printing x with a program printing y given x, plus at most a logarithmic factor. The result implies that algorithmic mutual information, an analogue of mutual information for Kolmogorov complexity, is symmetric up to a logarithmic term: I(x : y) = I(y : x) + O(log K(x, y)) for all x, y.
Proof
The ≤ direction is obvious: we can write a program to produce x and y by concatenating a program to produce x, a program to produce y given access to x, and (whence the log term) the length of one of the programs, so that we know where to separate the two programs for x and y|x (log(K(x, y)) upper-bounds this length).
For the ≥ direction, it suffices to show that for all k, l such that k + l = K(x, y) we have that either
K(x | k, l) ≤ k + O(1)
or
K(y | x, k, l) ≤ l + O(1).
Consider the list (a1,b1), (a2,b2), ..., (ae,be) of all pairs produced by programs of length exactly K(x, y) [hence K(a, b) ≤ K(x, y)]. Note that this list
contains the pair (x, y),
can be enumerated given k and l (by running all programs of length k + l in parallel),
has at most 2^K(x, y) elements (because there are at most 2^n programs of length n).
First, suppose that x appears less than 2^l times as first element. We can specify y given x, k, l by enumerating (a1,b1), (a2,b2), ... and then selecting (x, y) in the sub-list of pairs (x, b). By assumption, the index of (x, y) in this sub-list is less than 2^l and hence, there is a program for y given x, k, l of length l + O(1).
Now, suppose that x appears at least 2^l times as first element. This can happen for at most 2^(K(x, y) − l) = 2^k different strings. These strings can be enumerated given k and l, and hence x can be specified by its index in this enumeration. The corresponding program for x has size k + O(1). Theorem proved.
References
Computability theory
Theory of computation
Articles containing proofs | Chain rule for Kolmogorov complexity | Mathematics,Technology,Engineering | 539 |
3,526,052 | https://en.wikipedia.org/wiki/Van%20der%20Waerden%20notation | In theoretical physics, Van der Waerden notation refers to the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. This is standard in twistor theory and supersymmetry. It is named after Bartel Leendert van der Waerden.
Dotted indices
Undotted indices (chiral indices)
Spinors with lower undotted indices have left-handed chirality; such indices are called chiral indices.
Dotted indices (anti-chiral indices)
Spinors with raised dotted indices, plus an overbar on the symbol (not on the index), are right-handed; such indices are called anti-chiral indices.
Without the indices, i.e. in "index-free notation", the overbar is retained on right-handed spinors, since otherwise the chirality would be ambiguous when no index is indicated.
Hatted indices
Indices which have hats are called Dirac indices, and are the set of dotted and undotted, or chiral and anti-chiral, indices. For example, if
ψ_α and χ̄^α̇
are a left-handed and a right-handed Weyl spinor respectively, then a Dirac spinor in the chiral basis is represented as
Ψ^â = (ψ_α, χ̄^α̇),
where â = (α, α̇) is a Dirac index. In this notation the Dirac adjoint (also called the Dirac conjugate) is
Ψ̄_â = (χ^α, ψ̄_α̇).
See also
Dirac equation
Infeld–Van der Waerden symbols
Lorentz transformation
Pauli equation
Ricci calculus
Notes
References
Spinors in physics
Spinors
Mathematical notation | Van der Waerden notation | Mathematics | 279 |
60,128,567 | https://en.wikipedia.org/wiki/Kerstin%20Nordstrom | Kerstin N. Nordstrom is an American physicist who is the Clare Boothe Luce Assistant Professor of Physics in the Department of Physics at Mount Holyoke College. Her research focuses on soft matter physics; her work has been featured in the LA Times and in the BBC News.
Early life and education
Nordstrom completed a bachelor's degree in physics and mathematics at Bryn Mawr College in 2004. She joined the University of Pennsylvania as a graduate student, earning a Master of Science in 2006 and a PhD in 2010. Her doctoral thesis was titled "Jamming and flow of soft particle suspensions." In 2011, Nordstrom joined the University of Maryland, College Park as a postdoctoral researcher. At the University of Maryland, Nordstrom worked on several topics, including how beds of granular materials respond to impact and how razor clams burrow in sand.
Research and career
In 2014, Nordstrom joined Mount Holyoke College as an Assistant Professor. She is interested in complex fluid flows, including the systems of solid particles found in granular materials.
Awards and honors
2012 AAAS Mass Media Fellow
2018 Cottrell Scholar Award
2019 National Science Foundation CAREER Award
External media
In 2016, Nordstrom appeared on Jeopardy!.
References
Living people
Bryn Mawr College alumni
University of Pennsylvania alumni
Mount Holyoke College faculty
Year of birth missing (living people)
American women physicists
Bionics | Kerstin Nordstrom | Engineering,Biology | 278 |
32,721,695 | https://en.wikipedia.org/wiki/Spacecraft%20thermal%20control | In spacecraft design, the function of the thermal control system (TCS) is to keep all the spacecraft's component systems within acceptable temperature ranges during all mission phases. It must cope with the external environment, which can vary in a wide range as the spacecraft is exposed to the extreme coldness found in the shadows of deep space or to the intense heat found in the unfiltered direct sunlight of outer space. A TCS must also moderate the internal heat generated by the operation of the spacecraft it serves.
A TCS can eject heat passively through the simple and natural infrared radiation of the spacecraft itself, or actively through an externally mounted infrared radiation coil.
Thermal control is essential to guarantee the optimal performance and success of the mission because if a component is subjected to temperatures which are too high or too low, it could be damaged or its performance could be severely affected. Thermal control is also necessary to keep specific components (such as optical sensors, atomic clocks, etc.) within a specified temperature stability requirement, to ensure that they perform as efficiently as possible.
Active or passive systems
The thermal control subsystem can be composed of both passive and active items and works in two ways:
Protects the equipment from overheating, either by thermal insulation from external heat fluxes (such as the Sun or the planetary infrared and albedo flux), or by proper heat removal from internal sources (such as the heat emitted by the internal electronic equipment).
Protects the equipment from temperatures that are too low, by thermal insulation from external sinks, by enhanced heat absorption from external sources, or by heat release from internal sources.
Passive thermal control system (PTCS) components include:
Multi-layer insulation (MLI), which protects the spacecraft from excessive solar or planetary heating, as well as from excessive cooling when exposed to deep space.
Coatings that change the thermo-optical properties of external surfaces.
Thermal fillers to improve the thermal coupling at selected interfaces (for instance, on the thermal path between an electronic unit and its radiator).
Thermal washers to reduce the thermal coupling at selected interfaces.
Thermal doublers to spread on the radiator surface the heat dissipated by equipment.
Mirrors (secondary surface mirrors, SSM, or optical solar reflectors, OSR) to improve the heat rejection capability of the external radiators and at the same time to reduce the absorption of external solar fluxes.
Radioisotope heater units (RHU), used by some planetary and exploratory missions to produce heat for TCS purposes.
Active thermal control system (ATCS) components include:
Thermostatically controlled resistive electric heaters to keep the equipment temperature above its lower limit during the mission's cold phases.
Fluid loops to transfer the heat emitted by equipment to the radiators. They can be:
single-phase loops, controlled by a pump;
two-phase loops, composed of heat pipes (HP), loop heat pipes (LHP) or capillary pumped loops (CPL).
Louvers (which change the heat rejection capability to space as a function of temperature).
Thermoelectric coolers.
Thermal control systems
Environment interaction
Includes the interaction of the external surfaces of the spacecraft with the environment. Either the surfaces need to be protected from the environment, or there has to be improved interaction. Two main goals of environment interaction are the reduction or increase of absorbed environmental fluxes and reduction or increase of heat losses to the environment.
Heat collection
Includes the removal of dissipated heat from the equipment in which it is created to avoid unwanted increases in the spacecraft's temperature.
Heat transport
Is taking the heat from where it is created to a radiating device.
Heat rejection
The heat collected and transported has to be rejected at an appropriate temperature to a heat sink, which is usually the surrounding space environment. The rejection temperature depends on the amount of heat involved, the temperature to be controlled and the temperature of the environment into which the device radiates the heat.
Heat provision and storage
Is to maintain a desired temperature level, for which heat has to be provided and suitable heat storage capability has to be foreseen.
Environment
For a spacecraft the main environmental interactions are the energy coming from the Sun and the heat radiated to deep space. Other parameters also influence the thermal control system design such as the spacecraft's altitude, orbit, attitude stabilization, and spacecraft shape. Different types of orbit, such as low Earth orbit and geostationary orbit, also affect the design of the thermal control system.
Low Earth orbit (LEO)
This orbit is frequently used by spacecraft that monitor or measure the characteristics of the Earth and its surrounding environment and by uncrewed and crewed space laboratories, such as EURECA and the International Space Station. The orbit's proximity to the Earth has a great influence on the thermal control system needs, with the Earth's infrared emission and albedo playing a very important role, as well as the relatively short orbital period, less than 2 hours, and long eclipse duration. Small instruments or spacecraft appendages such as solar panels that have low thermal inertia can be seriously affected by this continuously changing environment and may require very specific thermal design solutions.
Geostationary orbit (GEO)
In this 24-hour orbit, the Earth's influence is almost negligible, except for the shadowing during eclipses, which can vary in duration from zero at solstice to a maximum of 1.2 hours at equinox. Long eclipses influence the design of both the spacecraft's insulation and heating systems. The seasonal variations in the direction and intensity of the solar input have a great impact on the design, complicating the heat transport by the need to convey most of the dissipated heat to the radiator in shadow, and the heat-rejection systems via the increased radiator area needed. Almost all telecommunications and many meteorological satellites are in this type of orbit.
Highly eccentric orbits (HEO)
These orbits can have a wide range of apogee and perigee altitudes, depending on the particular mission. Generally, they are used for astronomy observatories, and the TCS design requirements depend on the spacecraft's orbital period, the number and duration of the eclipses, the relative attitude of Earth, Sun and spacecraft, the type of instruments onboard and their individual temperature requirements.
Deep space and planetary exploration
An interplanetary trajectory exposes spacecraft to a wide range of thermal environments more severe than those encountered around Earth's orbits. Interplanetary mission includes many different sub-scenarios depending on the particular celestial body. In general, the common features are a long mission duration and the need to cope with extreme thermal conditions, such as cruises either close to or far away from the Sun (from 1 to 4–5 AU), low orbiting of very cold or very hot celestial bodies, descents through hostile atmospheres, and survival in the extreme (dusty, icy) environments on the surfaces of the bodies visited. The challenge for the TCS is to provide enough heat-rejection capability during the hot operating phases and yet still survive the cold inactive ones. The major problem is often the provision of the power required for that survival phase.
Temperature requirements
The temperature requirements of the instruments and equipment on board are the main factors in the design of the thermal control system. The goal of the TCS is to keep all the instruments working within their allowable temperature range. All of the electronic instruments on board the spacecraft, such as cameras, data-collection devices, batteries, etc., have a fixed operating temperature range. Keeping these instruments in their optimal operational temperature range is crucial for every mission. Some examples of temperature ranges include
Batteries, which have a very narrow operating range, typically between −5 and 20 °C.
Propulsion components, which have a typical range of 5 to 40 °C for safety reasons, however, a wider range is acceptable.
Cameras, which have a range of −30 to 40 °C.
Solar arrays, which have a wide operating range of −150 to 100 °C.
Infrared spectrometers, which have a range of −40 to 60 °C.
Current technologies
Coating
Coatings are the simplest and least expensive of the TCS techniques. A coating may be paint or a more sophisticated chemical applied to the surfaces of the spacecraft to lower or increase heat transfer. The characteristics of the type of coating depends on their absorptivity, emissivity, transparency, and reflectivity. The main disadvantage of coating is that it degrades quickly due to the operating environment. Coatings can also be applied in the form of adhesive tape or stickers to reduce degradation.
Multilayer insulation (MLI)
Multilayer insulation (MLI) is the most common passive thermal control element used on spacecraft. MLI prevents both heat losses to the environment and excessive heating from the environment. Spacecraft components such as propellant tanks, propellant lines, batteries, and solid rocket motors are also covered in MLI blankets to maintain ideal operating temperature. MLI consist of an outer cover layer, interior layer, and an inner cover layer. The outer cover layer needs to be opaque to sunlight, generate a low amount of particulate contaminates, and be able to survive in the environment and temperature to which the spacecraft will be exposed. Some common materials used for the outer layer are fiberglass woven cloth impregnated with PTFE Teflon, PVF reinforced with Nomex bonded with polyester adhesive, and FEP Teflon. The general requirement for the interior layer is that it needs to have a low emittance. The most commonly used material for this layer is Mylar aluminized on one or both sides. The interior layers are usually thin compared to the outer layer to save weight and are perforated to aid in venting trapped air during launch. The inner cover faces the spacecraft hardware and is used to protect the thin interior layers. Inner covers are often not aluminized in order to prevent electrical shorts. Some materials used for the inner covers are Dacron and Nomex netting. Mylar is not used because of flammability concerns. MLI blankets are an important element of the thermal control system.
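The insulating effect of the layer stack can be estimated from the standard radiation-shield result: for n floating shields of equal emissivity between two surfaces, the radiative heat flux drops roughly as 1/(n + 1). A sketch with illustrative (assumed) numbers:
```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def mli_heat_flux(n_shields, emissivity, t_hot, t_cold):
    """Idealized radiative heat flux (W/m^2) through n identical floating
    shields, each face having the same emissivity, ignoring conduction
    through spacers and seams (which dominates in practice)."""
    return SIGMA * (t_hot**4 - t_cold**4) / ((n_shields + 1) * (2.0 / emissivity - 1.0))

# 15 aluminized layers of emissivity 0.05 between a 300 K spacecraft
# surface and 4 K deep space leak well under 1 W/m^2.
print(mli_heat_flux(15, 0.05, 300.0, 4.0))  # ~0.7 W/m^2
```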
Louvers
Louvers are active thermal control elements that are used in many different forms. Most commonly they are placed over external radiators, louvers can also be used to control heat transfer between internal spacecraft surfaces or be placed on openings on the spacecraft walls. A louver in its fully open state can reject six times as much heat as it does in its fully closed state, with no power required to operate it. The most commonly used louver is the bimetallic, spring-actuated, rectangular blade louver also known as venetian-blind louver. Louver radiator assemblies consist of five main elements: baseplate, blades, actuators, sensing elements, and structural elements.
Heaters
Heaters are used in thermal control design to protect components under cold-case environmental conditions or to make up for heat that is not dissipated. Heaters are used with thermostats or solid-state controllers to provide exact temperature control of a particular component. Another common use for heaters is to warm up components to their minimal operating temperatures before the components are turned on.
The most common type of heater used on spacecraft is the patch heater, which consists of an electrical-resistance element sandwiched between two sheets of flexible electrically insulating material, such as Kapton. The patch heater may contain either a single circuit or multiple circuits, depending on whether or not redundancy is required within it.
Another type of heater, the cartridge heater, is often used to heat blocks of material or high-temperature components such as propellants. This heater consists of a coiled resistor enclosed in a cylindrical metallic case. Typically a hole is drilled in the component to be heated, and the cartridge is potted into the hole. Cartridge heaters are usually a quarter-inch or less in diameter and up to a few inches long.
Another type of heater used on spacecraft is the radioisotope heater units also known as RHUs. RHUs are used for travelling to outer planets past Jupiter due to very low solar radiance, which greatly reduces the power generated from solar panels. These heaters do not require any electrical power from the spacecraft and provide direct heat where it is needed. At the center of each RHU is a radioactive material, which decays to provide heat. The most commonly used material is plutonium dioxide. A single RHU weighs just 42 grams and can fit in a cylindrical enclosure 26 mm in diameter and 32 mm long. Each unit also generates 1 W of heat at encapsulation, however the heat generation rate decreases with time. A total of 117 RHUs were used on the Cassini mission.
Radiators
Excess waste heat created on the spacecraft is rejected to space by the use of radiators. Radiators come in several different forms, such as spacecraft structural panels, flat-plate radiators mounted to the side of the spacecraft, and panels deployed after the spacecraft is on orbit. Whatever the configuration, all radiators reject heat by infrared (IR) radiation from their surfaces. The radiating power depends on the surface's emittance and temperature. The radiator must reject both the spacecraft waste heat and any radiant-heat loads from the environment. Most radiators are therefore given surface finishes with high IR emittance to maximize heat rejection and low solar absorptance to limit heat from the Sun. Most spacecraft radiators reject between 100 and 350 W of internally generated electronics waste heat per square meter. Radiators' weight typically varies from almost nothing, if an existing structural panel is used as a radiator, to around 12 kg/m2 for a heavy deployable radiator and its support structure.
The radiators of the International Space Station are clearly visible as arrays of white square panels attached to the main truss.
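The quoted rejection figures follow from the Stefan–Boltzmann law: the net heat rejected per unit area is the emitted infrared flux minus whatever environmental flux the surface absorbs. A sketch with illustrative (assumed) values:
```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_rejection(emittance, t_radiator, absorbed_flux):
    """Net heat rejected per unit radiator area (W/m^2): emitted IR
    minus the environmental heat load absorbed by the surface."""
    return emittance * SIGMA * t_radiator**4 - absorbed_flux

# A radiator at 290 K with IR emittance 0.85 absorbing 80 W/m^2 of
# environmental load rejects ~260 W/m^2, consistent with the
# 100-350 W/m^2 range quoted above.
print(f"{net_rejection(0.85, 290.0, 80.0):.0f} W/m^2")
```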
Heat pipes
Heat pipes use a closed two-phase liquid-flow cycle with an evaporator and a condenser to transport relatively large quantities of heat from one location to another without electrical power. Aerospace-grade heat pipes such as constant conductance heat pipes (CCHPs) or axial groove heat pipes are aluminum extrusions with ammonia as the working fluid.
Typical applications include payload thermal management, heat transport, isothermalization, and radiator panel thermal enhancement.
Future of thermal control systems
Various composite materials
Heat rejection through advanced passive radiators
Spray cooling devices (e.g. liquid droplet radiator)
Lightweight thermal insulation
Variable-emittance technologies
Diamond films
Advanced thermal control coatings
Microsheets
Advanced spray on thin films
Silvered quartz mirrors
Advanced metallized polymer-based films
3D printed evaporators for Loop Heat Pipes
Space Copper-water Heat Pipes for chip-level cooling
Events
A major event in the field of space thermal control is the International Conference on Environmental Systems, organized every year by AIAA. Another is the European Space Thermal Analysis Workshop.
Sun shield
In spacecraft design, a Sun shield restricts or reduces heat caused by sunlight hitting a spacecraft. An example of use of a thermal shield is on the Infrared Space Observatory. The ISO sunshield helped protect the cryostat from sunlight, and it was also covered with solar panels.
For spacecraft approaching the Sun, the sunshade is usually called a heatshield. Notable spacecraft designs with heatshields include:
Messenger, launched 2004, orbited Mercury until 2015, had a ceramic cloth sunshade
Parker Solar Probe (was Solar Probe Plus), launched 2018 (carbon, carbon-foam, carbon sandwich heatshield)
Solar Orbiter, launched Feb 2020
BepiColombo, to orbit Mercury, with Optical Solar Reflectors (acting as a sunshade) on the Planetary Orbiter component.
Not to be confused with the concept of a global-scale Sun shield in geoengineering, often called a Space sunshade or "Sun shield", in which the spacecraft itself is used to block sunlight to a planet.
An example of a sunshield in spacecraft design is the sunshield on the James Webb Space Telescope. The JWST infrared telescope has a layered sunshade to keep the telescope cold.
See also
Environmental control and life-support system
Space sunshade
Temperature control
References
Bibliography
Gilmore, D. G., “Satellite Thermal Control Handbook”, The Aerospace Corporation Press, 1994.
Karam, R. D., Satellite Thermal Control for Systems Engineers, Progress in Astronautics and Aeronautics, AIAA, 1998.
Gilmore, D. G., “Spacecraft Thermal Control Handbook 2nd ed.”, The Aerospace Corporation Press, 2002.
De Parolis, M. N., and W. Pinter-Krainer. Current and Future Techniques for Spacecraft Thermal Control 1. Design Drivers and Current Technologies. 1 Aug 1996. Web: 5 Sep 2014.
Temperature control
Spacecraft design | Spacecraft thermal control | Technology,Engineering | 3,464 |
37,510,191 | https://en.wikipedia.org/wiki/Diffraction%20efficiency | In optics, diffraction efficiency is the performance of diffractive optical elements – especially diffraction gratings – in terms of power throughput. It is a measure of how much optical power is diffracted into a designated direction compared to the power incident onto the diffractive element or grating.
If the diffracted power is designated P_out and the incident power P_in, the efficiency reads
η = P_out / P_in.
Grating efficiency
In the most common case – the diffraction efficiency of optical gratings (therefore also called grating efficiency) – there are two possibilities to specify efficiency:
The absolute efficiency is defined as above and relates the power diffracted into a particular order to the incident power.
The relative efficiency relates the power diffracted into a particular order to the power that would be reflected by a mirror of the same coating as the grating, therefore attributing to inevitable reflection losses at the grating but not caused by inefficient diffraction itself.
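The two definitions differ only in the reference power; a sketch (all powers are illustrative values in consistent units):
```python
def absolute_efficiency(p_diffracted, p_incident):
    """Power diffracted into one order over the power incident on the grating."""
    return p_diffracted / p_incident

def relative_efficiency(p_diffracted, p_mirror):
    """Power diffracted into one order over the power reflected by a mirror
    with the same coating, factoring out plain reflection losses."""
    return p_diffracted / p_mirror

print(absolute_efficiency(0.62, 1.00))  # 0.62
print(relative_efficiency(0.62, 0.92))  # ~0.67, larger than the absolute value
```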
References
External links
Diffraction | Diffraction efficiency | Physics,Chemistry,Materials_science | 213 |
38,053,070 | https://en.wikipedia.org/wiki/Akka%20%28toolkit%29 | Akka is a source-available toolkit and runtime simplifying the construction of concurrent and distributed applications on the JVM. Akka supports multiple programming models for concurrency, but it emphasizes actor-based concurrency, with inspiration drawn from Erlang.
Language bindings exist for both Java and Scala. Akka is written in Scala and, as of Scala 2.10, the actors in the Scala standard library are deprecated in favor of Akka.
History
An actor implementation, written by Philipp Haller, was released in July 2006 as part of Scala 2.1.7. By 2008 Scala was attracting attention for use in complex server applications, but concurrency was still typically achieved by creating threads that shared memory and synchronized when necessary using locks. Aware of the difficulties with that approach and inspired by the Erlang programming language's library support for writing highly concurrent, event-driven applications, the Swedish programmer Jonas Bonér created Akka to bring similar capabilities to Scala and Java. Bonér began working on Akka in early 2009 and wrote up his vision for it in June of that year. The first public release was Akka 0.5, announced in January 2010. Akka is now part of the Lightbend Platform together with the Play framework and the Scala programming language.
In September 2022, Lightbend announced that Akka would change its license from the free software license Apache License 2.0 to a proprietary source-available license, known as the Business Source License (BSL). Any new code under the BSL would become available under the Apache License after three years.
Distinguishing features
The key points distinguishing applications based on Akka actors are:
Concurrency is message-based and asynchronous: typically no mutable data are shared and no synchronization primitives are used; Akka implements the actor model.
The way actors interact is the same whether they are on the same host or separate hosts, communicating directly or through routing facilities, running on a few threads or many threads, etc. Such details may be altered at deployment time through a configuration mechanism, allowing a program to be scaled up (to make use of more powerful servers) and out (to make use of more servers) without modification.
Actors are arranged hierarchically with regard to program failures, which are treated as events to be handled by an actor's supervisor (regardless of which actor sent the message triggering the failure). In contrast to Erlang, Akka enforces parental supervision, which means that each actor is created and supervised by its parent actor.
Akka has a modular structure, with a core module providing actors. Other modules are available to add features such as network distribution of actors, cluster support, Command and Event Sourcing, integration with various third-party systems (e.g. Apache Camel, ZeroMQ), and even support for other concurrency models such as Futures and Agents.
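To make the message-driven style described in the list above concrete, here is a minimal sketch using Akka's classic (untyped) actor API in Scala; the actor, system name, and message are invented for illustration.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A minimal actor: its state is private, and it handles one message at a time.
class Greeter extends Actor {
  def receive: Receive = {
    case name: String => println(s"Hello, $name!")
  }
}

object Demo extends App {
  val system  = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter](), "greeter")
  greeter ! "Akka" // '!' (tell) sends asynchronously; no mutable data is shared
  system.terminate() // shut down (a real program would coordinate shutdown)
}
```

Because the actor is addressed only through its reference, the same sending code works whether the actor is local or remote, which is the deployment transparency described above.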
Project structure
Viktor Klang became the technical lead for the Akka project in September 2011. When Viktor became Director of Engineering at Lightbend in December 2012, Roland Kuhn became the technical lead for Akka. The main part of the development is done by a core team employed at Lightbend, supported by an active community. The current emphasis is on extending cluster support.
Relation to other libraries
Other frameworks and toolkits have emerged to form an ecosystem around Akka:
The Spray toolkit is implemented using Akka and features an HTTP server as well as related facilities, such as a domain-specific language (DSL) for creating RESTful APIs
The Play framework for developing web applications offers integration with Akka
Up until version 1.6, Apache Spark used Akka for communication between nodes
The Socko Web Server library supports the implementation of REST APIs for Akka applications
The eventsourced library provides event-driven architecture (see also domain-driven design) support for Akka actors
The Gatling stress test tool for load-testing web servers is built upon Akka
The Scalatra web framework offers integration with Akka.
The Vaadin web app development framework can integrate with Akka
The RPC system of Apache Flink (a platform for distributed stream and batch data processing) is built using Akka, but has been isolated from it since v1.14.
The Lagom framework for building reactive microservices is implemented on top of Akka.
There are more than 250 public projects registered on GitHub which use Akka.
Publications about Akka
There are several books about Akka:
Akka Essentials
Akka Code Examples
Akka Concurrency
Akka in Action, Second Edition
Akka in Action
Effective Akka
Composable Futures with Akka 2.0, Featuring Java, Scala and Akka Code Examples
Akka also features in:
P. Haller's "Actors in Scala"
N. Raychaudhuri's "Scala in Action"
D. Wampler's "Functional Programming for Java Developers"
A. Alexander's "Scala Cookbook"
V. Subramaniam's "Programming Concurrency on the JVM"
M. Bernhardt's "Reactive Web Applications"
Besides many web articles that describe the commercial use of Akka, there are also overview articles about it.
References
External links
Official website for Akka
Java platform
Software development kits
Java development tools
Actor model (computer science)
Software using the Business Source License | Akka (toolkit) | Technology | 1,088 |
532,901 | https://en.wikipedia.org/wiki/Aquaphobia | Aquaphobia is an irrational fear of water.
Aquaphobia is considered a specific phobia of natural environment type in the Diagnostic and Statistical Manual of Mental Disorders. A specific phobia is an intense fear of something that poses little or no actual danger.
Etymology
The correct Greek-derived term for "water-fear" is hydrophobia, from ὕδωρ (hudōr), "water" and φόβος (phobos), "fear". However, this word has long been used in many languages, including English, to refer specifically to a symptom of later-stage rabies, which manifests itself in humans as difficulty in swallowing, fear when presented with liquids to drink, and an inability to quench one's thirst. Therefore, fear or aversion to water in general is referred to as aquaphobia.
Prevalence
A study of epidemiological data from 22 low, lower-middle, upper-middle and high-income countries revealed that "fear of still water or weather events" had a prevalence of 2.3% across all countries; in the US the prevalence was 4.3%. In an article on anxiety disorders, Lindal and Stefansson suggest that aquaphobia may affect as many as 1.8% of the general Icelandic population, or almost one in fifty people. In the United States, 46% of adults are afraid of deep water in pools and 64% are afraid of deep open water.
Manifestation of aquaphobia
Specific phobias are a type of anxiety disorder in which a person may feel extremely anxious or have a panic attack when exposed to the object of fear. Specific phobias are a common mental disorder.
Psychologists indicate that aquaphobia manifests itself in people through a combination of experiential and genetic factors. Five common causes of aquaphobia are an instinctive fear of drowning, a frightening personal experience, having an overprotective parent or a parent with aquaphobia, psychological difficulty adjusting to water, and a lack of trust in water.
In the case of one 37-year-old media professor, the fear initially presented itself as "severe pain, accompanied by a tightness of his forehead", along with a choking sensation, discrete panic attacks, and a reduction in his intake of fluids.
Signs and symptoms
Physical responses include nausea, dizziness, numbness, shortness of breath, increased heart rate, sweating, and shivering.
In addition to the signs and symptoms above, some general signs and symptoms one may display in reaction to a specific phobia include:
Physical Symptoms: trembling, hot flushes or chills, pain or tightness in chest, butterflies in stomach, feeling faint, dry mouth, ringing in ears, and confusion.
Psychological Symptoms: fear of losing control, of fainting, and of dying, and a sense of dread.
Treatment and case studies
A few treatment options include:
Hypnosis and systematic desensitization – in one case, a 28-year-old woman with aquaphobia since childhood was treated with hypnosis and systematic desensitization in an 8-week, 5-session program, with 2-month and 1-year follow-ups. In another, a 37-year-old man with 10 years of extreme aquaphobia (he could not even drink water) underwent 6 sessions of hypnotherapy; the therapy was successful, with no relapse at a 6-month follow-up.
Cognitive behavioral therapy
Exposure therapy
Medication
See also
List of phobias
Thalassophobia – fear of the sea
References
Phobias
Water | Aquaphobia | Environmental_science | 705 |
31,184,651 | https://en.wikipedia.org/wiki/Internet%20influences%20on%20communities | A community is "a body of people or things viewed collectively". According to Steven Brint, communities are "aggregates of people who share common activities and/or beliefs and who are bound together principally by relations of affect, loyalty, common values, and/or personal concern – i.e., interest in the personalities and life events of one another".
Jenny Preece suggested evaluating communities according to physical features: size, location, and the boundaries that confine them. When commuting became a way of life and cheaper transportation made it easier for people to join multiple communities to satisfy different needs, the strength and type of relationships among people seemed more promising criteria.
Since social capital is built of trust, rules, norms and networks, it can be said that the social capital of communities has grown. The lower entrance barriers to the community have made it easier to be a part of many different communities. This goes hand in hand with Don Tapscott's theory of how the digital society has changed collaboration and innovation to a world of co-creation.
From birth to death, people are shaped by the communities to which they belong, affecting everything from how they talk to whom they talk with. Just like the telephone and the television changed the way people interact socially, computers have transformed communication and at the same time created new norms for social capital.
"A virtual community is a group of people who may or may not meet one another face to face, who exchange words and ideas through the mediation of computer bulletin board systems and other digital networks". Along with the fact that computer usage has spread, the use of virtual communities have grown. Rheingold defines virtual communities as "social aggregations that emerge from the Net when enough people carry on those public discussions long enough, with sufficient human feeling, to form webs of personal relationships in cyberspace". Michael Porter describes a virtual community as "an aggregation of individuals or business partners who interact around a shared interest, where the interaction is at least partially supported and/or mediated by technology and guided by some protocols or norms".
Virtual communities consist of "people with shared interests or goals for whom electronic communication is a primary form of interaction" and have created new forms of collaboration. "The most skilled and experienced members of the community provide leadership and help integrate contributions from the community as a whole. This way, virtual communities can use the voluntary motivations that exist in a community to assign the right person to the right task more effectively than traditional forms".
According to Benkler, we can "see a thickening of the preexisting relations with friends and family, in particular with those who were hard to reach earlier". "Also, we are beginning to see the emergence of a greater scope for limited-purpose, loose relationships. Although these may not fit the ideal model of virtual communities, they are effective and meaningful to their participants".
The heightened individual capacity that is actually a driving social force has raised concerns among many that the Internet is further fragmenting the community, making people spend their time in front of their computers instead of socializing with each other. Empirical studies show, however, that we use the Internet and online communities at the expense of television, and that is an exchange that promotes social ties.
Social capital
Social capital is a concept built from the premise that some value emanates from social networking due to social interaction, which may have a positive influence on the society of the individuals who belong to the group by facilitating coordinated actions (Putnam et al., 1993). Simply put, social capital is "the ability of people to work together for some common purpose" (Rosenfeld, 1997). Trust, rules, norms and networks create social capital (Barr, 2000; Narayan, 1997).
Evaluation
A number of innovative ways have been employed to measure social capital, however, there is not a one true way of measuring it. First, the most comprehensive definitions of social capital are multidimensional, incorporating different levels and units of analysis. Second, any attempt to measure the properties of inherently ambiguous concepts such as "community", "network" and "organization" is correspondingly problematic. Third, few long-standing surveys were designed to measure "social capital", leaving contemporary researchers to compile indexes from a range of approximate items, such as measures of trust in government, voting trends, memberships in civic organizations, hours spent volunteering. New surveys currently being tested will hopefully produce more direct and accurate indicators.
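To make the idea of compiling an index from approximate survey items concrete, here is a minimal sketch; the items, caps, and equal weighting are illustrative assumptions, not a validated instrument.

```scala
// A crude per-respondent social-capital index built from survey proxies
// (trust in government, voting, civic memberships, volunteering).
case class Respondent(trustInGovernment: Double,    // 0..1 agreement score
                      votedLastElection: Double,    // 0 or 1
                      civicMemberships: Double,     // raw count
                      volunteerHoursPerMonth: Double)

def socialCapitalIndex(r: Respondent): Double = {
  val items = Seq(
    r.trustInGovernment,
    r.votedLastElection,
    math.min(r.civicMemberships, 5.0) / 5.0,        // cap and normalize to 0..1
    math.min(r.volunteerHoursPerMonth, 20.0) / 20.0 // cap and normalize to 0..1
  )
  items.sum / items.size // unweighted mean of the normalized items
}

val example = Respondent(0.6, 1.0, 2.0, 5.0)
val index   = socialCapitalIndex(example) // (0.6 + 1.0 + 0.4 + 0.25) / 4 = 0.5625
```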
Knack and Keefer (1997) used indicators of trust and civic norms from the World Values Survey for a sample of 29 market economies. They used these measures as proxies for the strength of civic associations in order to test two different propositions on the effects of social capital on economic growth, the "Olson effects" (associations stifle growth through rent-seeking) and "Putnam effects" (associations facilitate growth by increasing trust). Inglehart (1997) has done the most extensive work on the implications of the WVS's results for general theories of modernization and development.
Narayan and Pritchett (1997) construct a measure of social capital in rural Tanzania, using data from the Tanzania Social Capital and Poverty Survey (SCPS). This large-scale survey asked individuals about the extent and characteristics of their associational activity, and their trust in various institutions and individuals. They match this measure of social capital with data on household income in the same villages (both from the SCPS and from an earlier household survey, the Human Resources Development Survey). They find that village-level social capital raises household incomes.
Temple and Johnson (1998), extending the earlier work of Adelman and Morris (1967), use ethnic diversity, social mobility, and the prevalence of telephone services in several sub-Saharan African countries as proxies for the density of social networks. They combine several related items into an index of "social capability", and show that this can explain significant amounts of variation in national economic growth rates.
Measuring social capital may be difficult, but it is not impossible, and several excellent studies have identified useful proxies for social capital, using different types and combinations of qualitative, comparative and quantitative research methodologies (Woolcock and Narayan, 2000).
How we measure social capital depends on how we define it. The most comprehensive definitions of social capital are multidimensional, incorporating different levels and units of analysis. Trust, civic engagement, and community involvement are generally seen as ways to measure social capital. Depending on the definition of social capital and the context, some indicators may be more appropriate than others.
Once it has been decided how social capital is to be measured, for example by measuring civic engagement through household surveys, cultural factors may be taken into account in designing the survey instrument. Newspaper readership may be a better indicator of civic engagement in Italy (Putnam, 1993) than in India because of the varying literacy rates.
Measuring social capital among the poor, particularly studying the same households over time, is difficult because the poor are often involved in informal work, may not have a long-term address or may move.
Robert D. Putnam (2000) suggested the social capital concepts of bonding and bridging. Bonding refers to relations among members of the same community, while bridging refers to relationships between members of different communities.
Influences on communities
A business "cluster" is "used to represent concentrations of firms that are able to produce synergy because of their geographic proximity and interdependence" (Rosenfeld, 1997). Steinfield et al. (2010) found that "the amount of perceived social capital significantly predicted market exposure" of company performance in a knowledge-intensive business cluster. Social capital strengthens regional production networks.
The rate of networking (defined as various forms of strategic alliances and joint ventures) generally reflects the levels of social capital and trust that exist (Rosenfeld, 1997). Robert Putnam (1993) found that the stock of social capital predicts economic performance. There is some evidence suggesting that social relationships play an important role in the survival of small businesses (Granovetter, 1984), yet the relative contribution of other factors, such as managerial skills and environmental context, is unknown.
At the institutional level, disciplinary climate and academic norms established by the school community and the mutual trust between home and school are major forms of social capital. These forms of social capital are found to contribute to student learning outcomes in East Asian countries such as Singapore, Korea, and Hong Kong. They have been shown to have a significant impact, not only on creating a learning and caring school climate, but also on improving the quality of schooling and reducing inequality of learning outcomes between social-class groups.
Information and communication technologies (ICT) affect various aspects of communities, including communication, social capital, friendships, and trust. The Internet has the most influence on communities due to its interactive nature and wide usage. According to Katz, Rice, and Aspden (2001), the "Internet has unique, even transformational qualities as a communication channel, including relative anonymity and the ability to easily link with others who have similar interests, values, and beliefs".
The Internet and computer-mediated communication support and accelerate the ways in which people operate at the centers of partial, personal communities, switching rapidly and frequently between different groups (Wellman, 1996). Internet usage is associated with both positive and negative aspects for communities.
For example, Bargh and McKenna (2004) claim that "Internet use does not appear to weaken the fabric of neighborhoods and communities". Galston (1999) claims that the Internet is "capable of promoting a kind of socialization and moral learning through mutual adjustment".
Kavanaugh and Patterson (2001) did not find that increased Internet usage increased community involvement and attachment. According to Gilleard, C. et al. (2007), “ownership and use of domestic information and communication technology reduces the sense of attachment to the local neighborhood among individuals 50 and older in England.” But they continue that “domestic information and communication technology may be more liberating of neighborhood boundedness than destructive of social capital.”
Anonymity is often mentioned in popular media as a possible cause for negative effects. But according to Bargh and McKenna (2004), anonymity is also associated with positive effects: "research has found that the relative anonymity aspect encourages self-expression, and the relative absence of physical and nonverbal interaction cues (e.g., attractiveness) facilitates the formation of relationships on other, deeper bases such as shared values and beliefs."
Pigg and Crank (2004) suggest how the Internet can facilitate interaction among members of a community. They suggest a concept of "reciprocity transaction", which implies that "one person provides something of value to another in expectation that, at some point in time, the other person will act similarly". It is suggested that ICT supports reciprocity transactions by providing social support or valuable information not available to the public, and by sharing meaning. Shared presence combined with depth of information provides shared meaning (Miranda and Saunders, 2003).
Internet usage is generally not associated with a decline in social contact. For example, Katz, Rice, and Aspden (2001) found that Internet users communicate with others through other media (especially the telephone) more than nonusers do, and that Internet use was associated with greater levels of social interaction (although this was more widely dispersed). They claim that "Internet use does not appear to weaken the fabric of neighborhoods and communities." Ellison, Steinfield and Lampe (2007) claim that online interactions do not necessarily remove people from their offline world, but support relationships, especially when life changes move people away from each other. They say that the Internet "seems well-suited to social software applications because it enables users to maintain such ties cheaply and easily".
Internet-based communication is usually cheaper than phone-, fax-, and letter-based communication, and is regarded as a cheap way to keep up with family and friends abroad (Foley, 2004) and with business contacts (e.g. Molony, 2009).
Galston (1999) suggested an approach to analyzing virtual communities based on entry and exit costs: "when barriers to leaving old groups and joining new ones are relatively low, exit will tend to be the preferred option; as these costs rise, the exercise of voice becomes more likely." He suggested that "exit [from community] will be the predominant response to dissatisfaction". Also, "virtual communities do not promote the development of voice; because they emphasize personal choice, they do not acknowledge the need for authority", and do not foster mutual obligation.
Influences on family, friends and neighbors
Positive effects of Internet usage on relationships between family members and friends have been found. For example, Bargh and McKenna (2004) wrote that the "Internet, mainly through e-mail, has facilitated communication and thus close ties between family and friends, especially those too far away to visit in person on a regular basis".
ICT helps to create friendships. “When Internet-formed relationships get close enough (i.e., when sufficient trust has been established), people tend to bring them into their “real world”—that is, the traditional face-to-face and telephone interaction sphere” (Bargh, McKenna, 2004.) “Internet facilitates new connections, in that it provides people with an alternative way to connect with others who share their interests or relational goals” (Ellison, Heino, & Gibbs, 2006).
Cummings, Lee and Kraut (2006) found that, for students who move off to college, "communicating with these friends prevents the relationships from declining as swiftly as they otherwise would. Communication seems to inject energy into a relationship and prevents it from going dormant." Email and instant messaging are found to be especially useful.
Hampton and Wellman (2001) found that, in a wired community, many neighbors got to know each other better through the use of a local computer network. But according to Katz (2001), “use of the Internet per se is not associated with different levels of awareness of one's neighbors”.
Influences on social network
Social networks play an increasingly large role for Internet users. According to Castells (1999), "social networks substitute for communities, with locally based communities being one of the many possible alternatives for the creation and maintenance of social networks, and the Internet providing another such alternative."
Social networks provide possibilities to create new relationships, and to maintain existing ones. According to Lampe, Ellison, Steinfield (2007), users of a popular social network Facebook mainly use the network to learn more about people they meet offline, and are less inclined to initiate new connections: “Facebook members seem to be using Facebook as a surveillance tool for maintaining previous relationships, and as a “social search” tool by which they investigate people they've met offline”.
Connections formed online sometimes are transformed to off-line personal relationships. Parks and Floyd (1996) report that 60% of their random sample “reported that they had formed a personal relationship of some kind with someone they had first contacted through a newsgroup”, and that “relationships that begin on line rarely stay there”.
Privacy issues are commonly reported in popular media. According to Gross and Acquisti (2005), “many individuals in a person's online extended network would hardly be defined as actual friends by that person; in fact many may be complete strangers. And yet, personal and often sensitive information is freely and publicly provided.” Therefore, users potentially expose themselves to physical and cyber risks.
Influences on social capital
Internet usage can cause multiple effects on social capital, and its effects are not yet clear. For example, Pigg and Crank (2004) suggest that studies of the relationship between online networks and social capital are still too much in their infancy to reach any useful conclusions. Although it is generally thought that the Internet affects social capital, the "mechanisms are unclear" (Hampton, Wellman, 1999).
Internet usage can both increase and decrease social capital: “people engage in social and asocial activities when online” (Hampton, Wellman, 1999).
For example, Nie (2001) claims that social capital can be decreased: "Internet use may actually reduce interpersonal interaction and communication". He also claims that "Internet users do not become more sociable; rather, they already display a higher degree of social connectivity and participation". Hampton and Wellman (1999) claim that "increased connectivity and involvement not only can expose people to more contact and more information, it can reduce commitment to community", because "immersiveness can turn people away from community".
Some researchers claim that social capital can be increased by Internet usage. For example, Ellison, Heino, & Gibbs (2006) claim that “Internet facilitates new connections, in that it provides people with an alternative way to connect with others who share their interests or relational goals”. Hampton, Wellman (1999) state that Internet supplements network capital “by extending existing levels of face-to-face and telephone contact.”
Reduction of communication costs increases the frequency and duration of communication, and strengthens both the bonding and bridging forms of social capital.
The Net is particularly suited to the development of multiple weak ties (Castells, 1999), thus expanding sociability beyond the socially defined boundaries of self-recognition. The Internet supports weak ties between individuals, which can be the foundation for bridging social capital (Ellison, Steinfield, Lampe, 2007). Resnick (2001) suggests that with the help of new technologies (e.g. distribution lists, photo directories, search) new forms of social capital occur in online social network sites. Ellison, Steinfield and Lampe (2007) suggest that intensity of Facebook use is positively associated with individuals' perceived bridging social capital: for undergraduate students, there is a "strong association between use of Facebook and the three types of social capital, with the strongest relationship being to bridging social capital".
According to Williams (2006), because of the low costs of communication, there might be more of the bridging function online than offline. "The social capital created by these networks generates broader identities and generalized reciprocity". Williams (2006) suggested the Internet Social Capital Scales (ISCS) to measure social capital bridging and bonding. Ellison, Steinfield and Lampe (2007) assessed social capital bonding using the ISCS, and found that "Facebook is indeed implicated in students' efforts to develop and maintain bridging social capital at college, although we cannot assess causal direction."
Intensity of Facebook use was positively associated with individuals' perceived bonding social capital (Ellison, Steinfield and Lampe, 2007). But they also found that bonding social capital was predicted by high self-esteem and satisfaction with university life as well as by use of Facebook. Therefore, high self-esteem and satisfaction with university life are likely causes of both perceived bonding social capital and heavier Facebook use.
Friends use the Internet to maintain ties. “Internet is particularly useful for keeping contact among friends who are socially and geographically dispersed. ... Distance still matters: communication is lower with distant than nearby friends” (Hampton, Wellman, 1999).
See also
Online community
Tribe (internet)
Online participation
Social web
Notes
References
Andrade, A. E., 2009. The Value of Extended Networks: Social Capital in an ICT Intervention in Rural Peru. Information Technology for Development, Vol. 15 (2), pp. 108–132.
Bargh, J. A., McKenna, J. Y. A., 2004. The Internet and Social Life. Annual Review of Psychology, Vol. 55, pp. 573–590.
Benkler, Y., 2006. The Wealth of Networks. Yale University Press. London.
Castells, M., 2010. The Information Age: Economy, Society and Culture. Volume I: The Rise of the Network Society. John Wiley & Sons Ltd.
Ellison, N.B., Steinfield, C., Lampe, C., 2007. The Benefits of Facebook "Friends": Social Capital and College Students' Use of Online Social Network Sites. Journal of Computer-Mediated Communication, Vol. 12, pp. 1143–1168.
Encyclopædia Britannica Inc., 2011. Encyclopædia Britannica Online, Accessed 5 February 2011.
Foley, P., 2004. Does the Internet help to overcome social exclusion? Electronic Journal of e-government, pp. 139–146.
Freitag, M. (2003). Beyond Tocqueville: The Origins of Social Capital in Switzerland. European Sociological Review, Vol. 19, No. 2, pp. 217 – 232.
Galston, W. A., 2000. Does the Internet Strengthen Community? National Civic Review, Vol. 89, No. 3, pp. 193–202.
Gilleard, C., et al. Community and Communication in the Third Age: The Impact of Internet and Cell Phone Use on Attachment to Place in Later Life in England. The Journals of Gerontology: Series B, Vol. 62, Issue 4.
Granovetter, Mark. 1984. Small is bountiful: labor markets and establishment size. American Sociological Review, Vol. 49, pp. 323–334.
Gross, R., Acquisti, A., 2005. Information Revelation and Privacy in Online Social Networks. Workshop on Privacy in the Electronic Society (WPES), 2005.
Hampton, K., Wellman, B., 1999. Netville Online and Offline: Observing and Surveying a Wired Suburb. American Behavioral Scientist, Vol. 43, Issue 3, pp. 475–492.
Jahnke, I., 2009. Dynamics of social roles in a knowledge management community. Computers in Human Behavior, Vol. 26, pp. 533–546.
Katz, J.E., Rice, R.E., Aspden, P., 2001. The Internet, 1995-2000. American Behavioral Scientist, Vol. 45, No. 3, pp. 405–419.
Kavanaugh, A.L., Patterson, S.J., 2001. The Impact of Community Computer Networks on Social Capital and Community Involvement. American Behavioral Scientist, Vol. 45, pp. 496–509.
Knack, Stephen & Keefer, Philip, 1997. Does Social Capital Have an Economic Payoff? A Cross-Country Investigation, The Quarterly Journal of Economics, MIT Press, vol. 112(4), pages 1251-88, November.
Lampe, C., Ellison N., Steinfield, C., 2006. A Face(book) in the Crowd: Social Searching vs. Social Browsing'. CSCW'06, November 4–8, 2006
Miranda, S. M., Saunders, C. S., 2003. The Social Construction of Meaning: An Alternative Perspective on Information Sharing. Information Systems Research, Vol. 14, Issue 1, pp. 87–106.
Molony, T., 2009. Carving a Niche: ICT, Social Capital, and Trust in the Shift from Personal to Impersonal Trading, Information Technology for Development, Vol. 15 (4), pp. 283–301.
Narayan, 1997. Voices of the Poor, Poverty and Social Capital in Tanzania. World Bank, Washington D.C., USA.
Nie, N. H., 2001. Sociability, Interpersonal Relations, and the Internet. American Behavioral Scientist, Vol. 45, No. 3, pp. 420–435.
Oxford English Dictionary, 2011, Accessed 5 February 2011.
Parks, M. R., & Floyd, K., 1996. Making friends in cyberspace. Journal of Computer-Mediated Communication, Vol. 1, Issue 4.
Pigg, K.E., Crank, L. D., 2004. Building Community Social Capital: The Potential and Promise of Information and Communications Technologies. The Journal of Community Informatics, (2004), Vol. 1, Issue 1, pp. 58–73.
Preece, J. (2000) Online communities: Designing usability, supporting sociability. Wiley.
Putnam, R., 1993. Making Democracy Work Civic Traditions in Modern Italy. Princeton Press.
Putnam, R. D., 2000. Bowling Alone: The Collapse and Revival of American Community. Simon & Schuster.
Rosenfeld, A. A., 1997. Bringing Business Clusters into the Mainstream of Economic Development. European Planning Studies, Vol. 5, No. 1.
Steinfield, C., et al., 2010. Social capital, ICT use and company performance: Findings from the Medicon Valley Biotech Cluster. Technological Forecasting & Social Change, Vol. 77, pp. 1156–1166.
Tapscott, D. (2007). Wikinomics: How mass collaboration changes everything. Atlantic books. London.
Wellman, B., et al., 2001. Does the Internet Increase, Decrease, or Supplement Social Capital? - Social Networks, Participation, and Community Commitment. American Behavioral Scientist, Vol. 45, pp. 436–455.
Internet culture
Virtual communities
Community building
Social influence
Social information processing
Social software
Hyperreality | Internet influences on communities | Technology | 5,211 |
46,932,491 | https://en.wikipedia.org/wiki/Pappus%27s%20area%20theorem | Pappus's area theorem describes the relationship between the areas of three parallelograms attached to three sides of an arbitrary triangle. The theorem, which can also be thought of as a generalization of the Pythagorean theorem, is named after the Greek mathematician Pappus of Alexandria (4th century AD), who discovered it.
Theorem
Given an arbitrary triangle with two arbitrary parallelograms attached to two of its sides the theorem tells how to construct a parallelogram over the third side, such that the area of the third parallelogram equals the sum of the areas of the other two parallelograms.
Let ABC be the arbitrary triangle and ABDE and ACFG the two arbitrary parallelograms attached to the triangle sides AB and AC. The extended parallelogram sides DE and FG intersect at H. The line segment AH now "becomes" the side of the third parallelogram BCML attached to the triangle side BC, i.e., one constructs line segments BL and CM over BC, such that BL and CM are parallel and equal in length to AH. The following identity then holds for the areas (denoted by A) of the parallelograms: $A_{ABDE} + A_{ACFG} = A_{BCML}$.
The theorem generalizes the Pythagorean theorem twofold. Firstly it works for arbitrary triangles rather than only for right angled ones and secondly it uses parallelograms rather than squares. For squares on two sides of an arbitrary triangle it yields a parallelogram of equal area over the third side and if the two sides are the legs of a right angle the parallelogram over the third side will be square as well. For a right-angled triangle, two parallelograms attached to the legs of the right angle yield a rectangle of equal area on the third side and again if the two parallelograms are squares then the rectangle on the third side will be a square as well.
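As a sanity check of the identity, the following sketch verifies it numerically for one concrete triangle; the coordinates and parallelogram offsets are invented, and the point H is computed as the intersection described above.

```scala
// Numeric check of Pappus's area theorem (coordinates invented).
type Pt = (Double, Double)
def sub(a: Pt, b: Pt): Pt = (a._1 - b._1, a._2 - b._2)
def cross(a: Pt, b: Pt): Double = a._1 * b._2 - a._2 * b._1
// Unsigned area of the parallelogram spanned by edge vectors u and v.
def parArea(u: Pt, v: Pt): Double = math.abs(cross(u, v))

val a: Pt = (0.0, 0.0); val b: Pt = (4.0, 0.0); val c: Pt = (1.0, 3.0)
val u = sub(b, a) // side AB
val v = sub(c, a) // side AC
val dAB: Pt = (0.0, -2.0) // second side of ABDE, pointing away from the triangle
val dAC: Pt = (-2.0, 1.0) // second side of ACFG, pointing away from the triangle

// H = intersection of line DE (through A+dAB, direction AB) with
//     line FG (through A+dAC, direction AC): solve dAB + s*u = dAC + t*v for s.
val w = sub(dAC, dAB)
val s = cross(v, w) / cross(v, u)
val ah: Pt = (dAB._1 + s * u._1, dAB._2 + s * u._2) // vector AH

val lhs = parArea(u, dAB) + parArea(v, dAC) // areas of ABDE and ACFG: 8 + 7
val rhs = parArea(sub(c, b), ah)            // area of BCML: 15
assert(math.abs(lhs - rhs) < 1e-9)          // the identity holds
```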
Proof
Due to having the same base length and height, the parallelograms ABDE and ABUH have the same area; the same argument applies to the parallelograms ACFG and ACVH, and to the pairs ABUH and BLQR, ACVH and RCMQ. This already yields the desired result, as we have: $A_{ABDE} + A_{ACFG} = A_{ABUH} + A_{ACVH} = A_{BLQR} + A_{RCMQ} = A_{BCML}$.
References
Howard Eves: Pappus's Extension of the Pythagorean Theorem. The Mathematics Teacher, Vol. 51, No. 7 (November 1958), pp. 544–546
Howard Eves: Great Moments in Mathematics (before 1650). Mathematical Association of America, 1983, p. 37
Eli Maor: The Pythagorean Theorem: A 4,000-year History. Princeton University Press, 2007, pp. 58–59
Claudi Alsina, Roger B. Nelsen: Charming Proofs: A Journey Into Elegant Mathematics. MAA, 2010, pp. 77–78
External links
The Pappus Area Theorem
Pappus theorem
Area
Articles containing proofs
Equations
Euclidean plane geometry
Theorems about triangles | Pappus's area theorem | Physics,Mathematics | 634 |
7,667,249 | https://en.wikipedia.org/wiki/Capability-based%20operating%20system | Capability-based operating system generally refers to an operating system that uses capability-based security.
Examples include:
Hydra
KeyKOS
EROS
Midori
seL4
Genode
Fuchsia
HarmonyOS (Microkernel) (HarmonyOS NEXT)
Phantom OS
Control Program Facility
Capability systems
Operating system security | Capability-based operating system | Technology | 60 |
39,975,515 | https://en.wikipedia.org/wiki/Angelic%20non-determinism | In computer science, angelic non-determinism is the execution of a nondeterministic algorithm where particular choices are declared to always favor a desired result, if that result is possible.
For example, in halting analysis of a Nondeterministic Turing machine, the choices would always favor termination of the program.
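As a toy illustration (not from the source), angelic choice can be modeled as an existential search in which a choice point succeeds whenever some alternative makes the rest of the computation succeed; the sketch below uses invented names and constraints.

```scala
// Angelic choice as backtracking search: `choose` "succeeds" exactly when
// some alternative makes the continuation k succeed.
def choose[A](alts: Seq[A])(k: A => Boolean): Boolean = alts.exists(k)

// Angelically pick x and y with x + y == 10 and x * y == 21.
val satisfiable = choose(1 to 9) { x =>
  choose(1 to 9) { y => x + y == 10 && x * y == 21 } // e.g. x = 3, y = 7
}
// satisfiable == true: the "angel" resolves both choices in favor of success.
```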
The "angelic" terminology comes from the Christian religious conventions of angels being benevolent and acting on behalf of an omniscient God.
References
Theoretical computer science | Angelic non-determinism | Mathematics | 101 |
160,832 | https://en.wikipedia.org/wiki/Tunnel | A tunnel is an underground or undersea passageway. It is dug through surrounding soil, earth or rock, or laid under water, and is usually completely enclosed except for the two portals common at each end, though there may be access and ventilation openings at various points along the length. A pipeline differs significantly from a tunnel, though some recent tunnels have used immersed tube construction techniques rather than traditional tunnel boring methods.
A tunnel may be for foot or vehicular road traffic, for rail traffic, or for a canal. The central portions of a rapid transit network are usually in the tunnel. Some tunnels are used as sewers or aqueducts to supply water for consumption or for hydroelectric stations. Utility tunnels are used for routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment.
Secret tunnels are built for military purposes, or by civilians for smuggling of weapons, contraband, or people. Special tunnels, such as wildlife crossings, are built to allow wildlife to cross human-made barriers safely. Tunnels can be connected together in tunnel networks.
A tunnel is relatively long and narrow; the length is often much greater than twice the diameter, although similar shorter excavations can be constructed, such as cross passages between tunnels. The definition of what constitutes a tunnel can vary widely from source to source. For example, in the United Kingdom, a road tunnel is defined as "a subsurface highway structure enclosed for a length of or more." In the United States, the NFPA definition of a tunnel is "An underground structure with a design length greater than and a diameter greater than ."
Etymology
The word "tunnel" comes from the Middle English tonnelle, meaning "a net", derived from Old French tonnel, a diminutive of tonne ("cask"). The modern meaning, referring to an underground passageway, evolved in the 16th century as a metaphor for a narrow, confined space like the inside of a cask.
History
The first artificial tunnel is believed to have been constructed in Babylon, about 2200 B.C. Built with the aid of the cut-and-cover technique, it joined the temple of Belos with the palace.
In the Mahabharata, the Pandavas escaped through a secret tunnel built within their new home, called "Lakshagriha" (House of Lac), which had been constructed by Purochana under the orders of Duryodhana with the intention of burning them alive inside. This act of foresight by the Pandavas saved their lives when the palace was set on fire.
Some of the earliest tunnels used by humans were paleoburrows excavated by prehistoric mammals.
Much of the early technology of tunnelling evolved from mining and military engineering. The etymology of the terms "mining" (for mineral extraction or for siege attacks), "military engineering", and "civil engineering" reveals these deep historic connections.
Antiquity and early middle ages
Predecessors of modern tunnels were adits that transported water for irrigation, drinking, or sewerage. The first qanats are known from before 2000 BC.
The earliest tunnel known to have been excavated from both ends is the Siloam Tunnel, built in Jerusalem by the kings of Judah around the 8th century BC. Another tunnel excavated from both ends, perhaps the second known, is the Tunnel of Eupalinos, an aqueduct tunnel running through Mount Kastro in Samos, Greece. It was built in the 6th century BC to serve as an aqueduct.
In Pakistan, a Mughal-era tunnel has been restored in Lahore.
In Ethiopia, the Siqurto foot tunnel, hand-hewn in the Middle Ages, crosses a mountain ridge.
In the Gaza Strip, a network of rock-cut tunnels served as shelters for Jewish fighters, first linked to Judean resistance against Roman rule in the Bar Kokhba revolt during the 2nd century AD.
Geotechnical investigation and design
A major tunnel project must start with a comprehensive investigation of ground conditions by collecting samples from boreholes and by other geophysical techniques. An informed choice can then be made of machinery and methods for excavation and ground support, which will reduce the risk of encountering unforeseen ground conditions. In planning the route, the horizontal and vertical alignments can be selected to make use of the best ground and water conditions. It is common practice to locate a tunnel deeper than otherwise would be required, in order to excavate through solid rock or other material that is easier to support during construction.
Conventional desk and preliminary site studies may yield insufficient information to assess such factors as the blocky nature of rocks, the exact location of fault zones, or the stand-up times of softer ground. This may be a particular concern in large-diameter tunnels. To give more information, a pilot tunnel (or "drift tunnel") may be driven ahead of the main excavation. This smaller tunnel is less likely to collapse catastrophically should unexpected conditions be met, and it can be incorporated into the final tunnel or used as a backup or emergency escape passage. Alternatively, horizontal boreholes may sometimes be drilled ahead of the advancing tunnel face.
Other key geotechnical factors:
Stand-up time is the amount of time a newly excavated cavity can support itself without any added structures. Knowing this parameter allows the engineers to determine how far an excavation can proceed before support is needed, which in turn affects the speed, efficiency, and cost of construction. Generally, certain configurations of rock and clay will have the greatest stand-up time, while sand and fine soils will have a much lower stand-up time.
Groundwater control is very important in tunnel construction. Water leaking into a tunnel or vertical shaft will greatly decrease stand-up time, causing the excavation to become unstable and risking collapse. The most common way to control groundwater is to install dewatering pipes into the ground and to simply pump the water out. A very effective but expensive technology is ground freezing, using pipes which are inserted into the ground surrounding the excavation, which are then cooled with special refrigerant fluids. This freezes the ground around each pipe until the whole space is surrounded with frozen soil, keeping water out until a permanent structure can be built.
Tunnel cross-sectional shape is also very important in determining stand-up time. If a tunnel excavation is wider than it is high, it will have a harder time supporting itself, decreasing its stand-up time. A square or rectangular excavation is more difficult to make self-supporting, because of a concentration of stress at the corners.
Choice of tunnels versus bridges
For water crossings, a tunnel is generally more costly to construct than a bridge. However, both navigational and traffic considerations may limit the use of high bridges or drawbridges intersecting with shipping channels, necessitating a tunnel.
Bridges usually require a larger footprint on each shore than tunnels. In areas with expensive real estate, such as Manhattan and urban Hong Kong, this is a strong factor in favor of a tunnel. Boston's Big Dig project replaced elevated roadways with a tunnel system to increase traffic capacity, hide traffic, reclaim land, redecorate, and reunite the city with the waterfront.
The 1934 Queensway Tunnel under the River Mersey at Liverpool was chosen over a massively high bridge partly for defence reasons; it was feared that aircraft could destroy a bridge in times of war, not merely impairing road traffic but blocking the river to navigation. Maintenance costs of a massive bridge to allow the world's largest ships to navigate under were considered higher than for a tunnel. Similar conclusions were reached for the 1971 Kingsway Tunnel under the Mersey. In Hampton Roads, Virginia, tunnels were chosen over bridges for strategic considerations; in the event of damage, bridges might prevent US Navy vessels from leaving Naval Station Norfolk.
Water-crossing tunnels built instead of bridges include the Seikan Tunnel in Japan; the Holland Tunnel and Lincoln Tunnel between New Jersey and Manhattan in New York City; the Queens-Midtown Tunnel between Manhattan and the borough of Queens on Long Island; the Detroit-Windsor Tunnel between Michigan and Ontario; and the Elizabeth River tunnels between Norfolk and Portsmouth, Virginia; the 1934 River Mersey road Queensway Tunnel; the Western Scheldt Tunnel, Zeeland, Netherlands; and the North Shore Connector tunnel in Pittsburgh, Pennsylvania. The Sydney Harbour Tunnel was constructed to provide a second harbour crossing and to alleviate traffic congestion on the Sydney Harbour Bridge, without spoiling the iconic view.
Other reasons for choosing a tunnel instead of a bridge include avoiding difficulties with tides, weather, and shipping during construction (as in the Channel Tunnel), aesthetic reasons (preserving the above-ground view, landscape, and scenery), and also for weight capacity reasons (it may be more feasible to build a tunnel than a sufficiently strong bridge).
Some water crossings are a mixture of bridges and tunnels, such as the Denmark to Sweden link and the Chesapeake Bay Bridge-Tunnel in Virginia.
There are particular hazards with tunnels, especially from vehicle fires when combustion gases can asphyxiate users, as happened at the Gotthard Road Tunnel in Switzerland in 2001. One of the worst railway disasters ever, the Balvano train disaster, was caused by a train stalling in the Armi tunnel in Italy in 1944, killing 426 passengers. Designers try to reduce these risks by installing emergency ventilation systems or isolated emergency escape tunnels parallel to the main passage.
Project planning and cost estimates
Government funds are often required for the creation of tunnels. When a tunnel is being planned or constructed, economics and politics play a large factor in the decision making process. Civil engineers usually use project management techniques for developing a major structure. Understanding the amount of time the project requires, and the amount of labor and materials needed is a crucial part of project planning. The project duration must be identified using a work breakdown structure and critical path method. Also, the land needed for excavation and construction staging, and the proper machinery must be selected. Large infrastructure projects require millions or even billions of dollars, involving long-term financing, usually through issuance of bonds.
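As a rough illustration of the critical path method mentioned above, here is a minimal sketch on an invented task graph; the task names, durations, and dependencies are assumptions for the example, not data from any real project.

```scala
// A toy critical-path calculation for a hypothetical tunnelling project.
case class Task(name: String, days: Int, deps: Seq[String])

val tasks = Seq(
  Task("survey",  30, Nil),
  Task("design",  60, Seq("survey")),
  Task("shaft",   90, Seq("design")),
  Task("boring", 300, Seq("shaft")),
  Task("lining", 120, Seq("boring")),
  Task("fitOut",  90, Seq("lining"))
)
val byName = tasks.map(t => t.name -> t).toMap

// Earliest finish of a task = its duration plus the latest earliest finish
// among its dependencies; the project duration is the maximum over all tasks.
def earliestFinish(name: String): Int = {
  val t = byName(name)
  t.days + t.deps.map(earliestFinish).foldLeft(0)(math.max)
}

val projectDurationDays = tasks.map(t => earliestFinish(t.name)).max // 690
```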
The costs and benefits for an infrastructure such as a tunnel must be identified. Political disputes can occur, as in 2005 when the US House of Representatives approved a $100 million federal grant to build a tunnel under New York Harbor. However, the Port Authority of New York and New Jersey was not aware of this bill and had not asked for a grant for such a project. Increased taxes to finance a large project may cause opposition.
Construction
Tunnels are dug in types of materials varying from soft clay to hard rock. The method of tunnel construction depends on such factors as the ground conditions, the groundwater conditions, the length and diameter of the tunnel drive, the depth of the tunnel, the logistics of supporting the tunnel excavation, the final use and the shape of the tunnel and appropriate risk management.
There are three basic types of tunnel construction in common use. Cut-and-cover tunnels are constructed in a shallow trench and then covered over. Bored tunnels are constructed in situ, without removing the ground above. Finally, a tube can be sunk into a body of water, which is called an immersed tunnel.
Cut-and-cover
Cut-and-cover is a simple method of construction for shallow tunnels where a trench is excavated and roofed over with an overhead support system strong enough to carry the load of what is to be built above the tunnel.
There are two basic forms of cut-and-cover tunnelling:
Bottom-up method: A trench is excavated, with ground support as necessary, and the tunnel is constructed in it. The tunnel may be of in situ concrete, precast concrete, precast arches, or corrugated steel arches; in early days brickwork was used. The trench is then carefully back-filled and the surface is reinstated.
Top-down method: Side support walls and capping beams are constructed from ground level by such methods as slurry walling or contiguous bored piling. Only a shallow excavation is needed to construct the tunnel roof using precast beams or in situ concrete sitting on the walls. The surface is then reinstated except for access openings. This allows early reinstatement of roadways, services, and other surface features. Excavation then takes place under the permanent tunnel roof, and the base slab is constructed.
Shallow tunnels are often of the cut-and-cover type (if under water, of the immersed-tube type), while deep tunnels are excavated, often using a tunnelling shield. For intermediate levels, both methods are possible.
Large cut-and-cover boxes are often used for underground metro stations, such as Canary Wharf tube station in London. This construction form generally has two levels, which allows economical arrangements for ticket hall, station platforms, passenger access and emergency egress, ventilation and smoke control, staff rooms, and equipment rooms. The interior of Canary Wharf station has been likened to an underground cathedral, owing to the sheer size of the excavation. This contrasts with many traditional stations on London Underground, where bored tunnels were used for stations and passenger access. Nevertheless, the original parts of the London Underground network, the Metropolitan and District Railways, were constructed using cut-and-cover. These lines pre-dated electric traction and the proximity to the surface was useful to ventilate the inevitable smoke and steam.
A major disadvantage of cut-and-cover is the widespread disruption generated at the surface level during construction. This, and the availability of electric traction, brought about London Underground's switch to bored tunnels at a deeper level towards the end of the 19th century.
Prior to the replacement of manual excavation by the use of boring machines, Victorian tunnel excavators developed a specialized method called clay-kicking for digging tunnels in clay-based soils. The clay-kicker lies on a plank at a 45-degree angle away from the working face and, rather than swinging a mattock with his hands, inserts with his feet a tool with a cup-like rounded end; he then turns the tool with his hands to extract a section of soil, which is then placed on the waste extract.
Clay-kicking is a specialized method developed in the United Kingdom of digging tunnels in strong clay-based soil structures. This method of cut and cover construction required relatively little disturbance of property during the renewal of the United Kingdom's then ancient sewerage systems. It was also used during the First World War by Royal Engineer tunnelling companies placing mines beneath German lines, because it was almost silent and so not susceptible to listening methods of detection.
Boring machines
Tunnel boring machines (TBMs) and associated back-up systems are used to highly automate the entire tunnelling process, reducing tunnelling costs. In certain predominantly urban applications, tunnel boring is viewed as a quick and cost-effective alternative to laying surface rails and roads. Expensive compulsory purchase of buildings and land, with potentially lengthy planning inquiries, is eliminated. Disadvantages of TBMs arise from their usually large size – the difficulty of transporting the large TBM to the site of tunnel construction, or (alternatively) the high cost of assembling the TBM on-site, often within the confines of the tunnel being constructed.
There are a variety of TBM designs that can operate in a variety of conditions, from hard rock to soft water-bearing ground. Some TBMs, the bentonite slurry and earth-pressure balance types, have pressurized compartments at the front end, allowing them to be used in difficult conditions below the water table. This pressurizes the ground ahead of the TBM cutter head to balance the water pressure. The operators work in normal air pressure behind the pressurized compartment, but may occasionally have to enter that compartment to renew or repair the cutters. This requires special precautions, such as local ground treatment or halting the TBM at a position free from water. Despite these difficulties, TBMs are now preferred over the older method of tunnelling in compressed air, with an airlock/decompression chamber some way back from the TBM, which required operators to work in high pressure and go through decompression procedures at the end of their shifts, much like deep-sea divers.
In February 2010, Aker Wirth delivered a TBM to Switzerland, for the expansion of the Linth–Limmern Power Stations located south of Linthal in the canton of Glarus. The borehole has a diameter of . The four TBMs used for excavating the Gotthard Base Tunnel, in Switzerland, had a diameter of about . A larger TBM was built to bore the Green Heart Tunnel (Dutch: Tunnel Groene Hart) as part of the HSL-Zuid in the Netherlands, with a diameter of . This in turn was superseded by the Madrid M30 ringroad, Spain, and the Chong Ming tunnels in Shanghai, China. All of these machines were built at least partly by Herrenknecht. , the world's largest TBM was "Big Bertha", a diameter machine built by Hitachi Zosen Corporation, which dug the Alaskan Way Viaduct replacement tunnel in Seattle, Washington (US).
Shafts
A temporary access shaft is sometimes necessary during the excavation of a tunnel. They are usually circular and go straight down until they reach the level at which the tunnel is going to be built. A shaft normally has concrete walls and is usually built to be permanent. Once the access shafts are complete, TBMs are lowered to the bottom and excavation can start. Shafts are the main entrance in and out of the tunnel until the project is completed. If a tunnel is going to be long, multiple shafts at various locations may be bored so that entrance to the tunnel is closer to the unexcavated area.
Once construction is complete, construction access shafts are often used as ventilation shafts, and may also be used as emergency exits.
Sprayed concrete techniques
The new Austrian tunnelling method (NATM)—also referred to as the Sequential Excavation Method (SEM)—was developed in the 1960s.
The main idea of this method is to use the geological stress of the surrounding rock mass to stabilize the tunnel, by allowing a measured relaxation and stress reassignment into the surrounding rock to prevent full loads becoming imposed on the supports. Based on geotechnical measurements, an optimal cross section is computed. The excavation is protected by a layer of sprayed concrete, commonly referred to as shotcrete. Other support measures can include steel arches, rock bolts, and mesh. Technological developments in sprayed concrete technology have resulted in steel and polypropylene fibers being added to the concrete mix to improve lining strength. This creates a natural load-bearing ring, which minimizes the rock's deformation.
Thanks to continuous monitoring, the NATM method is flexible even when the geomechanical consistency of the rock changes unexpectedly during tunnelling. The measured rock properties guide the choice of appropriate tools for tunnel strengthening.
Pipe jacking
In pipe jacking, hydraulic jacks are used to push specially made pipes through the ground behind a TBM or shield. This method is commonly used to create tunnels under existing structures, such as roads or railways. Tunnels constructed by pipe jacking are normally small diameter bores with a maximum size of around .
Box jacking
Box jacking is similar to pipe jacking, but instead of jacking tubes, a box-shaped tunnel is used. Jacked boxes can be a much larger span than a pipe jack, with the span of some box jacks in excess of . A cutting head is normally used at the front of the box being jacked, and spoil removal is normally by excavator from within the box. Recent developments of the Jacked Arch and Jacked deck have enabled longer and larger structures to be installed to close accuracy.
Underwater tunnels
There are also several approaches to underwater tunnels, the two most common being bored tunnels and immersed tubes; examples include the Bjørvika Tunnel and Marmaray. Submerged floating tunnels are a novel approach under consideration; however, no such tunnels have been constructed to date.
Temporary way
During construction of a tunnel it is often convenient to install a temporary railway, particularly to remove excavated spoil. This temporary railway is often narrow gauge so that it can be double track, allowing empty and loaded trains to operate at the same time. The temporary way is replaced by the permanent way at completion, hence the railway term "perway" (permanent way).
Enlargement
The vehicles or traffic using a tunnel can outgrow it, requiring replacement or enlargement:
The original single line Gib Tunnel near Mittagong was replaced with a double-track tunnel, with the original tunnel used for growing mushrooms.
The 1832 double-track -long tunnel from Edge Hill to Lime Street in Liverpool was almost totally removed, apart from a section at Edge Hill and a section nearer to Lime Street, as four tracks were required. The tunnel was dug out into a very deep four-track cutting, with short tunnels in places along the cutting. Train services were not interrupted as the work progressed. There are other occurrences of tunnels being replaced by open cuts, for example, the Auburn Tunnel.
The Farnworth Tunnel in England was enlarged using a tunnel boring machine (TBM) in 2015. The Rhyndaston Tunnel was enlarged using a borrowed TBM so as to be able to take ISO containers.
Tunnels can also be enlarged by lowering the floor.
Open building pit
An open building pit consists of a horizontal and a vertical boundary that keeps groundwater and soil out of the pit. There are several potential alternatives and combinations for (horizontal and vertical) building pit boundaries. The most important difference from cut-and-cover is that the open building pit is left open during construction rather than being covered by a roof; it is backfilled only after tunnel construction.
Other construction methods
Drilling and blasting
Hydraulic splitter
Slurry-shield machine
Wall-cover construction method
Variant tunnel types
Double-deck and multipurpose tunnels
Some tunnels are double-deck, for example, the two major segments of the San Francisco–Oakland Bay Bridge (completed in 1936) are linked by a double-deck tunnel section through Yerba Buena Island, the largest-diameter bored tunnel in the world. At construction this was a combination bidirectional rail and truck pathway on the lower deck with automobiles above, now converted to one-way road vehicle traffic on each deck.
In Turkey, the Eurasia Tunnel under the Bosphorus, opened in 2016, has at its core a two-deck road tunnel with two lanes on each deck.
Additionally, in 2015 the Turkish government announced that it would build a three-level tunnel, also under the Bosphorus. The tunnel is intended to carry both the Istanbul metro and a two-level highway, over a length of .
The French A86 Duplex tunnel in west Paris consists of two bored tunnel tubes, the eastern one of which has two levels for light motorized vehicles, over a length of . Although each level offers a physical height of , only traffic up to tall is allowed in this tunnel tube, and motorcyclists are directed to the other tube. Each level was built with a three-lane roadway, but only two lanes per level are used – the third serves as a hard shoulder within the tunnel. The A86 Duplex is Europe's longest double-deck tunnel.
In Shanghai, China, a two-tube double-deck tunnel was built starting in 2002. In each tube, both decks are for motor vehicles. In each direction, only cars and taxis travel on the two-lane upper deck, while heavier vehicles such as trucks and buses, as well as cars, may use the single-lane lower deck.
In the Netherlands, a two-storey, eight-lane, cut-and-cover road tunnel under the city of Maastricht was opened in 2016. Each level accommodates a full height, two by two-lane highway. The two lower tubes of the tunnel carry the A2 motorway, which originates in Amsterdam, through the city; and the two upper tubes take the N2 regional highway for local traffic.
The Alaskan Way Viaduct replacement tunnel is a $3.3 billion double-decker bored highway tunnel under Downtown Seattle. Construction began in July 2013 using "Bertha", at the time the world's largest earth pressure balance tunnel boring machine, with a cutterhead diameter of . After several delays, tunnel boring was completed in April 2017, and the tunnel opened to traffic on 4 February 2019.
New York City's 63rd Street Tunnel under the East River, between the boroughs of Manhattan and Queens, was intended to carry subway trains on the upper level and Long Island Rail Road commuter trains on the lower level. Construction started in 1969, and the two sides of the tunnel were bored through in 1972. The upper level, used by the IND 63rd Street Line () of the New York City Subway, was not opened for passenger service until 1989. The lower level, intended for commuter rail, saw passenger service after completion of the East Side Access project, in late 2022.
In the UK, the 1934 Queensway Tunnel under the River Mersey between Liverpool and Birkenhead was originally to have road vehicles running on the upper deck and trams on the lower. The tram plans were cancelled during construction, and the lower section is now used only for cables, pipes and emergency refuge enclosures.
Hong Kong's Lion Rock Tunnel, built in the mid-1960s and connecting New Kowloon and Sha Tin, carries a motorway but also serves as an aqueduct, featuring a gallery containing five water mains below the road section of the tunnel.
Wuhan's Yangtze River Highway and Railway Tunnel is a two-tube double-deck tunnel under the Yangtze River, completed in 2018. Each tube carries three lanes of local traffic on the top deck, with one track of Wuhan Metro Line 7 on the lower deck.
Mount Baker Tunnel has three levels. The bottom level is to be used by Sound Transit light rail, the middle level is used by car traffic, and the top level is for bicycle and pedestrian access.
Some tunnels have more than one purpose. The SMART Tunnel in Malaysia is the first multipurpose "Stormwater Management And Road Tunnel" in the world, created to convey both traffic and occasional flood waters in Kuala Lumpur. When necessary, floodwater is first diverted into a separate bypass tunnel located underneath the double-deck roadway tunnel; in this scenario, traffic continues normally. Only during heavy, prolonged rains, when the threat of extreme flooding is high, is the upper tunnel tube closed off to vehicles and the automated flood control gates opened so that water can be diverted through both tunnels.
Covered passageways
Over-bridges can sometimes be built by covering a road, river or railway with brick or steel arches and then levelling the surface with earth. In railway parlance, a surface-level track which has been built over or covered in this way is normally called a "covered way".
Snow sheds are a kind of artificial tunnel built to protect a railway from avalanches of snow. Similarly the Stanwell Park, New South Wales "steel tunnel", on the Illawarra railway line, protects the line from rockfalls.
Underpass
An underpass is a road or railway or other passageway passing under another road or railway, under an overpass. This is not strictly a tunnel.
Utility tunnel
A utility tunnel is built to carry one or more utilities in the same space; for this reason such tunnels are also referred to as multi-utility tunnels (MUTs). By co-locating different utilities in one tunnel, organizations can reduce the financial and environmental costs of building and maintaining them. These tunnels can be used for many types of utilities, routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment.
Safety and security
Owing to the enclosed space of a tunnel, fires can have very serious effects on users. The main dangers are gas and smoke production, with even low concentrations of carbon monoxide being highly toxic. For example, the Gotthard tunnel fire of 2001 killed 11 people, all of whom succumbed to smoke and gas inhalation. Over 400 passengers died in the Balvano train disaster in Italy in 1944, when the locomotive halted in a long tunnel; carbon monoxide poisoning was the main cause of death. In the Caldecott Tunnel fire of 1982, the majority of fatalities were caused by toxic smoke rather than by the initial crash. Likewise, 84 people were killed in the Paris Métro train fire of 1904.
Motor vehicle tunnels usually require ventilation shafts and powered fans to remove toxic exhaust gases during routine operation.
Rail tunnels usually require fewer air changes per hour, but still may require forced-air ventilation. Both types of tunnels often have provisions to increase ventilation under emergency conditions, such as a fire. Although there is a risk of increasing the rate of combustion through increased airflow, the primary focus is on providing breathable air to persons trapped in the tunnel, as well as firefighters.
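For a sense of scale of "air changes per hour", here is a rough illustrative calculation; the tunnel volume and fan capacity are assumed round numbers, not figures from the text:

\[ \text{ACH} = \frac{500\ \mathrm{m^3/s} \times 3600\ \mathrm{s/h}}{100{,}000\ \mathrm{m^3}} = 18\ \text{air changes per hour}. \]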
The aerodynamic pressure wave produced by a high-speed train entering a tunnel reflects at the tunnel's open ends and changes sign (a compression wavefront becomes a rarefaction wavefront and vice versa). When two wavefronts of the same sign meet the train, the significant and rapid change in air pressure may cause ear discomfort for passengers and crew. When a high-speed train exits a tunnel, a loud "tunnel boom" may occur, which can disturb residents near the mouth of the tunnel; the effect is exacerbated in mountain valleys where the sound can echo.
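A minimal acoustics sketch of why the wavefront changes sign at a portal, treating the portal as an ideal open end (an idealization assumed here for illustration, not stated in the text): the pressure reflection coefficient at a termination of impedance \(Z_t\) seen from a duct of characteristic impedance \(Z_0\) is

\[ R = \frac{Z_t - Z_0}{Z_t + Z_0}, \qquad Z_t \ll Z_0 \;\Rightarrow\; R \approx -1, \]

so the reflected wave satisfies \(p_r \approx -p_i\): an arriving compression returns as a rarefaction, and vice versa.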
When there is a parallel, separate tunnel available, airtight but unlocked emergency doors are usually provided which allow trapped personnel to escape from a smoke-filled tunnel to the parallel tube.
Larger, heavily used tunnels, such as the Big Dig tunnel in Boston, Massachusetts, may have a dedicated 24-hour staffed operations center which monitors and reports on traffic conditions, and responds to emergencies. Video surveillance equipment is often used, and real-time pictures of traffic conditions for some highways may be viewable by the general public via the Internet.
A database of seismic damage to underground structures, comprising 217 case histories, supports the following general observations regarding the seismic performance of underground structures:
Underground structures suffer appreciably less damage than surface structures.
Reported damage decreases with increasing overburden depth; deep tunnels seem to be safer and less vulnerable to earthquake shaking than shallow tunnels.
Underground facilities constructed in soils can be expected to suffer more damage compared to openings constructed in competent rock.
Lined and grouted tunnels are safer than unlined tunnels in rock. Shaking damage can be reduced by stabilizing the ground around the tunnel and by improving the contact between the lining and the surrounding ground through grouting.
Tunnels are more stable under a symmetric load, which improves ground-lining interaction. Improving the tunnel lining by placing thicker and stiffer sections without stabilizing surrounding poor ground may result in excess seismic forces in the lining. Backfilling with non-cyclically mobile material and rock-stabilizing measures may improve the safety and stability of shallow tunnels.
Damage may be related to peak ground acceleration and velocity, based on the magnitude and epicentral distance of the earthquake.
Duration of strong-motion shaking during earthquakes is of utmost importance because it may cause fatigue failure and therefore large deformations.
High frequency motions may explain the local spalling of rock or concrete along planes of weakness. These frequencies, which rapidly attenuate with distance, may be expected mainly at small distances from the causative fault.
Ground motion may be amplified upon incidence with a tunnel if wavelengths are between one and four times the tunnel diameter (a worked example follows this list).
Damage at and near tunnel portals may be significant due to slope instability.
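As an illustrative check of the wavelength criterion above, with assumed round numbers (a shear-wave velocity of 500 m/s and a dominant frequency of 10 Hz, neither taken from the source):

\[ \lambda = \frac{V_s}{f} = \frac{500\ \mathrm{m/s}}{10\ \mathrm{Hz}} = 50\ \mathrm{m}, \]

so amplification would be of most concern for tunnel diameters between roughly \(\lambda/4 = 12.5\ \mathrm{m}\) and \(\lambda = 50\ \mathrm{m}\).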
Earthquakes are one of nature's most formidable threats. A magnitude 6.7 earthquake shook the San Fernando Valley in Los Angeles in 1994, causing extensive damage to structures including buildings, freeway overpasses and road systems throughout the area; the National Centers for Environmental Information estimates total damage at 40 billion dollars. According to an article by Steve Hymon of TheSource – Transportation News and Views, the LA subway system sustained no serious damage. Metro, the owner of the LA subway system, issued a statement through its engineering staff about the design and consideration that go into a tunnel system: engineers and architects perform extensive analysis of how hard earthquakes are expected to hit an area, and all of this informs the overall design and flexibility of the tunnel.
This same trend of limited subway damage following an earthquake can be seen in many other places. In 1985 a magnitude 8.1 earthquake shook Mexico City; there was no damage to the subway system, which in fact served as a lifeline for emergency personnel and evacuations. A magnitude 7.2 earthquake ripped through Kobe, Japan, in 1995, leaving no damage to the tunnels themselves. Entry portals sustained minor damage; however, this was attributed to inadequate earthquake design dating from the original construction in 1965. In 2010 a magnitude 8.8 earthquake, massive by any scale, struck Chile. Entrance stations to subway systems suffered minor damage, and the subway system was down for the rest of the day; by the next afternoon, the subway system was operational again.
Examples
In history
The history of ancient tunnels and tunnelling around the world is reviewed in various sources, which include many examples of structures built for different purposes. Some well-known ancient and modern tunnels are briefly introduced below:
The qanat or kareez of Persia are water management systems used to provide a reliable supply of water to human settlements or for irrigation in hot, arid and semi-arid climates. The deepest known qanat is in the Iranian city of Gonabad, which, after 2700 years, still provides drinking and agricultural water to nearly 40,000 people. Its main well depth is more than , and its length is .
The Siloam Tunnel was built before 701 BC for a reliable supply of water, to withstand siege attacks.
The Eupalinian aqueduct on the island of Samos (North Aegean, Greece) was built in 520 BC by the ancient Greek engineer Eupalinos of Megara under a contract with the local community. Eupalinos organised the work so that the tunnel was begun from both sides of Mount Kastro. The two teams advanced simultaneously and met in the middle with excellent accuracy, something that was extremely difficult at that time. The aqueduct was of utmost defensive importance, since it ran underground and was not easily found by an enemy who could otherwise cut off the water supply to Pythagoreion, the ancient capital of Samos. The tunnel's existence was recorded by Herodotus (as were the mole and harbour, and the third wonder of the island, the great temple to Hera, thought by many to be the largest in the Greek world). The precise location of the tunnel was only re-established in the 19th century by German archaeologists. The tunnel proper is , and visitors can still enter it.
One of the first known drainage and sewage networks in the form of tunnels was constructed at Persepolis in Iran at the same time as the construction of its foundation, in 518 BC. In most places the network was dug into the sound rock of the mountain and then covered by large pieces of rock and stone, followed by earth and piles of rubble to level the ground. During investigations and surveys, long sections of similar rock tunnels extending beneath the palace area were traced by Herzfeld and later by Schmidt and their archaeological teams.
The Via Flaminia, an important Roman road, penetrated the Furlo pass in the Apennines through a tunnel which Emperor Vespasian had ordered built in 76–77 AD. A modern road, the SS 3 Flaminia, still uses this tunnel, which had a precursor dating back to the 3rd century BC; remnants of this earlier tunnel (one of the first road tunnels) are also still visible.
The world's oldest tunnel traversing under a water body is claimed to be the Terelek kaya tüneli under the Kızıl River, a little south of the towns of Boyabat and Durağan in Turkey, just downstream from where the Kızıl River joins its tributary Gökırmak. The tunnel is presently under a narrow part of a lake formed by a dam some kilometres further downstream. Estimated to have been built more than 2000 years ago, possibly by the same civilization that also built the royal tombs in a rock face nearby, it is assumed to have had a defensive purpose.
Sapperton Canal Tunnel on the Thames and Severn Canal in England, which opened in 1789, was dug through hills and was long; it allowed boat transport of coal and other goods. Above it the Sapperton Long Tunnel was later constructed, carrying the "Golden Valley" railway line between Swindon and Gloucester.
The 1791 Dudley canal tunnel is on the Dudley Canal in Dudley, England. The tunnel is long. Closed in 1962, it was reopened in 1973, and the series of tunnels was extended in 1984 and 1989.
Fritchley Tunnel, constructed in 1793 in Derbyshire by the Butterley Company to transport limestone to its ironworks, is the world's oldest railway tunnel traversed by rail wagons. The Butterley Company engineered and built its own railway; a victim of recession, the company closed in 2009 after 219 years. The tunnel originally used gravity and horse haulage. The railway was converted to steam locomotion in 1813 using a Steam Horse locomotive engineered and built by the Butterley Company, but later reverted to horses. Steam trains used the tunnel continuously from the 1840s, when the railway was converted to a narrow gauge, until the line closed in 1933. In the Second World War, the tunnel was used as an air raid shelter. Sealed up in 1977, it was rediscovered in 2013 and inspected, then resealed to preserve the construction, as it had been designated an ancient monument.
The 1794 Butterley Tunnel, a canal tunnel in length on the Cromford Canal in Ripley, Derbyshire, England, was built simultaneously with the 1793 Fritchley railway tunnel. The tunnel partially collapsed in 1900, splitting the Cromford Canal, and has not been used since. The Friends of Cromford Canal, a group of volunteers, are working to fully restore the Cromford Canal and the Butterley Tunnel.
The 1796 Stoddart Tunnel in Chapel-en-le-Frith in Derbyshire is reputed to be the oldest rail tunnel in the world. The rail wagons were originally horse-drawn.
Derby Tunnels in Salem, Massachusetts, were built in 1801 to smuggle imports affected by President Thomas Jefferson's new customs duties. Jefferson had ordered local militias to help the Custom House in each port collect these duties, but the smugglers, led by Elias Derby, hired the Salem militia to dig the tunnels and hide the spoil.
A tunnel was created for the first true steam locomotive, which ran from Penydarren to Abercynon. The Penydarren locomotive was built by Richard Trevithick and made the historic journey from Penydarren to Abercynon in 1804. Part of this tunnel can still be seen at Pentrebach, Merthyr Tydfil, Wales. This is arguably the oldest railway tunnel in the world dedicated solely to self-propelled steam engines on rails.
The Montgomery Bell Tunnel in Tennessee, a water diversion tunnel built by slave labour in 1819 to power a water wheel, was the first full-scale tunnel in North America.
Bourne's Tunnel, Rainhill, near Liverpool, England, is long. Built in the late 1820s (the exact date is unknown, but probably 1828 or 1829), it was the first tunnel in the world constructed under a railway line. The Liverpool to Manchester Railway was built over a horse-drawn tramway that ran from the Sutton collieries to the Liverpool-Warrington turnpike road, and a tunnel was bored under the railway for the tramway. The tunnel was made operational as the railway was being constructed, opening prior to the Liverpool tunnels on the Liverpool to Manchester line. It was made redundant in 1844 when the tramway was dismantled.
Crown Street station, Liverpool, England, 1829. Built by George Stephenson, a single-track railway tunnel was bored from Edge Hill to Crown Street to serve the world's first intercity passenger railway terminus station. The station was abandoned in 1836, being too far from Liverpool city centre, and the area was converted for freight use. Closed down in 1972, the tunnel is disused; however, it is the oldest passenger rail tunnel running under streets in the world.
The 1829 Wapping Tunnel in Liverpool, England, a twin-track railway tunnel long, was the first rail tunnel bored under a metropolis. Its path runs from Edge Hill in the east of the city to Wapping Dock in the south end Liverpool docks. The tunnel was used only for freight terminating at the Park Lane goods terminal. Disused since 1972, it was to have become part of the Merseyrail metro network, but the work was started and then abandoned because of costs. The tunnel is in excellent condition and is still being considered for reuse by Merseyrail, possibly with an underground station cut into the tunnel to serve Liverpool University. The river portal is opposite the new King's Dock Liverpool Arena, an ideal location for a station. If reused, the tunnel will be the oldest underground rail tunnel in use in the world and the oldest section of any underground metro system.
1832, Lime Street railway station tunnel, Liverpool. A two-track rail tunnel, long, was bored under the metropolis from Edge Hill in the east of the city to Lime Street in Liverpool's city centre. The tunnel was in use from 1832, initially to transport building materials to the new Lime Street station while it was under construction; the station and tunnel were opened to passengers in 1836. In the 1880s the tunnel was converted to a deep cutting, open to the atmosphere and four tracks wide. This is the only occurrence of a major tunnel being removed. Two short sections of the original tunnel still exist at Edge Hill station and further towards Lime Street, giving these two tunnels the distinction of being the oldest rail tunnels in the world still in use, and the oldest in use under streets. Over time, sections of the deep cutting have been converted back into tunnel where buildings have been constructed over it.
Box Tunnel in England, which opened in 1841, was the longest railway tunnel in the world at the time of construction. It was dug by hand, and has a length of .
The 1842 Prince of Wales Tunnel, in Shildon near Darlington, England, is the oldest sizeable tunnel in the world still in use under a settlement.
The Victoria Tunnel in Newcastle, opened in 1842, is a subterranean wagonway with a maximum depth of that drops from entrance to exit. The tunnel runs under Newcastle upon Tyne, England, and originally exited at the River Tyne. It remains largely intact. Originally designed to carry coal from Spital Tongues to the river, part of the tunnel was used as a shelter in the Second World War. Under the management of a charitable foundation called the Ouseburn Trust, it is currently used for heritage tours.
The Thames Tunnel, built by Marc Isambard Brunel and his son Isambard Kingdom Brunel opened in 1843, was the first tunnel (after Terelek) traversing under a water body, and the first to be built using a tunnelling shield. Originally used as a foot-tunnel, the tunnel was converted to a railway tunnel in 1869 and was a part of the East London Line of the London Underground until 2007. It was the oldest section of the network, although not the oldest purpose built rail section. From 2010 the tunnel became a part of the London Overground network.
The Victoria Tunnel/Waterloo Tunnel in Liverpool, England, was bored under the metropolis and opened in 1848. The tunnel was initially used only for rail freight serving the Waterloo freight terminal, and later for freight and passengers serving the Liverpool ship liner terminal. Its path runs from Edge Hill in the east of the city to the north end Liverpool docks at Waterloo Dock. The tunnel is split into two tunnels linked by a short open-air cutting, where the cable-hauled trains from Edge Hill were hitched and unhitched. The two tunnels are effectively one on the same centre line and are regarded as one; however, because the long Victoria section was originally cable-hauled while the shorter Waterloo section was locomotive-hauled, two separate names were given, the short section being named the Waterloo Tunnel. In 1895 the two tunnels were converted to locomotive haulage. Used until 1972, the tunnel is still in excellent condition, and a short section of the Victoria tunnel at Edge Hill is still used for shunting trains. The tunnel is being considered for reuse by the Merseyrail network, with stations cut into the tunnel under consideration, as is reuse by a monorail system serving the proposed Liverpool Waters redevelopment of Liverpool's Central Docks.
The summit tunnel of the Semmering railway, the first Alpine tunnel, was opened in 1848 and was long. It connected rail traffic between Vienna, the capital of the Austrian Empire, and Trieste, its port.
The Giovi Rail Tunnel through the Apennine Mountains opened in 1854, linking Turin, the capital city of the Kingdom of Sardinia, to its port, Genoa. The tunnel was long.
The oldest underground sections of the London Underground were built using the cut-and-cover method in the 1860s, and opened in January 1863. What are now the Metropolitan, Hammersmith & City and Circle lines were the first to prove the success of a metro or subway system.
On 18 June 1868, the Central Pacific Railroad's Summit Tunnel (Tunnel #6) at Donner Pass in the California Sierra Nevada mountains was opened, permitting the establishment of the commercial mass transportation of passengers and freight over the Sierras for the first time. It remained in daily use until 1993, when the Southern Pacific Railroad closed it and transferred all rail traffic through the long Tunnel #41 (a.k.a. "The Big Hole") built a mile to the south in 1925.
In 1870, after fourteen years of works, the Fréjus Rail Tunnel was completed between France and Italy, being the second-oldest Alpine tunnel, long. At that time it was the longest in the world.
The third Alpine tunnel, the Gotthard Rail Tunnel, between northern and southern Switzerland, opened in 1882 and was the longest rail tunnel in the world, measuring .
The 1882 Col de Tende Road Tunnel, at long, was one of the first long road tunnels under a pass, running between France and Italy.
The Mersey Railway tunnel opened in 1886, running from Liverpool to Birkenhead under the River Mersey. The Mersey Railway was the world's first deep-level underground railway. By 1892 the extensions on land from Birkenhead Park station to Liverpool Central Low Level station gave a tunnel in length. The under-river section is in length, and was the longest underwater tunnel in the world in January 1886.
The rail Severn Tunnel was opened in late 1886, at long, although only of the tunnel is actually under the River Severn. It took the underwater-length record from the Mersey Railway tunnel, which had held it for less than a year.
James Greathead, in constructing the City & South London Railway tunnel beneath the Thames, opened in 1890, brought together three key elements of tunnel construction under water:
shield method of excavation;
permanent cast iron tunnel lining;
construction in a compressed air environment to inhibit water flowing through soft ground material into the tunnel heading.
Built in sections between 1890 and 1939, the section of London Underground's Northern line from Morden to East Finchley via Bank was, at in length, the longest railway tunnel in the world.
St. Clair Tunnel, also opened later in 1890, brought together the elements of the Greathead tunnels on a larger scale.
In 1906 the fourth Alpine tunnel opened, the Simplon Tunnel, between Switzerland and Italy. It is long, and was the longest tunnel in the world until 1982. It was also the deepest tunnel in the world, with a maximum rock overlay of approximately .
The 1927 Holland Tunnel was the first underwater tunnel designed for automobiles. The construction required a novel ventilation system.
In 1945 the Delaware Aqueduct tunnel was completed, supplying water to New York City. At , it is the longest tunnel in the world.
In 1988 the long Seikan Tunnel in Japan was completed under the Tsugaru Strait, linking the islands of Honshu and Hokkaido. It was the longest railway tunnel in the world at that time.
Ryfast is the longest undersea road tunnel, at in length. The tunnel opened for use in 2020.
Longest
The Thirlmere Aqueduct in North West England, United Kingdom is sometimes considered the longest tunnel, of any type, in the world at , though the aqueduct's tunnel section is not continuous.
The Dahuofang Water Tunnel in China, opened in 2009, is the third longest water tunnel in the world at length.
The Gotthard Base Tunnel in Switzerland, opened in 2016, is the longest and deepest railway tunnel in the world at length and maximum depth below the Gotthard Massif. It provides a flat transit route between the North and South of Europe under the Swiss Alps, at a maximum elevation of .
The Seikan Tunnel in Japan connects the main island of Honshu with the northern island of Hokkaido by rail. It is long, of which are crossing the Tsugaru Strait undersea.
The Channel Tunnel crosses the English Channel between France and the United Kingdom. It has a total length of , of which are the world's longest undersea tunnel section.
The Lötschberg Base Tunnel in Switzerland was the longest land rail tunnel, with a length of , from its inauguration in 2007 until the completion of the Gotthard Base Tunnel in 2016.
The Lærdal Tunnel in Norway from Lærdal to Aurland is the world's longest road tunnel, intended for cars and similar vehicles, at .
The Zhongnanshan Tunnel in the People's Republic of China, opened in January 2007, is the world's second longest highway tunnel and the longest mountain road tunnel in Asia, at .
The longest canal tunnel is the Rove Tunnel in France, over long.
Notable
The Moffat Tunnel, opened in 1928, passes under the Continental Divide of the Americas in Colorado. The tunnel is long and at an elevation of is the highest active railroad tunnel in the U.S. (The inactive Tennessee Pass Line and the historic Alpine Tunnel are higher.)
Williamson's tunnels in Liverpool, built from 1804 and completed around 1840 by a wealthy eccentric, are probably the largest underground folly in the world. The tunnels were built with no functional purpose.
The Chicago freight tunnel network is the largest urban street tunnel network, comprising of tunnels beneath the majority of downtown Chicago streets. It operated between 1906 and 1956 as a freight network, connecting building basements and railway stations. Following a 1992 flood the network was sealed, although some parts still carry utility and communications infrastructure.
The Pennsylvania Turnpike opened in 1940 with seven tunnels, most of which were bored as part of the stillborn South Pennsylvania Railroad and giving the highway the nickname "Tunnel Highway". Four of the tunnels (Allegheny Mountain, Tuscarora Mountain, Kittatinny Mountain, and Blue Mountain) remain in active use, while the other three (Laurel Hill, Rays Hill, and Sideling Hill) were bypassed in the 1960s; the latter two tunnels are on a bypassed section of the Turnpike now commonly known as the Abandoned Pennsylvania Turnpike.
The Fredhälls road tunnel was opened in 1966, in Stockholm, Sweden, and the New Elbe road tunnel opened in 1975 in Hamburg, Germany. Both tunnels handle around 150,000 vehicles a day, making them two of the most trafficked tunnels in the world.
The Honningsvåg Tunnel ( long) opened in 1999 on European route E69 in Norway as the world's northernmost road tunnel, except for mines (which exist on Svalbard).
The Central Artery road tunnel in Boston, Massachusetts, is a part of the larger Big Dig completed around 2007, and carries approximately 200,000 vehicles/day under the city along Interstate 93, US Route 1, and Massachusetts Route 3, which share a concurrency through the tunnels. The Big Dig replaced Boston's old badly deteriorated I-93 elevated highway.
The Stormwater Management And Road Tunnel or SMART Tunnel, is a combined storm drainage and road structure opened in 2007 in Kuala Lumpur, Malaysia. The tunnel is the longest stormwater drainage tunnel in South East Asia and second longest in Asia. The facility can be operated as a simultaneous traffic and stormwater passage, or dedicated exclusively to stormwater when necessary.
The Eiksund Tunnel on national road Rv 653 in Norway is the world's deepest subsea road tunnel, measuring long, with deepest point at below the sea level, opened in February 2008.
Gerrards Cross railway tunnel, in England, opened in 2010, is notable in that it converted an existing railway cutting into a tunnel in order to create new ground on which to build a supermarket. The railway in the cutting had first opened around 1906, so some 104 years passed before the railway tunnel was completed. The tunnel was built using the cover method, with craned-in prefabricated forms, in order to keep the busy railway operating. A branch of the Tesco supermarket chain occupies the newly created ground above the railway tunnel, with an existing railway station adjacent to the end of the tunnel. During construction, a portion of the tunnel collapsed when soil cover was added; after the collapse, the prefabricated forms were covered with a layer of reinforced concrete.
The Fenghuoshan tunnel, completed in 2005 on the Qinghai-Tibet railway, is the world's highest railway tunnel, at about above sea level and long.
The La Linea Tunnel in Colombia, completed in 2016, is the longest mountain tunnel in South America, at . It crosses beneath a mountain at above sea level with six traffic lanes, and it has a parallel emergency tunnel. The tunnel is subject to serious groundwater pressure. It links Bogotá and its urban area with the coffee-growing region, and with the main port on the Colombian Pacific coast.
The Chicago Deep Tunnel Project is a network of of drainage tunnels designed to reduce flooding in the Chicago area. Started in the mid-1970s, the project is due to be completed in 2029.
New York City Water Tunnel No. 3, started in 1970, has an expected completion beyond 2026, and will measure more than .
Mining
The use of tunnels for mining is called drift mining. Drift mining can be used to extract coal, gold, iron, and other minerals, just like other forms of mining.
Sub-surface mining consists of digging tunnels or shafts into the earth to reach buried ore deposits.
Military use
Some tunnels are not for transport at all but rather are fortifications, for example Mittelwerk and Cheyenne Mountain Complex. Excavation techniques, as well as the construction of underground bunkers and other habitable areas, are often associated with military use during armed conflict, or civilian responses to threat of attack. Another use for tunnels was the storage of chemical weapons.
Secret tunnels
Secret tunnels have given entrance to or escape from an area, such as the Cu Chi Tunnels or the smuggling tunnels in the Gaza Strip which connect it to Egypt. Although the Underground Railroad network used to transport escaped slaves was "underground" mostly in the sense of secrecy, hidden tunnels were occasionally used. Secret tunnels were also used during the Cold War, under the Berlin Wall and elsewhere, to smuggle refugees, and for espionage.
Smugglers use secret tunnels to transport or store contraband, such as illegal drugs and weapons. Elaborately engineered tunnels built to smuggle drugs across the Mexico-US border were estimated to require up to 9 months to complete, and an expenditure of up to $1 million. Some of these tunnels were equipped with lighting, ventilation, telephones, drainage pumps, hydraulic elevators, and in at least one instance, an electrified rail transport system. Secret tunnels have also been used by thieves to break into bank vaults and retail stores after hours. Several tunnels have been discovered by the Border Security Forces across the Line of Control along the India-Pakistan border, mainly to allow terrorists access to the Indian territory of Jammu and Kashmir.
The actual usage of erdstall tunnels is unknown, but theories connect them to a rebirth ritual.
Natural tunnels
Lava tubes are emptied lava conduits, formed during volcanic eruptions by flowing and cooling lava.
Natural Tunnel State Park (Virginia, US) features a natural tunnel, really a limestone cave, that has been used as a railroad tunnel since 1890.
Punarjani Guha in Kerala, India. Hindus believe that crawling through the tunnel (which they believe was created by a Hindu god) from one end to the other will wash away all of one's sins and thus allow one to attain rebirth. Only men are permitted to crawl through the tunnel.
Torghatten, a Norwegian island with a hat-shaped silhouette, has a natural tunnel in the middle of the hat, letting light come through. The long, high, and wide tunnel is said to be the hole made by an arrow of the angry troll Hestmannen, the hill being the hat of the troll-king of Sømna, who was trying to save the beautiful Lekamøya. The tunnel is actually thought to be the work of ice. The sun shines through the tunnel during two brief periods, each a few minutes long, every year.
Major accidents
Clayton Tunnel rail crash (1861) – confusion about block signals leading to collision, 23 killed.
Welwyn Tunnel rail crash (1866) – train failed in tunnel, guard did not protect train.
Paris Métro train fire (1904) – train fire in Couronnes underground station, 84 killed by smoke and gases.
Church Hill Tunnel collapse (1925) – tunnel collapse on a work train during renovation, killing four men and trapping a steam locomotive and ten flat cars.
Balvano train disaster (1944) – asphyxiation of about 500 "unofficial" passengers on freight train.
Caldecott Tunnel fire (1982) – major motor vehicle tunnel crash and fire.
Channel Tunnel fire (1996) – train carrying Heavy Goods Vehicles (HGVs) caught fire.
Princess Diana's death (1997) – car crash in the Pont de l'Alma tunnel, Paris.
Mont Blanc Tunnel fire (1999) – transport truck caught fire inside the tunnel.
Big Dig ceiling collapse (2006) – concrete ceiling panel fell in the Fort Point tunnel, Boston, causing part of the Big Dig tunnel system to be closed for a year.
See also
Euphrates Tunnel
Cattle creep
Counter-beam lighting
Culvert
Hobby tunneling
Megaproject
Rapid transit
Sequential Excavation Method
Structure gauge – measure of maximum physical clearance in a tunnel
Tree tunnel – tunnel-like effect from tree canopies above a road
Tunnel tree – tunnel bored through the trunk of a tree
Tunnels in popular culture
Underground living
References
Bibliography
Railway Tunnels in Queensland by Brian Webber, 1997.
Sullivan, Walter. Progress In Technology Revives Interest In Great Tunnels, New York Times, 24 June 1986. Retrieved 15 August 2010.
External links
ITA-AITES International Tunnelling Association
Tunnels & Tunnelling International magazine
Crossings
Civil engineering
Transport buildings and structures
Earthworks (engineering) | Tunnel | Engineering | 12,057 |
48,859,720 | https://en.wikipedia.org/wiki/C27H28N2O3 | {{DISPLAYTITLE:C27H28N2O3}}
The molecular formula C27H28N2O3 (molar mass: 428.52 g/mol, exact mass: 428.2100 u) may refer to:
AdipoRon
Benzodioxolefentanyl | C27H28N2O3 | Chemistry | 68 |
3,735,970 | https://en.wikipedia.org/wiki/Geospatial%20content%20management%20system | A geospatial content management system (GeoCMS) is a content management system in which objects (users, images, articles, blogs, etc.) can have a latitude/longitude position so that they can be displayed on an online interactive map. In addition, the online maps link to informational pages (essentially wiki pages) on the data represented. Some GeoCMSs also allow users to edit spatial data (points, lines, polygons on maps) as part of content objects. Spatial data can be published by a GeoCMS as part of its contents or through standardized interfaces such as WMS or WFS.
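As a sketch of how content with a latitude/longitude position might be served through such a standardized interface, the snippet below builds a WMS GetMap request URL; the server address and layer name are hypothetical, chosen for illustration only:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=512):
    """Build a WMS 1.1.1 GetMap URL requesting a PNG map image of `layer`.

    `bbox` is (min_lon, min_lat, max_lon, max_lat) in EPSG:4326.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical GeoCMS map server and layer name:
print(wms_getmap_url("https://example.org/wms", "site_articles",
                     (-0.5, 51.3, 0.3, 51.7)))
```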
A GeoCMS can display a map of registered users, allowing communities to be built geographically by looking at user locations. Wiki-style pages describing geographical layers offer one way to address the problem of geographical metadata.
Since the advent of Google Maps and the publication of its API, numerous users have used online maps to illustrate their web pages. Google Maps is not in itself a GeoCMS but a building block for GeoCMS applications. Similarly, MapServer can also be used to create a GeoCMS.
GeoCMS comparison
References
Content management systems
Geographic data and information | Geospatial content management system | Technology | 239 |
43,098,155 | https://en.wikipedia.org/wiki/Eucosma%20rigens | "Eucosma" rigens is a species of moth of the family Tortricidae. It is found in the Democratic Republic of Congo.
References
Moths described in 1938
Eucosmini
Endemic fauna of the Democratic Republic of the Congo
Unplaced names | Eucosma rigens | Biology | 49 |
1,865,916 | https://en.wikipedia.org/wiki/Signalling%20%28economics%29 | Signalling (or signaling; see spelling differences) in contract theory is the idea that one party (the agent) credibly conveys some information about itself to another party (the principal).
Signalling was already discussed in the seminal Theory of Games and Economic Behavior, which is considered to be the text that created the research field of game theory.
Although signalling theory was initially developed by Michael Spence based on observed knowledge gaps between organisations and prospective employees, its intuitive nature led it to be adapted to many other domains, such as Human Resource Management, business, and financial markets.
In Spence's job-market signaling model, (potential) employees send a signal about their ability level to the employer by acquiring education credentials. The informational value of the credential comes from the fact that the employer believes the credential is positively correlated with greater ability and difficult for low-ability employees to obtain. Thus the credential enables the employer to reliably distinguish low-ability workers from high-ability workers. The concept of signaling is also applicable in competitive altruistic interaction, where the capacity of the receiving party is limited.
Introductory questions
Signalling started with the idea of asymmetric information (a deviation from perfect information), which relates to the fact that, in some economic transactions, inequalities exist in the normal market for the exchange of goods and services. In his seminal 1973 article, Michael Spence proposed that two parties could get around the problem of asymmetric information by having one party send a signal that would reveal some piece of relevant information to the other party. That party would then interpret the signal and adjust their purchasing behaviour accordingly—usually by offering a higher price than if they had not received the signal.
There are, of course, many problems that these parties would immediately run into.
Effort: How much time, energy, or money should the sender (agent) spend on sending the signal?
Reliability: How can the receiver (the principal, who is usually the buyer in the transaction) trust the signal to be an honest declaration of information?
Stability: Assuming there is a signalling equilibrium under which the sender signals honestly and the receiver trusts that information, under what circumstances will that equilibrium break down?
Job-market signalling
In the job market, potential employees seek to sell their services to employers for some wage, or price. Generally, employers are willing to pay higher wages to employ better workers. While the individual may know their own level of ability, the hiring firm is not (usually) able to observe such an intangible trait; thus there is an asymmetry of information between the two parties. Education credentials can be used as a signal to the firm, indicating a certain level of ability that the individual may possess, thereby narrowing the informational gap. This is beneficial to both parties as long as the signal indicates a desirable attribute; a signal such as a criminal record may not be so desirable. Furthermore, signaling can sometimes be detrimental in the educational context, when credentials such as an academic degree become overvalued: despite having received equivalent amounts of instruction, parties holding a degree obtain better outcomes, a phenomenon known as the sheepskin effect.
Spence 1973: "Job Market Signaling" paper
Assumptions and groundwork
Michael Spence considers hiring as a type of investment under uncertainty, analogous to buying a lottery ticket, and refers to the attributes of an applicant which are observable to the employer as indices. Of these, attributes which the applicant can manipulate are termed signals. Applicant age is thus an index but not a signal, since it does not change at the discretion of the applicant. The employer is supposed to have conditional probability assessments of productive capacity, based on previous experience of the market, for each combination of indices and signals, and updates those assessments upon observing each employee's characteristics. The paper is concerned with a risk-neutral employer, so the offered wage is the expected marginal product. Signals may be acquired by sustaining signalling costs (monetary and not). If everyone invested in the signal in exactly the same way, the signal could not be used to discriminate between applicants; therefore a critical assumption is made: the costs of signalling are negatively correlated with productivity. The situation as described is a feedback loop: the employer updates their beliefs upon new market information and updates the wage schedule, applicants react by signalling, and recruitment takes place. Michael Spence studies the signalling equilibrium that may result from such a situation.

He began his 1973 model with a hypothetical example: suppose that there are two types of employees, good and bad, and that employers are willing to pay a higher wage to the good type than the bad type. Spence assumes that employers have no real way to tell in advance which employees will be of the good or bad type. Bad employees are not upset about this, because they get a free ride from the hard work of the good employees. But good employees know that they deserve to be paid more for their higher productivity, so they desire to invest in the signal, in this case some amount of education. He does, however, make one key assumption: good-type employees pay less for one unit of education than bad-type employees. The cost he refers to is not necessarily the cost of tuition and living expenses, sometimes called out-of-pocket expenses, as one could argue that higher-ability persons tend to enroll in "better" (i.e. more expensive) institutions. Rather, the cost Spence refers to is the opportunity cost: a combination of costs, monetary and otherwise, including psychological costs, time, and effort. Of key importance to the value of the signal is the differing cost structure between "good" and "bad" workers: the cost of obtaining identical credentials is strictly lower for the "good" employee than for the "bad" employee. The differing cost structure need not preclude "bad" workers from obtaining the credential. All that is necessary for the signal to have value (informational or otherwise) is that the group with the signal is positively correlated with the previously unobservable group of "good" workers. In general, the degree to which a signal is thought to be correlated with unknown or unobservable attributes is directly related to its value.
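As an illustration of such a conditional probability assessment (the numbers are assumed for illustration and are not from Spence's paper): an employer with prior belief P(good) = 0.5 who believes good types acquire the credential with probability 0.9 and bad types with probability 0.2 would, on observing the credential, update to

\[ P(\text{good} \mid \text{credential}) = \frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.2 \times 0.5} = \frac{0.45}{0.55} \approx 0.82. \]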
The result
Spence discovered that even if education did not contribute anything to an employee's productivity, it could still have value to both the employer and employee. If the appropriate cost/benefit structure exists (or is created), "good" employees will buy more education in order to signal their higher productivity.
The increase in wages associated with obtaining a higher credential is sometimes referred to as the "sheepskin effect", since "sheepskin" informally denotes a diploma. It is important to note that this is not the same as the return from an additional year of education: the sheepskin effect is the wage increase above what would normally be attributed to the extra year of education. This can be observed empirically in the wage differences between 'drop-outs' and 'completers' with an equal number of years of education. It is also important not to attribute the fact that higher wages are paid to more educated individuals entirely to signalling or sheepskin effects. In reality, education serves many different purposes for individuals and society as a whole, and only when all of these aspects, as well as the many other factors affecting wages, are controlled for does the sheepskin effect approach its true value. Empirical studies of signalling indicate it is a statistically significant determinant of wages; however, it is one among a host of attributes, with age, sex, and geography being examples of other important factors.
The model
To illustrate his argument, Spence imagines, for simplicity, two productively distinct groups in a population facing one employer. The signal under consideration is education, measured by an index y and subject to individual choice. Education costs are both monetary and psychic. The data can be summarized as follows: Group I has marginal product 1, makes up a proportion q1 of the population, and incurs a cost of y to acquire education level y; Group II has marginal product 2, makes up the remaining proportion 1-q1, and incurs a cost of only y/2.
Suppose that the employer believes that there is a level of education y* below which productivity is 1 and above which productivity is 2. The offered wage schedule W(y) will then be: W(y) = 1 for y < y*, and W(y) = 2 for y ≥ y*.
Working with these hypotheses Spence shows that:
There is no rational reason for anyone to choose a level of education other than 0 or y*.
Group I sets y=0 if 1>2-y*, that is, if the return from not investing in education is higher than the return from investing.
Group II sets y=y* if 2-y*/2>1, that is, if the return from investing in education is higher than the return from not investing.
Therefore, putting the previous two inequalities together, if 1<y*<2, then the employer's initial beliefs are confirmed.
There are infinitely many equilibrium values of y* belonging to the interval [1,2], but they are not equivalent from the welfare point of view: the higher y*, the worse off Group II is, while Group I is unaffected.
If no signaling takes place, each person is paid their unconditional expected marginal product q1 + 2(1-q1). Therefore, Group I is worse off when signaling is present.
In conclusion, even if education makes no real contribution to the marginal product of the worker, the combination of the employer's beliefs and the presence of signalling transforms the education level y* into a prerequisite for the higher-paying job. It may appear to an external observer that education has raised the marginal product of labor, without this necessarily being true.
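A minimal sketch, in Python, of the equilibrium check described above, using the model's own quantities (Group I: product 1, education cost y; Group II: product 2, cost y/2; wage 1 below y* and 2 at or above it); the function names are illustrative:

```python
# A sketch of Spence's two-group example (assumptions as in the text).

def best_education(cost_per_unit, y_star):
    """Education level (0 or y_star) maximizing wage minus signalling cost."""
    payoff_no_signal = 1.0                        # wage 1, zero education cost
    payoff_signal = 2.0 - cost_per_unit * y_star  # wage 2 minus cost of y*
    return 0.0 if payoff_no_signal >= payoff_signal else y_star

def beliefs_confirmed(y_star):
    """Equilibrium: Group I (cost 1 per unit) picks 0, Group II (cost 1/2) picks y*."""
    return (best_education(1.0, y_star) == 0.0
            and best_education(0.5, y_star) == y_star)

for y_star in (0.5, 1.5, 2.5):
    print(y_star, beliefs_confirmed(y_star))
```

Running this prints False for y* = 0.5, True for y* = 1.5, and False for y* = 2.5, matching the interval 1 < y* < 2 derived in the text.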
Another model
For a signal to be effective, certain conditions must be true. In equilibrium, the cost of obtaining the credential must be lower for high productivity workers and act as a signal to the employer such that they will pay a higher wage.
In this model it is optimal for the higher-ability person to obtain the credential (the observable signal) but not for the lower-ability individual. The outcomes for the low-ability person l and the high-ability person h, with and without the signal S*, can be compared using the wages and costs defined below:
The structure is as follows:
There are two individuals with differing ability (productivity) levels.
A higher ability / productivity person: h
A lower ability / productivity person: l
The premise of the model is that a person of high ability (h) has a lower cost of obtaining a given level of education than a person of lower ability (l). Cost can be monetary, such as tuition, or psychological, such as the stress incurred to obtain the credential.
Wo is the expected wage for an education level less than S*
W* is the expected wage for an education level equal to or greater than S*
For the individual, letting Cost(S*) denote that person's cost of acquiring the credential:
W* - Wo ≥ Cost(S*) → obtain the credential
W* - Wo < Cost(S*) → do not obtain the credential
Thus, if both individuals act rationally, it is optimal for person h to obtain S* but not for person l, so long as the following conditions are satisfied.
For the low-ability person l this requires W* - Wo < Cost_l(S*), so that l chooses not to obtain the credential. For a separating equilibrium to exist, the high-ability person h must also prefer the net pay from signalling, W* - Cost_h(S*), to the pay available in the pooling equilibrium; otherwise h too will choose not to obtain the credential and the equilibrium pools.
For the employers, wages equal expected productivity conditional on the observed signal:
W* = E(Productivity | Cost(S*) ≤ W* - Wo), the expected productivity of those for whom the credential is worth acquiring
Wo = E(Productivity | Cost(S*) > W* - Wo), the expected productivity of those for whom it is not
In equilibrium, in order for the signalling model to hold, the employer must recognize the signal and pay the corresponding wage, and this will result in the workers self-sorting into the two groups. One can see that the cost/benefit structure for a signal to be effective must fall within certain bounds, or else the system will fail.
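A quick numerical check of these self-sorting conditions, with assumed values not taken from the source (Wo = 1, W* = 2, a credential cost of 0.5 for h and 1.5 for l):

\[ W^* - W_o = 1 \ge 0.5 = \mathrm{Cost}_h(S^*) \quad\Rightarrow\quad h \text{ obtains } S^*, \]
\[ W^* - W_o = 1 < 1.5 = \mathrm{Cost}_l(S^*) \quad\Rightarrow\quad l \text{ does not}, \]

so the two types self-sort and the employer's beliefs are confirmed.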
IPOs
Signaling typically occurs in an IPO, where a company issues shares to the public market to raise equity capital. The need for it arises from information asymmetry between potential investors and the company raising capital. Given that firms are private before an IPO, prospective investors have limited information about the firm's true value or future prospects, which may lead to market inefficiencies and mispricing. To overcome this information asymmetry, firms may use signaling to communicate their true value to potential investors.
Leland and Pyle (1977) analyzed the role of signals within the process of an IPO, finding that companies with good future perspectives and higher possibilities of success ("good companies") should always send clear signals to the market when going public, i.e. the owner should keep control of a significant percentage of the company. In order for this signal to be perceived as reliable, the signal must be too costly to be imitated by "bad companies". By not providing a signal to the market, asymmetric information will result in adverse selection in the IPO market.
Various forms of signaling have also been observed during IPOs, especially when companies underprice the offered share price to prospective investors. Underpricing can be explained by prospect theory, which suggests that investors tend to be more risk-averse when it comes to gains than losses. Hence, when a company offers its shares at a discount to their true value, it creates the perception of a gain for investors, which can increase demand for the shares and lead to a higher aftermarket price. This excess demand also sends a positive signal to the market that the firm is undervalued, as the issuer signals that it is leaving money on the table, defined as the number of shares sold times the difference between the first-day closing market price and the offer price. This represents a substantial indirect cost to the issuing firm, but allows initial investors to achieve sizeable financial returns on the very first day of trading.
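A quick arithmetic illustration of this definition, with assumed numbers not from the source: if 10 million shares are sold at an offer price of $15 and the first-day closing price is $18, the money left on the table is

\[ 10{,}000{,}000 \times (\$18 - \$15) = \$30{,}000{,}000. \]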
In spite of leaving money on the table, underpricing is still beneficial to the firm because it allows them to raise more capital than they would have if they had priced the shares at their true value, assuming a higher price at market close. This also helps to generate positive publicity and media attention for the issuer, providing further signaling for a company's positive growth prospects.
Additionally, firms can signal their quality to the market through their choice of underwriter. A reputable underwriter, such as a well-known investment bank, can signal that the issuing firm is of high quality and has a strong likelihood of future success. Considering the underwriter's role in providing due diligence and expertise in the IPO process, it is unlikely for an underwriter to associate itself with firms that have a high likelihood of failure. This increases the credibility of the issuing firm, and hence of the share capital on offer. Moreover, the underwriter's compensation structure, which is typically based on the success of the IPO, provides an incentive for the underwriter to ensure that the IPO succeeds. Therefore, by choosing a reputable underwriter, the issuing firm can signal its quality to potential investors, which increases the demand for its shares and can potentially lead to a higher aftermarket price.
However, while signaling mechanisms can benefit issuers, they can also impose costs on investors. Information asymmetry can make it difficult for investors to distinguish between true signals of quality and mere attempts to manipulate the market. Moreover, the use of signals can lead to a "winner's curse" where investors overpay for shares that are not worth the price paid. Thus, understanding the costs and benefits of different signaling mechanisms is crucial in improving market efficiency and reducing information asymmetry problems.
Brands
The development of brand capital is an important strategy firms use to signal quality and reliability to consumers. Waldfogel and Chen (2006) studied the effect of information provision on internet retail sites on the importance of branding as a signalling mechanism. Their study compared web visits to branded vendors, unbranded vendors, and third-party sites, labelled information intermediaries, which took data and collated it for consumers. The paper did not directly measure the outcome on consumer spending because it did not include actual consumer expenditure on branded or unbranded products, and it further acknowledged that consumer spending can deviate from visiting behaviour. Nonetheless, it found that using information intermediaries increases the number of consumer visits to unbranded vendors while depressing visits to branded vendors. The authors concluded by observing that while branding is a market-concentrating mechanism, the internet has the potential to reduce market concentration, as information provision undermines the effectiveness of brand spending. The extent of this effect depends on the ease and cost-effectiveness with which information can be provided.
Altruism and Signalling
Various studies and experiments have analysed signalling in the context of altruism. Historically, due to the nature of small communities, cooperation was particularly important to ensure human flourishing. Signalling altruism is critical in human societies because altruism is a method of signalling willingness to cooperate. Studies indicate that altruism boosts an individual's reputation in the community, which in turn enables the individual to reap greater benefits, including increased assistance if they are in need. There is often difficulty in distinguishing between pure altruists, who perform altruistic acts expecting no benefit to themselves whatsoever, and impure altruists, who perform altruistic acts expecting some form of benefit. Pure altruists will be altruistic irrespective of whether anyone observes their conduct, whereas impure altruists will give where their altruism is observed and can be reciprocated.
Laboratory experiments conducted by behavioural economists have found that pure altruism is relatively rare. A study conducted by Dana, Weber and Xi Kuang found that in dictator games, the rate of proposing 5:5 distributions was much higher when proposers could not excuse their choice by reference to moral considerations. In games where participants were provided by the testers with a mitigating reason they could cite to the other person to explain their decision, 6:1 splits were much more common than fair 5:5 splits.
Empirical research in real-world scenarios shows charitable giving diminishes with anonymity. Anonymous donations are much less common than non-anonymous donations. With respect to donations to a national park, researchers found participants were 25% less generous when their identities were not revealed than when they were. They also found donations were subject to reference effects: on average, participants gave less money when researchers told them the average donation was low than when researchers told them it was high.
A study on charity runs, where donors could reveal only their name, only the amount, both name and amount, or remain completely anonymous with no reference to donation amount, had three main findings. First, donors who gave a significant amount of money tended to reveal the amounts donated but were more likely not to reveal their names. Second, those who gave small donations were more likely to reveal their names but hide their donations. Third, average donors were most likely to reveal both name and amount. The researchers noted that the small donors' behaviour was consistent with free riding, in which participants sought reputation enhancement by noting their donation without having to donate at the levels that would otherwise be necessary to get the same boost if amount information were published. Average donors revealed both name and amount, likewise to gain reputation. With respect to high donors, the researchers thought two explanations were possible: either donors did not reveal names because, despite high donations signalling high-cost altruism, there were larger reputational drawbacks to what is perceived as showboating, or large contributors were genuinely altruistic and wanted to signal the importance of the cause. The authors thought that revealing the amounts was more consistent with the latter hypothesis.
eBay Motors' Price Premium
Signalling has been studied and proposed as a means to address asymmetric information in markets for "lemons". More recently, signalling theory has been applied in used-car markets such as eBay Motors. Lewis (2011) examines the role of information access and shows that the voluntary disclosure of private information increases the prices of used cars on eBay. Dimoka et al. (2012) analyzed data from eBay Motors on the role of signals in mitigating product uncertainty. Extending the information-asymmetry literature in consumer behavior from the agent (seller) to the product, the authors theorized and validated the nature and dimensions of product uncertainty, which is distinct from, yet shaped by, seller uncertainty. The authors also found that information signals (diagnostic product descriptions and third-party product assurances) reduce product uncertainty, which negatively affects price premiums (relative to book values) of used cars in online markets.
Internet-Based Hospitality Exchange
In internet-based hospitality exchange networks such as BeWelcome and Warm Showers, hosts do not expect to receive payments from travelers; the relation between traveler and host is instead shaped by mutual altruism. Travelers send homestay requests to the hosts, which the hosts are not obligated to accept. Both networks, as non-profit organizations, grant trustworthy teams of scientists access to their anonymized data for publication of insights to the benefit of humanity. In 2015, datasets from BeWelcome and Warm Showers were analyzed. Analysis of 97,915 homestay requests from BeWelcome and 285,444 homestay requests from Warm Showers showed a general regularity: the less time spent on writing a homestay request, the lower the probability of being accepted by a host. Low-effort communication, such as 'copy and paste' requests, evidently sends the wrong signal.
Outside options
Most signalling models are plagued by a multiplicity of possible equilibrium outcomes. In a study published in the Journal of Economic Theory, a signalling model has been proposed that has a unique equilibrium outcome. In the principal-agent model it is argued that an agent will choose a large (observable) investment level when he has a strong outside option. Yet, an agent with a weak outside option might try to bluff by also choosing a large investment, in order to make the principal believe that the agent has a strong outside option (so that the principal will make a better contract offer to the agent). Hence, when an agent has private information about his outside option, signalling may mitigate the hold-up problem.
Foreign policy and international relations
Due to the nature of international relations and foreign policy, signaling has long been a topic of interest when analyzing the actions of the agents involved. The study of signaling in foreign policy has allowed economists and academics to better understand the actions and reactions of foreign bodies when presented with varying information. Typically, when interacting with one another, the actions of these foreign parties are heavily dependent on the proposed actions and reactions of each other. In many cases, however, there is an asymmetry of information between the two parties, with both looking to aid their own non-mutually beneficial interests.
Costly signaling
In foreign policy, it is common to see game-theoretic problems such as the prisoner's dilemma and the game of chicken, in which each party's individually rational strategy can lead to a mutually poor outcome. In order to signal to the other parties, and for the signal to be credible, strategies such as tying hands and sinking costs are often implemented. These are examples of costly signals, which typically present some form of assurance and commitment in order to show that the signal is credible and that the party receiving the signal should act on the information given. Despite this, there is still much contention as to whether costly signaling is effective in practice. In studies by Quek (2016) it was suggested that decision makers such as politicians and leaders do not seem to interpret and understand signals the way that models suggest they should.
Sinking costs and tying hands
A costly signal in which the cost of an action is incurred upfront ("ex ante") is a sunk cost. An example of this would be the mobilization of an army as this sends a clear signal of intentions and the costs are incurred immediately.
When the cost of the action is incurred after the decision is made ("ex post") it is considered to be tying hands. A common example is an alliance which does not have a large initial monetary cost yet ties the hands of the parties, as either party would incur significant costs if they abandoned the other party, especially in crises.
Theoretically, both sinking costs and tying hands are valid forms of costly signaling; however, they have garnered much criticism due to differing beliefs regarding their overall effectiveness in altering the likelihood of war. Recent studies, such as one published in the Journal of Conflict Resolution, suggest that sinking costs and tying hands are both effective in increasing credibility. This was shown by examining how changes in the costs of costly signals vary their credibility. Prior to this research, the studies conducted were binary and static in nature, limiting the capability of the model. This increased the validity of the use of these signaling mechanisms in foreign diplomacy.
Effectiveness of signaling through time
The initial research into signaling suggested that it was an effective tool for managing foreign economic and military affairs. However, with time and more thorough analysis, problems began to present themselves, these being:
The extent to which the signal is received and acted upon may not justify the cost of the signal
Parties and those who govern them are able to signal in more ways than just through actions
Different signals often provoke different responses from different parties (heterogeneity plays a large part in the effectiveness of signals)
In Fearon's original models (the bargaining model of war), the setup was simple: a party would display its intentions, and its intended audience would then interpret the signals and act upon them, creating an idealized scenario that validates the use of signaling. Later work by Slantchev (2005) suggested that, due to the nature of military mobilization as a signal, a party can increase tensions despite intending to avoid war, and that mobilization both sinks costs and ties the party's hands. Furthermore, Yarhi-Milo, Kertzer and Renshon (2017) were able to use a more dynamic model to assess the effectiveness of these signals given varying cost levels and reaction levels.
See also
Countersignalling
Forward guidance
Impression management
Signalling game
Stigma management
Virtue signalling
Handicap Principle
References
Further reading
Asymmetric information
Game theory | Signalling (economics) | Physics,Mathematics | 5,501 |
43,109,752 | https://en.wikipedia.org/wiki/T.H.%20Tse | T.H. Tse is a Hong Kong academic who is a professor and researcher in program testing and debugging. He is ranked internationally as the second most prolific author in metamorphic testing.
According to Bruel et al., "Research on integrated formal and informal techniques can trace its roots to the work of T.H. Tse in the mid-eighties." The application areas of his research include object-oriented software, services computing, pervasive computing, concurrent systems, imaging software, and numerical programs. In addition, he creates graphic designs for non-government organizations.
Tse received the PhD from the London School of Economics in 1988 under the supervision of Frank Land and Ian Angell. He was a Visiting Fellow at the University of Oxford in 1990 and 1992. He is currently an honorary professor in computer science at The University of Hong Kong after retiring from his full professorship in 2014. He was decorated with an MBE by Queen Elizabeth II of the United Kingdom.
In 2013, an international event entitled "The Symposium on Engineering Test Harness" was held in Nanjing, China "in honor of the retirement of T.H. Tse". The acronym of the symposium was "TSE-TH".
From 2017 to 2021, Tse served as the intermediary for the fundraising of $150 million for The University of Hong Kong to establish the Tam Wing Fan Innovation Wings I and II in the Faculty of Engineering.
In 2019, Tse and team applied metamorphic testing to verify the robustness of citation indexing services, including Scopus and Web of Science. The innovative method, known as "metamorphic robustness testing", revealed that the presence of simple hyphens in the titles of scholarly papers adversely affects citation counts and journal impact factors, regardless of the quality of the publications. This "bizarre new finding", as well as the refutation by Web of Science and the clarification by Tse, was reported in ScienceAlert, Nature Index, Communications of the ACM, Psychology Today, and The Australian.
In 2021, Tse and team were selected as the Grand Champion of the Most Influential Paper Award by the Journal of Systems and Software for their 2010 paper. According to Google Scholar, the journal ranks no. 3 in h5-index among international publication venues in software systems.
In 2024, Tse successfully nominated Tsong Yueh Chen of Swinburne University of Technology, Melbourne, Australia to receive the ACM SIGSOFT Outstanding Research Award 2024 “for contributions to software testing through the invention and development of metamorphic testing”. This award is presented to individual(s) who have made significant and lasting research contributions to the theory or practice of software engineering. The awardees are invited to give a keynote presentation at the ICSE conference. Past winners include Gail Murphy 2023, Lionel Briand 2022, Mark Harman 2019, Daniel Jackson 2017, Carlo Ghezzi 2015, Alexander Wolf 2014, David Notkin 2013, Lori Clarke 2012, David Garlan and Mary Shaw 2011, Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides 2010, Axel van Lamsweerde 2008, Elaine J. Weyuker 2007, David Harel 2006, Nancy Leveson 2004, Leon J. Osterweil 2003, Gerard Holzmann 2002, Michael Jackson 2001, Victor Basili 2000, Harlan Mills 1999, Niklaus Wirth 1999, David Parnas 1998, and Barry Boehm 1997.
In July 2024, Tse was selected as a Featured Reviewer of the Month by ACM Computing Reviews.
Books
T.H. Tse, A Unifying Framework for Structured Analysis and Design Models: An Approach using Initial Algebra Semantics and Category Theory, Cambridge Tracts in Theoretical Computer Science, vol. 11, Cambridge University Press, Cambridge. Ebook edition (2010). Paperback edition (2009). Hardcover edition (1991).
Selected publications
References
Year of birth missing (living people)
Living people
Computer scientists
Software engineering researchers
Software testing people
Academic staff of the University of Hong Kong
Alumni of the London School of Economics
Fellows of the British Computer Society
Hong Kong people with disabilities | T.H. Tse | Technology | 847 |
27,145,688 | https://en.wikipedia.org/wiki/International%20Association%20for%20Environmental%20Philosophy | The International Association for Environmental Philosophy (IAEP) is a philosophical organization focused on the field of environmental philosophy.
Since 2004, it has published the peer-reviewed academic journal Environmental Philosophy.
See also
International Society for Environmental Ethics
External links
IAEP website
International environmental organizations
Environmental philosophy
Ethics organizations | International Association for Environmental Philosophy | Environmental_science | 56 |
6,613,536 | https://en.wikipedia.org/wiki/Animal%20culture | Animal culture can be defined as the ability of non-human animals to learn and transmit behaviors through processes of social or cultural learning.
Culture is increasingly seen as a process, involving the social transmittance of behavior among peers and between generations. It can involve the transmission of novel behaviors or regional variations that are independent of genetic or ecological factors.
The existence of culture in non-humans has been a contentious subject, sometimes forcing researchers to rethink "what it is to be human".
The notion of culture in other animals dates back to Aristotle in classical antiquity, and more recently to Charles Darwin, but the association of other animals' actions with the actual word 'culture' originated with Japanese primatologists' discoveries of socially-transmitted food behaviours in the 1940s. Evidence for animal culture is often based on studies of feeding behaviors, vocalizations, predator avoidance, mate selection, and migratory routes.
An important area of study for animal culture is vocal learning, the ability to make new sounds through imitation. Most species cannot learn to imitate sounds. Some can learn how to use innate vocalizations in new ways. Only a few species can learn new calls. The transmission of vocal repertoires, including some types of bird vocalization, can be viewed as social processes involving cultural transmission. Some evidence suggests that the ability to engage in vocal learning depends on the development of specialized brain circuitry, detected in humans, dolphins, bats and some birds. The lack of common ancestors suggests that the basis for vocal learning has evolved independently through evolutionary convergence.
Animal culture can be an important consideration in conservation management. As of 2020, culture and sociality were included in the aspects of the management framework of the Convention on the Conservation of Migratory Species of Wild Animals (CMS).
Background
Culture can be defined as "all group-typical behavior patterns, shared by members of animal communities, that are to some degree reliant on socially learned and transmitted information".
Organizational culture
One definition of culture, particularly in relation to the organizational aspect, is the utilization of "involvement, consistency, adaptation, and mission." Cultural traits that are indicators of a successful form of organization are more likely to be assimilated into our everyday lives. Organizations that utilize the four aforementioned aspects of culture are the ones that are the most successful. Therefore, cultures that are better able to involve their citizens towards a common goal have a much higher rate of effectiveness than those without a shared goal. A further definition of culture is "[s]ocially transmitted behavior patterns that serve to relate human communities to their ecological settings." This definition connects cultural behavior to the environment. Since culture is a form of adaptation to one's environment, it is mirrored in many aspects of our current and past societies.
Cultural sociology
Other researchers are currently exploring the idea that there is a connection between cultural sociology and psychology. Certain individuals are especially concerned with the analysis of studies connecting "identity, collective memory, social classification, logics of action, and framing." By the 1990s, views of what exactly culture is had been changing due to the convergence of sociological and psychological thought on the subject. Culture is specific to region, and no single umbrella definition or concept can truly capture its essence. Also referenced is the importance of symbols and rituals as cognitive building blocks for a psychological concept of shared culture.
Memes and cultural transmission
Richard Dawkins argues for the existence of a "unit of cultural transmission" called a meme. This concept of memes has become much more accepted as more extensive research has been done into cultural behaviors. Much as one can inherit genes from each parent, it is suggested that individuals acquire memes through imitating what they observe around them. The more relevant actions (actions that increase one's probability of survival), such as architecture and craftwork, are more likely to become prevalent, enabling a culture to form. The idea of memes as following a form of natural selection was first presented by Daniel Dennett. It has also been argued by Dennett that memes are responsible for the entirety of human consciousness. He claims that everything that constitutes humanity, such as language and music, is a result of memes.
Evolutionary culture
A closely related concept to memes is the idea of evolutionary culture. The concept of evolutionary culture gained greater acceptance due to the re-evaluation of the term by anthropologists. The broadening scope of evolution from simple genes to more abstract concepts, such as designs and behaviors makes the idea of evolutionary culture more plausible. Evolutionary culture theory is defined as "a theory of cultural phylogeny." The idea that all human culture evolved from one main culture, citing the interconnectedness of languages, has also been presented. There is, however, also the possibility for disparate ancestral cultures, in that the cultures observed today may potentially have stemmed from more than one original culture.
Culture in animals
According to the Webster's dictionary definition of culture, learning and transmission are the two main components of culture, specifically referencing tool making and the ability to acquire behaviors that will enhance one's quality of life. Using this definition, it is possible to conclude that other animals are just as likely to adopt cultural behaviors as humans. One of the first signs of culture in early humans was the utilization of tools. Chimpanzees have been observed using tools such as rocks and sticks to obtain better access to food. There are other learned activities that have been exhibited by other animals as well. Some examples of these activities that have been shown by varied animals are opening oysters, swimming, washing of food, and unsealing tin lids. This acquisition and sharing of behaviors correlates directly with the existence of memes. It especially reinforces the natural selection component, seeing as these actions employed by other animals are all mechanisms for making their lives easier, and therefore longer.
History of animal culture theory
Though the idea of 'culture' in other animals has only been around for just over half a century, scientists have been noting social behaviors of other animals for centuries. Aristotle was the first to provide evidence of social learning in the songs of birds. Charles Darwin first looked for evidence of imitation in other animals while attempting to prove his theory that the human mind had evolved from that of lower beings. Darwin was also the first to suggest what became known as social learning in attempting to explain the transmission of an adaptive pattern of behavior through a population of honey bees.
Whiten's Culture in Chimpanzees
Andrew Whiten, professor of Evolutionary and Developmental Psychology at the University of St. Andrews, contributed to the greater understanding of cultural transmission with his work on chimpanzees. In Cultural Traditions in Chimpanzees, Whiten created a compilation of results from seven long-term studies totaling 151 years of observation analyzing behavioral patterns in different communities of chimpanzees in Africa (read more about it below). The study expanded the notion that cultural behavior lies beyond linguistic mediation, and can be interpreted to include distinctive socially learned behavior such as stone-handling and sweet potato washing in Japanese macaques. The implications of their findings indicate that chimpanzee behavioral patterns mimic the distinct behavioral variants seen in different human populations in which cultural transmission has generally always been an accepted concept.
Cavalli-Sforza and Feldman models
Population geneticists Cavalli-Sforza & Feldman have also been frontrunners in the field of cultural transmission, describing behavioral "traits" as characteristics pertaining to a culture that are recognizable within that culture. Using a quantifiable approach, Cavalli-Sforza & Feldman were able to produce mathematical models for three forms of cultural transmission, each of which have distinct effects on socialization: vertical, horizontal, and oblique.
Vertical transmission occurs from parents to offspring and is described by a function giving the probability that parents of specific types produce offspring of their own type or of another type. Vertical transmission, in this sense, is similar to genetic transmission in biological evolution, as mathematical models for gene transmission account for variation. Vertical transmission also contributes strongly to the buildup of between-population variation.
Horizontal transmission, or non-vertical transmission is cultural transmission that occurs among individuals from the same generation.
Oblique transmission occurs to offspring from the generation to which their parents belong that is, from adults other than the offspring's parents, such as teachers.
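As a rough illustration of these three routes, the toy simulation below tracks a binary cultural trait through a population. It is only a sketch of the idea, not Cavalli-Sforza and Feldman's actual equations; all parameter values and simplifications (for example, a single parent per child) are invented for clarity:

```python
import random

# Toy simulation of the three transmission routes for a binary cultural
# trait. This sketches the idea only, not Cavalli-Sforza and Feldman's
# actual models; all parameters are invented.
# p_vert:  chance a child adopts the trait from a carrier parent
# p_obl:   chance of adopting it from a carrier non-parent adult (teacher)
# p_horiz: scaling of peer influence within the child's own generation

def next_generation(adults, n_children, p_vert, p_horiz, p_obl):
    children = []
    for _ in range(n_children):
        parent = random.choice(adults)    # single-parent simplification
        teacher = random.choice(adults)   # oblique route: any other adult
        has_trait = (parent and random.random() < p_vert) or \
                    (teacher and random.random() < p_obl)
        children.append(has_trait)
    # Horizontal route: children may still pick the trait up from peers.
    carrier_share = sum(children) / len(children)
    for i, c in enumerate(children):
        if not c and random.random() < p_horiz * carrier_share:
            children[i] = True
    return children

random.seed(1)
generation = [True] * 10 + [False] * 90   # 10% of founders carry the trait
for g in range(5):
    generation = next_generation(generation, 100,
                                 p_vert=0.9, p_horiz=0.3, p_obl=0.1)
    print(f"generation {g + 1}: {sum(generation)} of 100 carry the trait")
```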
Mechanisms of cultural transmission in animals
Cultural transmission, also known as cultural learning, is the process and method of passing on socially learned information. Within a species, cultural transmission is greatly influenced by how adults socialize with each other and with their young. Differences in cultural transmission across species have been thought to be largely affected by external factors, such as the physical environment, that may lead an individual to interpret a traditional concept in a novel way. The environmental stimuli that contribute to this variance can include climate, migration patterns, conflict, suitability for survival, and endemic pathogens. Cultural transmission can also vary according to different social learning strategies employed at the species and or individual level. Cultural transmission is hypothesized to be a critical process for maintaining behavioral characteristics in both humans and nonhuman animals over time, and its existence relies on innovation, imitation, and communication to create and propagate various aspects of animal behavior seen today.
Culture, when defined as the transmission of behaviors from one generation to the next, can be transmitted among animals through various methods. The most common of these methods include imitation, teaching, and language. Imitation has been found to be one of the most prevalent modes of cultural transmission in non-human animals, while teaching and language are much less widespread, with the possible exceptions of primates and cetaceans. Some research has suggested that teaching, as opposed to imitation, may be a characteristic of certain animals who have more advanced cultural capacities.
The likelihood of larger groups within a species developing and sharing these intra-species traditions with peers and offspring is much higher than that of one individual spreading some aspect of animal behavior to one or more members. Cultural transmission, as opposed to individual learning, is therefore a more efficient manner of spreading traditions and allowing members of a species to collectively inherit more adaptive behavior. This process by which offspring within a species acquires his or her own culture through mimicry or being introduced to traditions is referred to as enculturation. The role of cultural transmission in cultural evolution, then, is to provide the outlet for which organisms create and spread traditions that shape patterns of animal behavior visibly over generations.
Genetic vs. cultural transmission
Culture, which was once thought of as a uniquely human trait, is now firmly established as a common trait among animals and is not merely a set of related behaviors passed on by genetic transmission as some have argued. Genetic transmission, like cultural transmission, is a means of passing behavioral traits from one individual to another. The main difference is that genetic transmission is the transfer of behavioral traits from one individual to another through genes which are transferred to an organism from its parents during the fertilization of the egg. As can be seen, genetic transmission can only occur once during the lifetime of an organism. Thus, genetic transmission is quite slow compared to the relative speed of cultural transmission. In cultural transmission, behavioral information is passed through means of verbal, visual, or written methods of teaching. Therefore, in cultural transmission, new behaviors can be learned by many organisms in a matter of days and hours rather than the many years of reproduction it would take for a behavior to spread among organisms in genetic transmission.
Social learning
Culture can be transmitted among animals through various methods, the most common of which include imitation, teaching, and language. Imitation is one of the most prevalent modes of cultural transmission in non-human animals, while teaching and language are much less widespread. In a study on food acquisition techniques in meerkats (Suricata suricatta), researchers found evidence that meerkats learned foraging tricks through imitation of conspecifics. The experimental setup consisted of an apparatus containing food, with two possible methods that could be used to obtain the food. Naïve meerkats learned and used the method exhibited by a "demonstrator" meerkat trained in one of the two techniques. In this case, however, imitation is not clearly the mechanism of learning, given that the naïve meerkats could simply have been drawn to certain features of the apparatus by observing the "demonstrator" meerkat and from there discovered the technique on their own.
Teaching
Teaching is often considered one mechanism of social learning, and occurs when knowledgeable individuals of some species have been known to teach others. For this to occur, a teacher must change its behavior when interacting with a naïve individual and incur an initial cost from teaching, while an observer must acquire skills rapidly as a direct consequence.
Until the 1980s, teaching, or social learning, was a skill that was thought to be uniquely human. However, research conducted through the 1990s and beyond documented the existence of social learning among animal groups, and it is not limited to mammals. Many insects, for example, have been observed demonstrating various forms of teaching in order to obtain food. Ants, for instance, will guide each other to food sources through a process called "tandem running", in which an ant will guide a companion ant to a source of food. It has been suggested that the "pupil" ant is able to learn this route in order to obtain food in the future or teach the route to other ants. By the early 2000s, various studies had shown that cetaceans are able to transmit culture through teaching as well. Killer whales are known to "intentionally beach" themselves in order to catch and eat pinnipeds who are breeding on the shore. Mother killer whales teach their young to catch pinnipeds by pushing them onto the shore and encouraging them to attack and eat the prey. Because the mother killer whale is altering her behavior in order to help her offspring learn to catch prey, this is evidence of teaching and cultural learning. The intentional beaching of the killer whales, along with other cetacean behaviors such as the variations of songs among humpback whales and the sponging technique used by the bottlenose dolphin to obtain food, provides substantial support for the idea of cetacean cultural transmission.
Teaching is arguably the social learning mechanism that affords the highest fidelity of information transfer between individuals and generations, and allows a direct pathway through which local traditions can be passed down and transmitted.
Imitation
Imitation is often misinterpreted as merely the observation and copying of another's actions. This would be known as mimicry, because the repetition of the observed action is done for no other purpose than to copy the original doer or speaker. In the scientific community, imitation is rather the process in which an organism purposefully observes and copies the methods of another in order to achieve a tangible goal. Therefore, the identification and classification of animal behavior as being imitation has been very difficult. By the 2000s, research into imitation in animals had resulted in the tentative labeling of certain species of birds, monkeys, apes, and cetaceans as having the capacity for imitation. For example, a Grey parrot by the name of Alex underwent a series of tests and experiments at the University of Arizona in which scientist Irene Pepperberg judged his ability to imitate the human language in order to create vocalizations and object labels. Through the efforts of Pepperberg, Alex has been able to learn a large vocabulary of English words and phrases. Alex can then combine these words and phrases to make completely new words which are meaningless, but utilize the phonetic rules of the English language. Alex's capabilities of using and understanding more than 80 words, along with his ability to put together short phrases, demonstrates how birds, who many people do not credit with having deep intellect, can actually imitate and use rudimentary language skills in an effective manner. The results of this experiment culminated with the conclusion that the use of the English language to refer to objects is not unique to humans and is arguably true imitation, a basic form of cultural learning found in young children.
Language
Language is another key indicator of animals who have greater potential to possess culture. Though animals do not naturally use words like humans when they are communicating, the well-known parrot Alex demonstrated that even animals with small brains, but are adept at imitation, can have a deeper understanding of language after lengthy training. A bonobo named Kanzi has taken the use of the English language even further. Kanzi was taught to recognize words and their associations by using a lexigram board. Through observation of its mother's language training, Kanzi was able to learn how to use the lexigrams to obtain food and other items that he desired. Also, Kanzi is able to use his understanding of lexigrams to decipher and comprehend simple sentences. For example, when he was told to "give the doggie a shot," Kanzi grabbed a toy dog and a syringe and gave it a realistic injection. This type of advanced behavior and comprehension is what scientists have used as evidence for language-based culture in animals.
Primate culture
The beginning of the modern era of animal culture research in the middle of the 20th century came with the gradual acceptance of the term "culture" in referring to animals. In 1952, Japan's leading primatologist of the time, Kinji Imanishi, first introduced the idea of "kaluchua" or "pre-culture" in referring to the now famous potato-washing behavior of Japanese macaques.
In 1948, Imanishi and his colleagues began studying macaques across Japan, and began to notice differences among the different groups of primates, both in social patterns and feeding behavior. In one area, paternal care was the social norm, while this behavior was absent elsewhere. One of the groups commonly dug up and ate the tubers and bulbs of several plants, while monkeys from other groups would not even put these in their mouths. Imanishi reasoned that, "if one defines culture as learned by offspring from parents, then differences in the way of life of members of the same species belonging to different social groups could be attributed to culture." Following this logic, the differences Imanishi and his colleagues observed among the different groups of macaques may suggest that they had arisen as a part of the groups' unique cultures.
The most famous of these eating behaviors was observed on the island of Koshima, where one young female was observed carrying soiled sweet potatoes to a small stream, where she proceeded to wash off all of the sand and dirt before eating. This behavior was then observed in one of the monkey's playmates, then her mother and a few other playmates. The potato-washing eventually spread throughout the whole macaque colony. Imanishi introduced the Japanese term kaluchua which was later translated by Masao Kawai and others to refer to the behavior as "pre-culture" and as being acquired through "pre-cultural propagation". The researchers caution that "we must not overestimate the situation and say that 'monkeys have culture' and then confuse it with human culture." At this point, most of the observed behaviors in animals, like those observed by Imanishi, were related to survival in some way.
The first evidence of apparently arbitrary traditions came in the late-1970s, also in the behavior of primates. At this time, researchers McGrew and Tutin found a social grooming handclasp behavior to be prevalent in a certain troop of chimpanzees in Tanzania, but not found in other groups nearby. This grooming behavior involved one chimpanzee taking hold of the hand of another and lifting it into the air, allowing the two to groom each other's armpits. Though this would seem to make grooming of the armpits easier, the behavior actually has no apparent advantage. As the primatologist Frans de Waal explains from his later observations of the hand-clasp grooming behavior in a different group of chimpanzees, "A unique property of the handclasp grooming posture is that it is not required for grooming the armpit of another individual... Thus it appears to yield no obvious benefits or rewards to the groomers."
Prior to these findings, opponents of the idea of animal culture had argued that the behaviors being called cultural were simply behaviors that had evolved due to their importance to survival. After the identification of this first evidence of culture lacking an evolutionary advantage, scientists began to find differences in group behaviors or traditions in various groups of primates, specifically in Africa. More than 40 different populations of wild chimpanzees have been studied across Africa, among which many species-specific, as well as population-specific, behaviors have been observed. The researching scientists found 65 different categories of behaviors among these various groups of chimpanzees, including the use of leaves, sticks, branches, and stones for communication, play, food gathering or eating, and comfort. Each of the groups used the tools slightly differently, and this usage was passed from chimpanzee to chimpanzee within the group through a complex mix of imitation and social learning.
Chimpanzees
In 1999, Whiten et al. examined data from 151 years of chimpanzee observation in an attempt to discover how much cultural variation existed between populations of the species. The synthesis of their studies consisted of two phases, in which they (1) created a comprehensive list of cultural variant behavior specific to certain populations of chimpanzees and (2) rated the behavior as either customary – occurring in all individuals within that population; habitual – not present in all individuals, but repeated in several individuals; present – neither customary nor habitual but clearly identified; absent – instance of behavior not recorded and having no ecological explanation; ecological – absence of behavior attributable to ecological features or lack thereof in the environment; or of unknown origin. Their results were extensive: of the 65 categories of behavior studied, 39 (including grooming, tool usage and courtship behaviors) were found to be habitual in some communities but nonexistent in others.
Whiten et al. further made sure that these local traditions were not due to differences in ecology, and defined cultural behaviors as behaviors that are "transmitted repeatedly through social or observational learning to become a population-level characteristic". Eight years later, after "conducting large-scale controlled social-diffusion experiments with captive groups", Whiten et al. stated further that "alternative foraging techniques seeded in different groups of chimpanzees spread differentially...across two further groups with substantial fidelity".
This finding confirms not only that nonhuman species can maintain unique cultural traditions; it also shows that they can pass these traditions on from one population to another. The Whiten articles are a tribute to the unique inventiveness of wild chimpanzees, and help prove that humans' impressive capacity for culture and cultural transmission dates back to the now-extinct common ancestor we share with chimpanzees.
Similar to humans, social structure plays an important role in cultural transmission in chimpanzees. Victoria Horner conducted an experiment where an older, higher ranking individual and a younger, lower ranking individual were both taught the same task with only slight aesthetic modification. She found that chimpanzees tended to imitate the behaviors of the older, higher ranking chimpanzee as opposed to the younger, lower ranking individual when given a choice. It is believed that the older higher ranking individual had gained a level of 'prestige' within the group. This research demonstrates that culturally transmitted behaviors are often learned from individuals that are respected by the group.
The older, higher ranking individual's success in similar situations in the past led the other individuals to believe that their fitness would be greater by imitating the actions of the successful individual. This shows that not only are chimpanzees imitating behaviors of other individuals, they are choosing which individuals they should imitate in order to increase their own fitness. This type of behavior is very common in human culture as well. People will seek to imitate the behaviors of an individual that has earned respect through their actions. From this information, it is evident that the cultural transmission system of chimpanzees is more complex than previous research would indicate.
Chimpanzees have been known to use tools for as long as they have been studied. Andrew Whiten found that chimpanzees not only use tools, but also conform to using the same method as the majority of individuals in the group. This conformity bias is prevalent in human culture as well and is commonly referred to as peer pressure.
The results from the research of Victoria Horner and Andrew Whiten show that chimpanzee social structures and human social structures have more similarities than previously thought.
Cetacean culture
Second only to non-human primates, culture in species within the order Cetacea, which includes whales, dolphins, and porpoises, has been studied for numerous years. In these animals, much of the evidence for culture comes from vocalizations and feeding behaviors.
Cetacean vocalizations have been studied for many years, specifically those of the bottlenose dolphin, humpback whale, killer whale, and sperm whale. Since the early 1970s, scientists have studied these four species in depth, finding potential cultural attributes within group dialects, foraging, and migratory traditions. Hal Whitehead, a leading cetologist, and his colleagues conducted a study in 1992 of sperm whale groups in the South Pacific, finding that groups tended to be clustered based on their vocal dialects. The differences in the whales' songs among and between the various groups could not be explained genetically or ecologically, and thus was attributed to social learning. In mammals such as these sperm whales or bottlenose dolphins, the decision on whether an animal has the capacity for culture comes from more than simple behavioral observations. As described by ecologist Brooke Sergeant, "on the basis of life-history characteristics, social patterns, and ecological environments, bottlenose dolphins have been considered likely candidates for socially learned and cultural behaviors," due to being large-brained and capable of vocal and motor imitation.
In dolphins, scientists have focused mostly on foraging and vocal behaviors, though many worry that social functions for the behaviors have not yet been found. As with primates, many people are reluctantly, if only slightly, willing to accept the notion of cetacean culture when it is well evidenced, due to cetaceans' similarity to humans in having "long lifetimes, advanced cognitive abilities, and prolonged parental care."
Matrilineal whales
In the cases of three species of matrilineal cetaceans, pilot whales, sperm whales, and orcas (also known as killer whales), mitochondrial DNA nucleotide diversities are about ten times lower than in other species of whale. Whitehead found that this low mtDNA nucleotide diversity, coupled with high diversity in matrilineal whale culture, may be attributed to cultural transmission, since learned cultural traits can have the same effect as normal maternally inherited mtDNA. The divergence of the sympatric resident and transient ecotypes of orcas off Vancouver Island is attributed to differences in diet: the resident ecotype feeds on fish and a little squid, and the transient ecotype feeds on marine mammals.
Vocalizations have also been proven to be culturally acquired in orca and sperm whale populations, as evidenced by the distinct vocalization patterns maintained by members of these different populations even in cases where more than one population may occupy one home range. Even within the same community clan, the three southern resident orca pods maintain unique, stable dialects separate from each other's, though they associate and share some pulsed calls and whistles. The majority of their vocalizations are repetitions of the same calls, referred to as discrete or stereotyped calls, recorded since the 1960s and passed on by the orcas from generation to generation. A Southern Resident calf learns only the discrete calls used in its mother's pod, though it is exposed to other calls in the clan.
Further study is being done in the matrilineal whales to uncover the cultural transmission mechanisms associated with other advanced techniques, such as migration strategies, new foraging techniques, and babysitting.
Dolphins
By using a "process of elimination" approach, researchers Krutzen et al. reported evidence of culturally transmitted tool use in bottlenose dolphins (Tursiops sp.). It has been previously noted that tool use in foraging, called "sponging" exists in this species. "Sponging" describes a behavior where a dolphin will break off a marine sponge, wear it over its rostrum, and use it to probe for fish. Using various genetic techniques, Krutzen et al. showed that the behavior of "sponging" is vertically transmitted from the mother, with most spongers being female. Additionally, they found high levels of genetic relatedness from spongers, suggesting recent ancestry and the existence of a phenomenon researchers call a "sponging eve".
In order to make a case for cultural transmission as the mode of behavioral inheritance in this case, Krutzen et al. needed to rule out possible genetic and ecological explanations. They refer to data that indicate both spongers and nonspongers use the same habitat for foraging. Using mitochondrial DNA data, Krutzen et al. found a significant non-random association between the type of mitochondrial DNA pattern and sponging. Because mitochondrial DNA is inherited maternally, this result suggests sponging is passed on from the mother.
In a later study, one more possible explanation for the transmission of sponging was ruled out in favor of cultural transmission. Scientists from the same lab looked at the possibilities that (1) the tendency for "sponging" was due to a genetic difference in diving ability and (2) these genes were under selection. From a test of 29 spongers and 54 nonspongers, the results showed that the coding mitochondrial genes were not a significant predictor of sponging behavior. Additionally, there was no evidence of selection in the investigated genes.
Rat culture
Notable research has been done with black rats and Norwegian rats. Among studies of rat culture, the most widely discussed research is that performed by Joseph Terkel in 1991 on a species of black rat that he had originally observed in the wild in Israel. Terkel conducted an in-depth study aimed at determining whether the observed behavior, the systematic stripping of pine cone scales from pine cones prior to eating, was a socially acquired behavior, as this action had not been observed elsewhere. The experimentation with and observation of these black rats was among the first work to integrate field observations with laboratory experiments to analyze the social learning involved. From the combination of these two types of research, Terkel was able to analyze the mechanisms involved in this social learning and to determine that this eating behavior resulted from a combination of ecology and cultural transmission, as the rats could not figure out how to eat the pinecones without being "shown" by mature rats.
Though this research is fairly recent, it is often used as a prime example of evidence for culture in non-primate, non-cetacean beings. Animal migration may be in part cultural; released ungulates have to learn over generations the seasonal changes in local vegetation.
In the black rat (Rattus rattus), social transmission appears to be the mechanism of how optimal foraging techniques are transmitted. In this habitat, the rats' only source of food is pine seeds that they obtain from pine cones. Terkel et al. studied the way in which the rats obtained the seeds and the method that this strategy was transmitted to subsequent generations. Terkel et al. found that there was an optimal strategy for obtaining the seeds that minimized energy inputs and maximized outputs. Naïve rats that did not use this strategy could not learn it from trial and error or from watching experienced rats. Only young offspring could learn the technique. Additionally, from cross-fostering experiments where pups of naïve mothers were placed with experienced mothers and vice versa, those pups placed with experienced mothers learned the technique while those with naïve mothers did not. This result suggests that this optimal foraging technique is socially rather than genetically transmitted.
Avian culture
Birds have been a strong study subject on the topic of culture due to their observed vocal "dialects" similar to those studied in the cetaceans. These dialects were first discovered by zoologist Peter Marler, who noted the geographic variation in the songs of various songbirds. Many scientists have found that, in attempting to study these animals, they approach a stumbling block in that it is difficult to understand these animals' societies due to their being so different from our own.
Despite this hindrance, evidence for differing dialects among songbird populations has been discovered, especially in sparrows, starlings, and cowbirds. In these birds, scientists have found strong evidence for imitation-based learning, one of the main types of social learning. Though the songbirds obviously learn their songs through imitating other birds, many scientists remain skeptical about the correlation between this and culture: "...the ability to imitate sound may be as reflexive and cognitively uncomplicated as the ability to breathe. It is how imitation affects and is affected by context, by ongoing social behavior, that must be studied before assuming its explanatory power." The scientists have found that simple imitation does not itself lay the ground for culture, whether in humans or birds, but rather it is how this imitation affects the social life of an individual that matters.
Examples of culturally transmitted behaviors in birds
The complexity of several avian behaviors can be explained by the accumulation of cultural traits over many generations.
Bird song
In an experiment on vocal behavior in birds, researchers Marler and Tamura found evidence of song dialects in a sparrow species known as Zonotrichia leucophrys. Located in the eastern and southern parts of North America, these white-crowned song-birds exhibit learned vocal behavior. Marler and Tamura found that while song variation existed between individual birds, each population of birds had a distinct song pattern that varied according to geographical location. For this reason, Marler and Tamura called the patterns of each region a "dialect"; however, this term has since been disputed, as different types of variation in bird song are much less distinct than dialects in human language.
By raising twelve male sparrows in twelve different acoustic settings and observing effects on their verbal behavior, Marler and Tamura found that sparrows learned songs during the first 100 days of their lives. In this experimental setting, male birds in acoustic chambers were exposed to recorded sounds played through a loudspeaker. They also showed that white-crowned sparrows only learn songs recorded from other members of their species. Marler and Tamura noted that this case of cultural transmission was interesting because it required no social bond between the learner and the emitter of sound (since all sounds originated from a loudspeaker in their experiments).
However, the presence of social bonds strongly facilitates song imitation in certain songbirds. Zebra finches rarely imitate songs played from a loudspeaker, but they regularly imitate songs of an adult bird after only a few hours of interaction. Interestingly, imitation in zebra finches is inhibited when the number of siblings (pupils) increases.
Innovative foraging
In 20th-century Britain, bottled milk was delivered to households in the early morning by milkmen and left on doorsteps to be collected. Birds such as tits (Paridae) began to attack the bottles, opening the foil or cardboard lids and drinking the cream off the top. It was later shown that this innovative behavior arose independently at several different sites and spread horizontally (i.e. between living members) in the existing population. Later experimental evidence showed that conformity may lead to the horizontal spread of innovative behaviors in wild birds, and that this may in turn result in a lasting cultural tradition.
A spread of new foraging behaviors also occurred in an Argentinian population of kelp gulls (Larus dominicanus). During the 20th century, individuals in this population began to non-fatally wound the backs of swimming whales with their beaks, feeding on the blubber and creating deeper lesions in areas that were already wounded. Aerial photographs showed that gull-induced lesions on local whales increased in frequency from 2% to 99% from 1974 to 2011, and that this behavior was not observed in any other kelp gull populations other than two isolated incidents.
In New South Wales, researchers and citizen scientists were able to track the spread of lid-flipping skills as cockatoos learned from each other to open garbage bins. Bin-opening spread more quickly to neighbouring suburbs than to suburbs further away. In addition, birds in different areas developed their own variants for accomplishing the complex task.
Migration
Juvenile birds that migrate in flocks may learn to navigate accurately through cultural transmission of route choice skills from older birds. Cultural inheritance of migration patterns has been shown in bustards (Otis tarda), and the pattern of inheritance was shown to depend on social structures in the flock.
Nest construction
White-browed sparrow-weavers show cultural traditions in the construction of their communal nests. These nests vary in shape and size among different groups, even when the groups live in close proximity. This variation is not influenced by genetic factors or environmental conditions, but rather reflects group-specific preferences that are passed down through generations.
Avian social networks
Social networks are a specific mechanism of cultural transmission in birds. Information learned in social contexts can allow birds to make decisions that lead to increased fitness. A great deal of research has focused on the communication of new foraging locations or behaviors through social networks. These networks are currently being analyzed through computational methods such as network-based diffusion analysis (NBDA).
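The core assumption behind NBDA can be illustrated with a toy simulation: under social transmission, a naïve individual's chance of acquiring a behaviour in each time step grows with its number of informed network neighbours, and the method asks whether an observed order of acquisition fits that assumption better than asocial learning alone. A minimal sketch in Python (the network, rate constants, and all names here are hypothetical illustrations, not taken from any cited study):

```python
import numpy as np

# Sketch of the diffusion model underlying NBDA: hazard of learning
# scales with the number of informed neighbours in a social network.
rng = np.random.default_rng(1)

n = 30
adj = (rng.random((n, n)) < 0.1).astype(float)   # random social ties
adj = np.triu(adj, 1)
adj = adj + adj.T                                # symmetric, no self-ties

base_rate, social_rate = 0.01, 0.2               # asocial vs social learning
informed = np.zeros(n, dtype=bool)
informed[0] = True                               # seed one knowledgeable bird

order = [0]
for _ in range(200):                             # discrete time steps
    exposure = adj @ informed                    # informed neighbours per bird
    p = 1 - np.exp(-(base_rate + social_rate * exposure))
    newly = (~informed) & (rng.random(n) < p)
    informed |= newly
    order.extend(map(int, np.flatnonzero(newly)))
    if informed.all():
        break

# NBDA fits base_rate vs social_rate to an observed acquisition order;
# a significant social component indicates diffusion along network ties.
print("acquisition order:", order)
```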
In wild songbirds, social networks are a mechanism for information transmission both within and between species. Interspecific networks (i.e. networks including birds of different species) were shown to exist in multispecies flocks containing three different types of tits whose niches overlapped. In this study, knowledge about new feeding areas spread through social interactions: more birds visited the new area than the number of birds that discovered the area independently. The researchers noted that information likely travelled faster among members of the same species (conspecifics), but that individuals did not depend solely on conspecifics for transmission. Another study on army-ant-following birds has also evidenced interspecific transmission of foraging information.
A 2016 study used RFID identification transponders to experimentally manipulate avian social networks: this scanner technology allowed them to restrict access to feeders for some birds and not others. Their data showed that individuals are more likely to learn from those who were able to enter the same feeding area as them. Additionally, the existing "paths" of information transmission were altered following segregation during feeding: this was attributed to changes in the population's social network.
Others have been able to predict the pattern of information transmission among individuals based on a preexisting social network. In this study, social interactions of ravens (Corvus corax) were first analyzed to create a comprehensive network. Then, the order in which individuals learned task-solving behavior from a trained tutor was compared with the network. The researchers not only found that the pattern of learning reflected the network that they had built, but also that different types of social connections (such as "affiliative interactions" and "aggressive interactions") were associated with different rates of information transmission and observation.
Conformity in avian culture
Bartlett and Slater observed call convergence (i.e. conformity) in budgerigars introduced into groups with different flock-specific calls than their own. They also found that the original calls of flock members did not change significantly during this process.
Conformity is one mechanism through which innovative behaviors can become embedded in culture. In an experimental setting, tits preferentially adopted the locally popular method of opening a two-action puzzle box even after discovering the other possible way of accessing the food. This formed diverging local traditions when different populations were seeded with birds specifically trained in one method.
Other research showed that although conformity has a strong influence on behaviors adopted by birds, the local tradition can be abandoned in favor of an analogous behavior which gives higher reward. This showed that while conformity is a beneficial mechanism for quickly establishing traditions, unhelpful traditions will not necessarily be adhered to in the presence of a better alternative.
In some cases, conformity-based aggression may benefit individuals who conform to traditions. Researchers used the framework of sexual selection and conformism in songbird song types to model territorial aggression against individuals with non-conforming song types. Their model showed that aggressors won more frequently when targeting non-conformers (than in un-targeted or random aggression). They also found that alleles for conformity-enforcement propagated more effectively than alleles for tolerance of non-conformity.
Finally, other species of birds have been observed to conform to the personality of other individuals in their presence. Gouldian finches (Erythrura gouldiae) exist in red- and black-headed subtypes, and these subtypes have been shown to have different levels of boldness (measured by the time taken to explore new areas, and other similar tests). Experiments placing black-headed birds (known to be less bold) in the company of red-headed birds (known to be more bold) resulted in the black-headed bird performing "bolder" behaviors, and red-headed birds became "shyer" in the presence of black-headed ones. The experimenters hypothesized that this individual-level conformity could lead to stronger social cohesion.
Fish culture
Evidence for cultural transmission has also been shown in wild fish populations. Scientists Helfman and Schultz conducted translocation experiments with French grunts (Haemulon flavolineatum) in which they took fish native to a specific schooling site and transported them to other sites. This species uses distinct, traditional migration routes to travel to schooling sites on coral reefs. These routes persisted past one generation, so by relocating the fish to different sites, Helfman and Schultz wanted to see whether the new fish could learn that site's migration route from the resident fish. Indeed, this is what they found: the newcomers quickly learned the traditional routes and schooling sites. But when residents were removed under similar conditions, the new fish did not use the traditional route and instead used new routes, suggesting that the behavior could not be transmitted once the opportunity for learning was no longer there.
In a similar experiment looking at mating sites in blueheaded wrasse (Thalassoma bifasciatum), researcher Warner found that individuals chose mating sites based on social traditions and not based on the resource quality of the site. Warner found that although mating sites were maintained for four generations, when entire local populations were translocated elsewhere, new sites were used and maintained.
Controversies and criticisms
A popular method of approaching the study of animal culture (and its transmission) is the "ethnographic method," which argues that culture causes the geographical differences in the behavioral repertoires of large-brained mammals. Some researchers argue this downplays the roles that ecology and genetics play in influencing behavioral variation from population to population within a species.
Culture is just one source of adaptive behavior an organism exhibits to better exploit its environment. When behavioral variation reflects differential phenotypic plasticity, it is due more to ecological pressures than cultural ones. In other words, when an animal changes its behavior over its lifespan, this is most often a result of changes in its environment. Furthermore, animal behavior is also influenced by evolved predispositions, or genetics. It is very possible that "correlation between distance between sites and 'cultural difference' might reflect the well-established correlation between genetic and geographical distances". The farther apart two populations of a species are, the fewer genetic traits they will share in common, and this may be one source of variance in culture.
Another argument against the "ethnographic method" is that it is impossible to prove that there are absolutely no ecological or genetic factors in any behavior. However, this criticism can also be applied to studies of human culture. Though culture has long been thought to arise and remain independent of genetics, the constraints on the propagation and innovation of cultural techniques inevitably caused by the genome of each respective animal species have led to the theory of gene-culture coevolution, which asserts that "cognitive, affective, and moral capacities" are the product of an evolutionary dynamic involving interactions between genes and culture over extended periods of time. The concept behind gene-culture coevolution is that, though culture plays a huge role in the progression of animal behavior over time, the genes of a particular species have the ability to affect the details of the corresponding culture and its ability to evolve within that species.
Culture can also contribute to differences in behavior, but like genes and environments, it carries different weight in different behaviors. As Laland and Janik explain, "to identify cultural variation, not only is it not sufficient to rule out the possibility that the variation in behavior constitutes unlearned responses to different selection pressures [from the environment], but it is also necessary to consider the possibility of genetic variation precipitating different patterns of learning." Gene-culture coevolution, much like the interaction between cultural transmission and environment, both serve as modifiers to the original theories on cultural transmission and evolution that focused more on differences in the interactions between individuals.
Unanswered questions and future areas of exploration
In the study of social transmissions, one of the important unanswered questions is an explanation of how and why maladaptive social traditions are maintained. For example, in one study on social transmission in guppies (Poecilia reticulata), naïve fish preferred taking a long, energetically costly route to a feeder that they had learned from resident fish rather than take a shorter route. These fish were also slower to learn the new, quicker route compared to naïve fish that had not been trained in the long route. In this case, not only is the social tradition maladaptive, but it also inhibits the acquisition of adaptive behavior.
See also
References
Further reading
External links
Dolphins teach their children to use sponges
Culture's not only human
Animal Culture
DeWaal serves up idea of animal culture
Detailed article on defining culture
What is Culture? - Washington State University
Define Culture Compilation of 100+ user submitted definitions of culture from around the globe
Animal communication
Ethology | Animal culture | Biology | 9,530 |
4,096,680 | https://en.wikipedia.org/wiki/Tunnel%20washer | A tunnel washer, also called a continuous batch washer, is an industrial washing machine designed specifically to handle heavy loads of laundry.
Inside the machine, a large helical (Archimedean) screw divides a long rotating drum into a series of pockets. The screw is made of perforated metal, so items progress through the washer in one direction while water and washing chemicals flow through in the opposite direction. Thus, the linen moves through pockets of progressively cleaner water and fresher chemicals. Soiled linen can be continuously fed into one end of the tunnel while clean linen emerges from the other.
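The counter-current principle can be illustrated with a toy calculation: because fresh water enters at the exit end and flows backward, each pocket the linen reaches holds cleaner water than the last, so a batch sheds a roughly constant fraction of its remaining soil at every transfer. A minimal sketch in Python (the pocket count, soil units, and removal fraction are hypothetical):

```python
# Toy model of counter-current washing: linen advances one pocket per
# transfer while water flows the opposite way, keeping removal efficient.
POCKETS = 8
REMOVAL = 0.5                 # fraction of remaining soil shed per pocket

soil = 100.0                  # arbitrary soil units on the incoming batch
for pocket in range(POCKETS):
    soil *= 1.0 - REMOVAL     # cleaner water in each successive pocket
    print(f"after pocket {pocket + 1}: {soil:6.2f} soil units remain")

# Geometric decay: 100 * 0.5**8 ~= 0.39 units remain on the clean linen.
```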
Originally, one of the machine's major drawbacks was the necessity of using one wash formula for all items. Modern computerized tunnel washers can monitor and adjust the chemical levels in individual pockets, effectively overcoming this problem.
See also
Washing machine
References
Laundry washing equipment
Machines | Tunnel washer | Physics,Technology,Engineering | 154 |
44,369,616 | https://en.wikipedia.org/wiki/Acanthocystis%20turfacea%20chlorella%20virus%201 | Acanthocystis turfacea chlorella virus 1 (ATCV-1), also called Chlorovirus ATCV-1 or Chlorella virus ATCV-1, is a species of giant double-stranded DNA virus in the genus Chlorovirus.
The host of ATCV-1 is Chlorella heliozoae; it was demonstrated that "ATCV-1 neither attaches to nor infects" Chlorella variabilis.
Human infection
DNA from ATCV-1 has been isolated from the mucous membranes of the noses of humans. In both humans and mice, the presence of ATCV-1 on the oropharyngeal mucosa was associated with lower scores in tests of cognitive and motor skills. Injection of purified algal virus ATCV-1 intracranially results in long-lasting cognitive and behavioural effects in mice via induction of inflammatory factors.
References
External links
Phycodnaviridae | Acanthocystis turfacea chlorella virus 1 | Biology | 208 |
1,315,438 | https://en.wikipedia.org/wiki/Critical%20area%20%28aeronautics%29 | In aviation, a critical area refers to a designated area of an airport that all aircraft, vehicles, persons or physical obstructions must remain clear of when one or more Instrument Landing Systems (ILS) are in use, to protect against signal interference or attenuation that may lead to navigation errors, or accident. Critical areas also protect the ILS system's internal monitoring.
ILS technology delivers two main types of information to pilots. These types include the glideslope (vertical location relative to the designed glide path) and the localizer (lateral position relative to the designed approach course). Each type of information is broadcast using a separate antenna array and each type has a specific critical area:
Localizer critical area – aircraft/vehicles/persons or physical obstructions are not authorized in or over the critical area when an arriving aircraft is between the ILS final approach fix and the airport.
Glideslope critical area – aircraft/vehicles/persons or physical obstructions are not authorized in or over the critical area when an arriving aircraft is between the ILS final approach fix and the airport unless the aircraft has reported the airport in sight and is circling or sidestepping to land on a runway other than the ILS runway.
For practical purposes, these two areas are combined into the ILS critical area and identified by signs and pavement markings.
During times of reduced ceilings and visibility (ceiling below 800 feet or visibility below 2 miles) or during ILS autoland (coupled) approaches, pilots are expected to:
Before takeoff – stop aircraft before entering the critical area while waiting for takeoff.
After landing – move the aircraft out of the critical area before stopping to receive taxi instructions from the ground controller.
Much larger than the critical area is the sensitive area. Aircraft and vehicles are not allowed in this area when low visibility procedures are in force, since aircraft autoland during this time and therefore the accuracy of the guidance signals provided by the ILS is absolutely critical.
Multipathing is a potential source of error in the ILS system, which may affect the glideslope and/or the localizer. It occurs when the radio signals reaching the aircraft are distorted because a large metal object moves into the radiation zone of the transmitter, such as when another aircraft is flying ahead, or when a taxiing aircraft or truck enters the ILS critical area.
Air navigation
Airport infrastructure | Critical area (aeronautics) | Engineering | 467 |
2,983,497 | https://en.wikipedia.org/wiki/Builders%27%20rites | Builders' rites are ceremonies attendant on the laying of foundation stones, including ecclesiastical, masonic or other traditions connected with foundations or other aspects of construction.
One such custom is that of placing a few coins, newspapers, etc. within a cavity beneath the stone. Should the stone later be removed, the relics may be found. Though this tradition is still practiced, such memorials are likely deposited in the hope that they will never be disturbed. Another such rite is topping out, when the last beam (or its equivalent) is placed atop a structure during its construction.
History
Historians and folklorists in the 19th and early 20th centuries were fascinated by possible "foundation sacrifices". Jacob Grimm remarked, "It was often thought necessary to entomb live animals and even men in the foundation, on which the structure was to be raised, to secure immovable stability." Sabine Baring-Gould likewise claimed, "The old pagan laid the foundation of his house and fortress in blood." The 19th-century Folk-Lore Journal claimed that "under the walls of two round towers in Ireland (the only ones examined) human skeletons [were] discovered." In the 15th century, the wall of Holsworthy church was built over a living human being, and when this became unlawful, images of living beings were substituted.
References to this practice can be found in Greek folk culture in a poem about "Arta's bridge". According to the poem, the wife of the chief builder was sacrificed to establish a good foundation for a bridge that was of grave importance to the secluded city of Arta. The actual bridge was constructed in 1602. A similar legend appears in the Romanian folk poem Meșterul Manole, about the building of the church in the earliest Wallachian capital city.
See also
Bay Bridge Troll
Builder's signature
Ceremonial ship launching
Cornerstone
Foundation deposit
Groundbreaking
Hitobashira
Masonic manuscripts
Time capsule
Topping out
Votive offering
References
Further reading
Building
Ceremonies
Building engineering
History of construction
Rituals attending construction | Builders' rites | Engineering | 403 |
70,037,474 | https://en.wikipedia.org/wiki/Bdellovibrionota | Bdellovibrionota is a phylum of bacteria.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
See also
List of bacterial orders
List of bacteria genera
References
Bacteria phyla | Bdellovibrionota | Biology | 70 |
18,640 | https://en.wikipedia.org/wiki/List%20of%20Mac%20models%20grouped%20by%20CPU%20type | This list of Mac models grouped by CPU type contains all central processing units (CPUs) used by Apple Inc. for their Mac computers. It is grouped by processor family, processor model, and then chronologically by Mac models.
Motorola 68k
Motorola 68000
The Motorola 68000 was the first Apple Macintosh processor. It has 32-bit CPU registers, a 24-bit address bus, and a 16-bit data path; Motorola referred to it as a "16-/32-bit microprocessor."
Motorola 68020
The Motorola 68020 was the first 32-bit Mac processor, first used on the Macintosh II. The 68020 has many improvements over the 68000, including an instruction cache, and was the first Mac processor to support a paged memory management unit, the Motorola 68851.
The Macintosh LC configured the 68020 to use a 16-bit system bus with ASICs that limited RAM to 10 MB (as opposed to the 32-bit limit of 4 GB).
Motorola 68030
The Motorola 68030 was the first Mac processor with an integrated paged memory management unit, allowing for virtual memory. Another improvement over the 68020 was the addition of a data cache.
Motorola 68040
The Motorola 68040 has improved per-clock performance compared to the 68030, as well as larger instruction and data caches, and was the first Mac processor with an integrated floating-point unit.
The MC68LC040 version was less expensive because it omitted the floating-point unit.
PowerPC
PowerPC 601
The PowerPC 601 was the first Mac processor to support the 32-bit PowerPC instruction set architecture.
PowerPC 603
PowerPC 604
The PowerPC 604e was the first Mac processor available in a symmetric multiprocessing (SMP) configuration.
PowerPC G3
PowerPC G4
The PowerPC 7400 was the first Mac processor to include an AltiVec vector processing unit.
The PowerPC 7455 was the first Mac processor over 1 GHz.
PowerPC G5
The PowerPC 970 ("G5") was the first 64-bit Mac processor.
The PowerPC 970MP was the first dual-core Mac processor and the first to be found in a quad-core configuration. It was also the first Mac processor with partitioning and virtualization capabilities.
Apple only used three variants of the G5, and soon moved entirely onto Intel architecture.
Intel x86
Overview
P6
Yonah was the first Mac processor to support the IA-32 instruction set architecture, in addition to the MMX, SSE, SSE2, and SSE3 extension instruction sets.
The Core Solo was a Core Duo with one of the two cores disabled.
Core
Woodcrest added support for the SSSE3 instruction set.
Merom was the first Mac processor to support the x86-64 instruction set, as well as the first 64-bit processor to appear in a Mac notebook.
Clovertown was the first quad-core Mac processor and the first to be found in an 8-core configuration.
Penryn
Penryn added support for a subset of SSE4 (SSE4.1).
Nehalem
Bloomfield and Gainestown introduced a number of notable features for the first time in any Mac processors:
Integrated memory controllers (with on-die DMI or QPI).
Simultaneous multithreading (branded as Hyper-threading).
Full support for the SSE4 instruction set (SSE4.2).
Support for Intel Turbo Boost.
Four cores on a single die rather than a multi-chip module of two dual-core dies.
Westmere
Arrandale introduced Intel HD Graphics, an on-die integrated GPU.
Sandy Bridge
Sandy Bridge added support for Intel Quick Sync Video, a dedicated on-die video encoding and decoding core. It was also the first quad-core processor to appear in a Mac notebook.
Ivy Bridge
Haswell
The Crystal Well variant used in some MacBook Pros contains an on-package L4 cache shared between the CPU and integrated graphics.
Broadwell
Skylake
Kaby Lake
Coffee Lake
Coffee Lake was the first 6-core processor to appear in a Mac notebook.
Cascade Lake
Comet Lake
Ice Lake
Ice Lake (with Sunny Cove cores) is a 10th-generation chip.
Apple silicon
M1
The M1 is a system on a chip fabricated by TSMC on the 5 nm process and contains 16 billion transistors. It was the first Mac processor designed in-house by Apple and the first to use the ARM instruction set architecture. It has 8 CPU cores (4 performance and 4 efficiency), up to 8 GPU cores, and a 16-core Neural Engine, as well as LPDDR4X memory with a bandwidth of 68 GB/s. The M1 Pro and M1 Max SoCs are fabricated by TSMC on the 5 nm process and contain 33.7 and 57 billion transistors respectively. Both have 10 CPU cores (8 performance and 2 efficiency) and a 16-core Neural Engine.
The M1 Pro and M1 Max have a 16-core and 32-core GPU, and a 256-bit and 512-bit LPDDR5 memory bus supporting 200 and 400 GB/s bandwidth respectively. Both chips were first introduced in the MacBook Pro in October 2021.
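The headline bandwidth figures follow directly from bus width multiplied by the memory transfer rate. A small Python check (the 6,400 MT/s LPDDR5 transfer rate is an assumption based on widely reported specifications, not an Apple-published figure):

```python
def peak_bandwidth_gb_s(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) * transfers per second."""
    return (bus_bits / 8) * mt_per_s * 1e6 / 1e9

# Assuming LPDDR5 at 6400 MT/s on the M1 Pro and M1 Max memory buses:
print(peak_bandwidth_gb_s(256, 6400))   # 204.8 -> marketed as 200 GB/s
print(peak_bandwidth_gb_s(512, 6400))   # 409.6 -> marketed as 400 GB/s
```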
The M1 Ultra is a processor combining two M1 Max chips in one package. It was available exclusively in the highest-end variants of the Mac Studio, released on March 18, 2022. All parameters of the M1 Max are doubled in the M1 Ultra, as it is essentially two M1 Max chips operating in parallel; it is, however, packaged as a single processor (physically larger than a Socket AM4 AMD Ryzen processor) and appears as one M1 Ultra processor in macOS.
M2
The M2 is a system on a chip fabricated by TSMC on an enhanced 5 nm process, containing 20 billion transistors. It has 8 CPU cores (4 performance and 4 efficiency), up to 10 GPU cores, and a 16-core Neural Engine, as well as LPDDR5 memory with a bandwidth of 100 GB/s. The M2 Pro and M2 Max SoCs are fabricated by TSMC on an enhanced 5 nm process and contain 40 and 67 billion transistors respectively. Both have 12 CPU cores (8 performance and 4 efficiency) and a 16-core Neural Engine.
The M2 Pro and M2 Max have a 19-core and 38-core GPU, and a 256-bit and 512-bit LPDDR5 memory bus supporting 200 and 400 GB/s bandwidth respectively. Both chips were first introduced in the MacBook Pro in January 2023.
The M2 Ultra is a processor combining two M2 Max dies in one package. It is available in the highest-end variants of the Mac Studio as well as the Mac Pro, both released on June 13, 2023.
M3
The M3 is a system on a chip fabricated by TSMC on the 3 nm process, containing 25 billion transistors. It has 8 CPU cores (4 performance and 4 efficiency), up to 10 GPU cores, and a 16-core Neural Engine, as well as LPDDR5 memory with a bandwidth of 100 GB/s. The M3 Pro and M3 Max SoCs are fabricated by TSMC on the 3 nm process and contain 37 and 92 billion transistors respectively. The M3 Pro has 12 CPU cores (6 performance and 6 efficiency), while the M3 Max has 16 CPU cores (12 performance and 4 efficiency); both have a 16-core Neural Engine.
The M3 Pro and M3 Max have an 18-core and 40-core GPU, and a 192-bit and 512-bit LPDDR5 memory bus supporting 150 and 400 GB/s bandwidth respectively. Both chips were first introduced in the MacBook Pro in October 2023.
M4
The M4 is a system on a chip fabricated by TSMC on an enhanced 3 nm process, containing 28 billion transistors. It has 10 CPU cores (4 performance and 6 efficiency), up to 10 GPU cores, and a 16-core Neural Engine, as well as LPDDR5X memory with a bandwidth of 120 GB/s. The M4 Pro and M4 Max SoCs are fabricated by TSMC on an enhanced 3 nm process. The M4 Pro has 14 CPU cores (10 performance and 4 efficiency), while the M4 Max has 16 CPU cores (12 performance and 4 efficiency); both have a 16-core Neural Engine.
The M4 Pro and M4 Max have a 20-core and 40-core GPU, and a 256-bit and 512-bit LPDDR5X memory bus supporting 273 and 546 GB/s bandwidth respectively. Both chips were first introduced in the MacBook Pro in October 2024.
See also
Mac (computer)
List of Mac models
Notes
References
Sources
Specifications, Apple, Inc.
Ian Page and contributors, MacTracker.
Glen Sanford, Apple History, apple-history.com.
Dan Knight, Computer Profiles, LowEndMac, Cobweb Publishing, Inc.
Product Specifications, Intel, Inc.
CPU
Apple Macintosh CPU
Macintosh models grouped by CPU type | List of Mac models grouped by CPU type | Technology | 1,880 |
678,078 | https://en.wikipedia.org/wiki/Inclusive%20fitness | Inclusive fitness is a conceptual framework in evolutionary biology first defined by W. D. Hamilton in 1964. It is primarily used to aid the understanding of how social traits are expected to evolve in structured populations. It involves partitioning an individual's expected fitness returns into two distinct components: direct fitness returns - the component of a focal individual’s fitness that is independent of who it interacts with socially; indirect fitness returns - the component that is dependent on who it interacts with socially. The direct component of an individual's fitness is often called its personal fitness, while an individual’s direct and indirect fitness components taken together are often called its inclusive fitness.
Under an inclusive fitness framework direct fitness returns are realised through the offspring a focal individual produces independent of who it interacts with, while indirect fitness returns are realised by adding up all the effects our focal individual has on the (number of) offspring produced by those it interacts with weighted by the relatedness of our focal individual to those it interacts with. This can be visualised in a sexually reproducing system (assuming identity by descent) by saying that an individual's own child, who carries one half of that individual's genes, represents one offspring equivalent. A sibling's child, who will carry one-quarter of the individual's genes, will then represent 1/2 offspring equivalent (and so on - see coefficient of relationship for further examples).
Neighbour-modulated fitness is the conceptual inverse of inclusive fitness. Where inclusive fitness calculates an individual’s indirect fitness component by summing the fitness that focal individual receives through modifying the productivities of those it interacts with (its neighbours), neighbour-modulated fitness instead calculates it by summing the effects an individual’s neighbours have on that focal individual’s productivity. When taken over an entire population, these two frameworks give functionally equivalent results. Hamilton’s rule is a particularly important result in the fields of evolutionary ecology and behavioral ecology that follows naturally from the partitioning of fitness into direct and indirect components, as given by inclusive and neighbour-modulated fitness. It enables us to see how the average trait value of a population is expected to evolve under the assumption of small mutational steps.
Kin selection is a well known case whereby inclusive fitness effects can influence the evolution of social behaviours. Kin selection relies on positive relatedness (driven by identity by descent) to enable individuals who positively influence the fitness of those they interact with at a cost to their own personal fitness, to outcompete individuals employing more selfish strategies. It is thought to be one of the primary mechanisms underlying the evolution of altruistic behaviour, alongside the less prevalent reciprocity (see also reciprocal altruism), and to be of particular importance in enabling the evolution of eusociality among other forms of group living. Inclusive fitness has also been used to explain the existence of spiteful behaviour, where individuals negatively influence the fitness of those they interact with at a cost to their own personal fitness.
Inclusive fitness and neighbour-modulated fitness are both frameworks that leverage the individual as the unit of selection. It is from this that the gene-centered view of evolution emerged: a perspective that has facilitated much of the work done into the evolution of conflict (examples include parent-offspring conflict, interlocus sexual conflict, and intragenomic conflict).
Overview
The British evolutionary biologist W. D. Hamilton showed mathematically that, because other members of a population may share one's genes, a gene can also increase its evolutionary success by indirectly promoting the reproduction and survival of other individuals who also carry that gene. This is variously called "kin theory", "kin selection theory" or "inclusive fitness theory". The most obvious category of such individuals is close genetic relatives, and where these are concerned, the application of inclusive fitness theory is often more straightforwardly treated via the narrower kin selection theory. Hamilton's theory, alongside reciprocal altruism, is considered one of the two primary mechanisms for the evolution of social behaviors in natural species and a major contribution to the field of sociobiology, which holds that some behaviors can be dictated by genes, and therefore can be passed to future generations and may be selected for as the organism evolves.
Belding's ground squirrel provides an example; it gives an alarm call to warn its local group of the presence of a predator. By emitting the alarm, it gives its own location away, putting itself in more danger. In the process, however, the squirrel may protect its relatives within the local group (along with the rest of the group). Therefore, if the effect of the trait influencing the alarm call typically protects the other squirrels in the immediate area, it will lead to the passing on of more copies of the alarm call trait in the next generation than the squirrel could leave by reproducing on its own. In such a case natural selection will increase the trait that influences giving the alarm call, provided that a sufficient fraction of the shared genes include the gene(s) predisposing to the alarm call.
Synalpheus regalis, a eusocial shrimp, is an organism whose social traits meet the inclusive fitness criterion. The larger defenders protect the young juveniles in the colony from outsiders. By ensuring the young's survival, the genes will continue to be passed on to future generations.
Inclusive fitness is more generalized than strict kin selection, which requires that the shared genes are identical by descent. Inclusive fitness is not limited to cases where "kin" ('close genetic relatives') are involved.
Hamilton's rule
Hamilton's rule is most easily derived in the framework of neighbour-modulated fitness, where the fitness of a focal individual is considered to be modulated by the actions of its neighbours. This is the inverse of inclusive fitness, where we consider how a focal individual modulates the fitness of its neighbours. However, taken over the entire population, these two approaches are equivalent to each other so long as fitness remains linear in trait value. A simple derivation of Hamilton's rule can be gained via the Price equation as follows. If an infinite population is assumed, such that any non-selective effects can be ignored, the Price equation can be written as:
$$\Delta \bar{z} = \frac{\operatorname{cov}(w_i, z_i)}{\bar{w}}$$
Where $z_i$ represents trait value and $w_i$ represents fitness, either taken for an individual $i$ or (with an overbar) averaged over the entire population. If fitness is linear in trait value, the fitness for an individual can be written as:
$$w_i = \alpha - c\,z_i + b\,z'_i$$
Where $\alpha$ is the component of an individual's fitness which is independent of trait value, $c$ parameterizes the effect of individual $i$'s phenotype on its own fitness (written negative, by convention, to represent a fitness cost), $z'_i$ is the average trait value of individual $i$'s neighbours, and $b$ parameterizes the effect of individual $i$'s neighbours on its fitness (written positive, by convention, to represent a fitness benefit). Substituting into the Price equation then gives:
$$\Delta \bar{z} = \frac{\operatorname{cov}(\alpha - c\,z_i + b\,z'_i,\; z_i)}{\bar{w}}$$
Since by definition $\alpha$ does not covary with $z_i$, this rearranges to:
$$\Delta \bar{z} = \frac{-c\operatorname{var}(z_i) + b\operatorname{cov}(z'_i, z_i)}{\bar{w}}$$
Since $\operatorname{var}(z_i)/\bar{w}$ must, by definition, be greater than 0 (variances can never be negative, and negative mean fitness is undefined: if mean fitness is 0 the population has crashed, and 0 variance would imply a monomorphic population; in both cases a change in mean trait value is impossible), it can then be said that mean trait value will increase ($\Delta \bar{z} > 0$) when:
$$b\,\frac{\operatorname{cov}(z'_i, z_i)}{\operatorname{var}(z_i)} - c > 0$$
or
$$rb - c > 0$$
Giving Hamilton's rule, where relatedness ($r$) is a regression coefficient of the form $r = \operatorname{cov}(z'_i, z_i)/\operatorname{var}(z_i)$, or equivalently the slope of the regression of neighbour trait value $z'_i$ on focal trait value $z_i$. Relatedness here can vary between a value of 1 (only interacting with individuals of the same trait value) and -1 (only interacting with individuals of a [most] different trait value), and will be 0 when all individuals in the population interact with equal likelihood.
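Because the relatedness term is just a regression coefficient, the derivation above can be checked numerically: the sign of the Price-equation change in mean trait value should match the sign of rb - c. A minimal sketch in Python (parameter values and the NumPy dependency are illustrative assumptions, not from the source):

```python
import numpy as np

# Numeric check of Hamilton's rule against the Price equation,
# assuming fitness is linear in trait value (parameters illustrative).
rng = np.random.default_rng(0)

n, alpha, b, c = 100_000, 5.0, 0.8, 0.3
z = rng.uniform(0.0, 1.0, size=n)            # focal trait values z_i
# Neighbour trait values correlated with the focal trait, so that
# relatedness r (a regression coefficient) is positive, here ~0.5.
z_nbr = 0.5 * z + 0.5 * rng.uniform(0.0, 1.0, size=n)

w = alpha - c * z + b * z_nbr                # linear fitness w_i

# Price equation: change in mean trait value over one generation.
delta_z = np.cov(w, z, ddof=1)[0, 1] / w.mean()

# Relatedness as the regression of neighbour trait on focal trait.
r = np.cov(z_nbr, z, ddof=1)[0, 1] / z.var(ddof=1)

print(f"delta_z = {delta_z:+.5f}")           # positive
print(f"rb - c  = {r * b - c:+.5f}")         # positive: the trait spreads
```

Raising c above rb flips the sign of both quantities, as the derivation requires.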
Fitness in practice, however, does not tend to be linear in trait value; this would imply that an increase to an infinitely large trait value is just as valuable to fitness as a similar increase to a very small trait value. Consequently, to apply Hamilton's rule to biological systems, the conditions under which fitness can be approximated as linear in trait value must first be found. There are two main methods used to approximate fitness as being linear in trait value: performing a partial regression with respect to both the focal individual's trait value and its neighbours' average trait value, or taking a first-order Taylor series approximation of fitness with respect to trait value. Performing a partial regression requires minimal assumptions, but only provides a statistical relationship as opposed to a mechanistic one, and cannot be extrapolated beyond the dataset that it was generated from. Linearizing via a Taylor series approximation, however, provides a powerful mechanistic relationship (see also causal model), but requires the assumption that evolution proceeds in sufficiently small mutational steps that the difference in trait value between an individual and its neighbours is close to 0 (in accordance with Fisher's geometric model), although in practice this approximation can often still retain predictive power under larger mutational steps.
As a first order approximation (linear in trait value), Hamilton's rule can only inform about how the mean trait value in a population is expected to change (directional selection). It contains no information about how the variance in trait value is expected to change (disruptive selection). As such it cannot be considered sufficient to determine evolutionary stability, even when Hamilton's rule predicts no change in trait value. This is because disruptive selection terms, and subsequent conditions for evolutionary branching, must instead be obtained from second order approximations (quadratic in trait value) of fitness.
Gardner et al. (2007) suggest that Hamilton's rule can be applied to multi-locus models, but that it should be done at the point of interpreting theory, rather than the starting point of enquiry. They suggest that one should "use standard population genetics, game theory, or other methodologies to derive a condition for when the social trait of interest is favoured by selection and then use Hamilton's rule as an aid for conceptualizing this result". It is now becoming increasingly popular to use adaptive dynamics approaches to gain selection conditions which are directly interpretable with respect to Hamilton's rule.
Altruism
The concept serves to explain how natural selection can perpetuate altruism. If there is an "altruism gene" (or complex of genes) that influences an organism's behaviour to be helpful and protective of relatives and their offspring, this behaviour also increases the proportion of the altruism gene in the population, because relatives are likely to share genes with the altruist due to common descent. In formal terms, if such a complex of genes arises, Hamilton's rule ($rb > c$) specifies the selective criteria (in terms of cost, benefit and relatedness) for such a trait to increase in frequency in the population. Hamilton noted that inclusive fitness theory does not by itself predict that a species will necessarily evolve such altruistic behaviours, since an opportunity or context for interaction between individuals is a more primary and necessary requirement in order for any social interaction to occur in the first place. As Hamilton put it, "Altruistic or selfish acts are only possible when a suitable social object is available. In this sense behaviours are conditional from the start." In other words, while inclusive fitness theory specifies a set of necessary criteria for the evolution of altruistic traits, it does not specify a sufficient condition for their evolution in any given species. More primary necessary criteria include the existence of gene complexes for altruistic traits in the gene pool, as mentioned above, and especially that "a suitable social object is available", as Hamilton noted. The American evolutionary biologist Paul W. Sherman gives a fuller discussion of Hamilton's latter point:
The occurrence of sibling cannibalism in several species underlines the point that inclusive fitness theory should not be understood to simply predict that genetically related individuals will inevitably recognize and engage in positive social behaviours towards genetic relatives. Only in species that have the appropriate traits in their gene pool, and in which individuals typically interacted with genetic relatives in the natural conditions of their evolutionary history, will social behaviour potentially be elaborated, and consideration of the evolutionarily typical demographic composition of grouping contexts of that species is thus a first step in understanding how selection pressures upon inclusive fitness have shaped the forms of its social behaviour. Richard Dawkins gives a simplified illustration:
Evidence from a variety of species including primates and other social mammals suggests that contextual cues (such as familiarity) are often significant proximate mechanisms mediating the expression of altruistic behaviour, regardless of whether the participants are always in fact genetic relatives or not. This is nevertheless evolutionarily stable since selection pressure acts on typical conditions, not on the rare occasions where actual genetic relatedness differs from that normally encountered. Inclusive fitness theory thus does not imply that organisms evolve to direct altruism towards genetic relatives. Many popular treatments do however promote this interpretation, as illustrated in a review:
Such misunderstandings of inclusive fitness' implications for the study of altruism, even amongst professional biologists utilizing the theory, are widespread, prompting prominent theorists to regularly attempt to highlight and clarify the mistakes. An example of attempted clarification is West et al. (2010):
Green-beard effect
As well as interactions in reliable contexts of genetic relatedness, altruists may also have some way to recognize altruistic behaviour in unrelated individuals and be inclined to support them. As Dawkins points out in The Selfish Gene and The Extended Phenotype, this must be distinguished from the green-beard effect.
The green-beard effect is the act of a gene (or several closely linked genes), that:
Produces a phenotype.
Allows recognition of that phenotype in others.
Causes the individual to preferentially treat other individuals with the same gene.
The green-beard effect was originally a thought experiment by Hamilton in his publications on inclusive fitness in 1964, although it had not yet been observed. To date, it has been observed in only a few species. Its rarity is probably due to its susceptibility to 'cheating', whereby individuals can gain the trait that confers the advantage without the altruistic behaviour. This would normally occur via the crossing over of chromosomes, which happens frequently, often rendering the green-beard effect a transient state. However, Wang et al. showed that in one of the species where the effect is common (fire ants), recombination cannot occur due to a large chromosomal inversion, essentially forming a supergene. This, along with homozygote inviability at the green-beard loci, allows for the extended maintenance of the green-beard effect.
Equally, cheaters may not be able to invade the green-beard population if the mechanism for preferential treatment and the phenotype are intrinsically linked. In budding yeast (Saccharomyces cerevisiae), the dominant allele FLO1 is responsible for flocculation (self-adherence between cells) which helps protect them against harmful substances such as ethanol. While 'cheater' yeast cells occasionally find their way into the biofilm-like substance that is formed from FLO1 expressing yeast, they cannot invade as the FLO1 expressing yeast will not bind to them in return, and thus the phenotype is intrinsically linked to the preference.
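The susceptibility to cheating described above can be made concrete with a toy haploid model: green-beard altruists pay a cost to help trait-bearers, while a 'cheat' genotype that displays the trait without helping receives the benefit for free. A minimal sketch in Python (genotype labels, costs, and benefits are hypothetical; NumPy assumed):

```python
import numpy as np

def next_gen(freqs, cost=0.05, benefit=0.3):
    """One generation of selection on [plain, green-beard, cheat]."""
    f = np.asarray(freqs, dtype=float)
    w = np.ones(3)
    # Help is given only by altruists (f[1]) and only to trait-bearers.
    w[1] += benefit * f[1] - cost   # altruists pay the cost, get help
    w[2] += benefit * f[1]          # cheats get help, pay nothing
    f = f * w
    return f / f.sum()

f = np.array([0.8, 0.2, 0.0])       # no cheats: green-beard spreads
for _ in range(500):
    f = next_gen(f)
print("without cheats:", f.round(3))

f = np.array([0.8, 0.19, 0.01])     # a rare cheat appears
for _ in range(5000):
    f = next_gen(f)
print("with cheats:   ", f.round(3))  # altruists collapse toward 0
```

In this toy model the cheat always outcompetes the honest green-beard, illustrating why the effect tends to be transient unless, as in fire ants or FLO1 yeast, cheating is structurally blocked.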
Parent–offspring conflict and optimization
Early writings on inclusive fitness theory (including Hamilton 1964) used K in place of B/C. Thus Hamilton's rule was expressed as:
$$K > \frac{1}{r}$$
This is the necessary and sufficient condition for selection for altruism.
Where B is the gain to the beneficiary, C is the cost to the actor and r is the number of its own offspring equivalents the actor expects in one of the offspring of the beneficiary. r is either called the coefficient of relatedness or coefficient of relationship, depending on how it is computed. The method of computing has changed over time, as has the terminology. It is not clear whether or not changes in the terminology followed changes in computation.
Robert Trivers (1974) defined "parent-offspring conflict" as any case where
$$1 < K < 2$$
i.e., K is between 1 and 2. The benefit is greater than the cost but is less than twice the cost. In this case, the parent would wish the offspring to behave as if r is 1 between siblings, although it is actually presumed to be 1/2 or closely approximated by 1/2. In other words, a parent would wish its offspring to give up ten offspring in order to raise 11 nieces and nephews. The offspring, when not manipulated by the parent, would require at least 21 nieces and nephews to justify the sacrifice of 10 of its own offspring.
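The numbers in this example can be checked directly against Hamilton's rule in the K = B/C form used above. A short Python sketch (the function name and values simply restate the example):

```python
def helping_favoured(B: float, C: float, r: float) -> bool:
    """Hamilton's rule in the K form: K = B/C must exceed 1/r."""
    return (B / C) > (1.0 / r)

# Offspring's own view: r = 1/2 between siblings, so it needs B > 2C.
print(helping_favoured(B=21, C=10, r=0.5))   # True, barely: 2.1 > 2
print(helping_favoured(B=11, C=10, r=0.5))   # False: 1.1 < 2
# Parent's view: it wants the offspring to act as if r were 1.
print(helping_favoured(B=11, C=10, r=1.0))   # True: 1.1 > 1
```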
The parent is trying to maximize its number of grandchildren, while the offspring is trying to maximize the number of its own offspring equivalents (via offspring and nieces and nephews) it produces. If the parent cannot manipulate the offspring and therefore loses in the conflict, the grandparents with the fewest grandchildren seem to be selected for. In other words, if the parent has no influence on the offspring's behaviour, grandparents with fewer grandchildren increase in frequency in the population.
By extension, parents with the fewest offspring will also increase in frequency. This seems to go against Ronald Fisher's "Fundamental Theorem of Natural Selection", which states that the change in fitness over the course of a generation equals the variance in fitness at the beginning of the generation. Variance is defined as the square of a quantity (the standard deviation) and, as a square, must always be positive or zero. That would imply that fitness could never decrease as time passes. This goes along with the intuitive idea that lower fitness cannot be selected for. During parent-offspring conflict, the number of stranger equivalents reared per offspring equivalents reared is going down. Consideration of this phenomenon caused Orlove (1979) and Grafen (2006) to say that nothing is being maximized.
According to Trivers, if Sigmund Freud had tried to explain intra-family conflict after Hamilton instead of before him, he would have attributed the motivation for the conflict and for the castration complex to resource allocation issues rather than to sexual jealousy.
Incidentally, when K = 1 or K = 2, the average number of offspring per parent stays constant as time goes by. When K < 1 or K > 2, the average number of offspring per parent increases as time goes by.
The term "gene" can refer to a locus (location) on an organism's DNA—a section that codes for a particular trait. Alternative versions of the code at that location are called "alleles." If there are two alleles at a locus, one of which codes for altruism and the other for selfishness, an individual who has one of each is said to be a heterozygote at that locus. If the heterozygote uses half of its resources raising its own offspring and the other half helping its siblings raise theirs, that condition is called codominance. If there is codominance the "2" in the above argument is exactly 2. If by contrast, the altruism allele is more dominant, then the 2 in the above would be replaced by a number smaller than 2. If the selfishness allele is the more dominant, something greater than 2 would replace the 2.
Opposing view
A 2010 paper by Martin Nowak, Corina Tarnita, and E. O. Wilson suggested that standard natural selection theory is superior to inclusive fitness theory, stating that the interactions between cost and benefit cannot be explained only in terms of relatedness. This, Nowak said, makes Hamilton's rule at worst superfluous and at best ad hoc. Gardner in turn was critical of the paper, describing it as "a really terrible article", and along with other co-authors has written a reply, submitted to Nature. The disagreement stems from a long history of confusion over what Hamilton's rule represents. Hamilton's rule gives the direction of mean phenotypic change (directional selection) so long as fitness is linear in phenotype, and the utility of Hamilton's rule is simply a reflection of when it is suitable to consider fitness as being linear in phenotype. The primary (and strictest) case is when evolution proceeds in very small mutational steps. Under such circumstances Hamilton's rule then emerges as the result of taking a first order Taylor series approximation of fitness with regards to phenotype. This assumption of small mutational steps (otherwise known as δ-weak selection) is often made on the basis of Fisher's geometric model and underpins much of modern evolutionary theory.
In work prior to Nowak et al. (2010), various authors derived different versions of a formula for $r$, all designed to preserve Hamilton's rule. Orlove noted that if a formula for $r$ is defined so as to ensure that Hamilton's rule is preserved, then the approach is by definition ad hoc. However, he published an unrelated derivation of the same formula for $r$ – a derivation designed to preserve two statements about the rate of selection – which on its own was similarly ad hoc. Orlove argued that the existence of two unrelated derivations of the formula for $r$ reduces or eliminates the ad hoc nature of the formula, and of inclusive fitness theory as well. The derivations were demonstrated to be unrelated by corresponding parts of the two identical formulae for $r$ being derived from the genotypes of different individuals. The parts that were derived from the genotypes of different individuals were terms to the right of the minus sign in the covariances in the two versions of the formula for $r$. By contrast, the terms left of the minus sign in both derivations come from the same source. In populations containing only two trait values, it has since been shown that $r$ is in fact Sewall Wright's coefficient of relationship.
Engles (1982) suggested that the c/b ratio be considered as a continuum of the behavioural trait rather than as discontinuous in nature. From this approach, fitness transactions can be observed more completely, because more affects an individual's fitness than simple losses and gains.
See also
Criticism of evolutionary psychology
Evolutionary psychology
Gene-centered view of evolution
Hamiltonian spite
Kin selection
Reproductive success
r/K selection theory
References
Further reading
Campbell, N., Reece, J., et al. 2002. Biology. 6th ed. San Francisco, California. pp. 1145–1148.
Rheingold, Howard. "Technologies of cooperation". In Smart Mobs. Cambridge, Massachusetts: Perseus Publishing, 2002, Ch. 2: pp. 29–61.
Sherman, P. W. 2001. "Squirrels" (pp. 598–609, with L. Wauters) and "The Role of Kinship" (pp. 610–611) in D. W. Macdonald (Ed.) Encyclopedia of Mammals. UK: Andromeda.
Trivers, R. L. 1971. "The Evolution of Reciprocal Altruism". Quarterly Review of Biology 46: 35-57.
Trivers, R. L. 1972. "Parental Investment and Sexual Selection". In B. Campbell (ed.), Sexual Selection and the Descent of Man, 1871-1971. Chicago, Illinois: Aldine, pp. 136–179.
Evolutionary biology concepts
| Inclusive fitness | Biology | 4,775 |
3,531,066 | https://en.wikipedia.org/wiki/Discontinuous%20linear%20map | In mathematics, linear maps form an important class of "simple" functions which preserve the algebraic structure of linear spaces and are often used as approximations to more general functions (see linear approximation). If the spaces involved are also topological spaces (that is, topological vector spaces), then it makes sense to ask whether all linear maps are continuous. It turns out that for maps defined on infinite-dimensional topological vector spaces (e.g., infinite-dimensional normed spaces), the answer is generally no: there exist discontinuous linear maps. If the domain of definition is complete, it is trickier; such maps can be proven to exist, but the proof relies on the axiom of choice and does not provide an explicit example.
A linear map from a finite-dimensional space is always continuous
Let X and Y be two normed spaces and $f : X \to Y$ a linear map from X to Y. If X is finite-dimensional, choose a basis $(e_1, e_2, \ldots, e_n)$ in X, which may be taken to be unit vectors. Then,
$$f(x) = \sum_{i=1}^n x_i f(e_i),$$
and so by the triangle inequality,
$$\|f(x)\| = \left\|\sum_{i=1}^n x_i f(e_i)\right\| \le \sum_{i=1}^n |x_i| \, \|f(e_i)\|.$$
Letting
$$M = \sup_i \|f(e_i)\|,$$
and using the fact that
$$\sum_{i=1}^n |x_i| \le C \|x\|$$
for some C > 0, which follows from the fact that any two norms on a finite-dimensional space are equivalent, one finds
$$\|f(x)\| \le C M \|x\|.$$
Thus, $f$ is a bounded linear operator and so is continuous. In fact, to see this, simply note that f is linear, and therefore
$$\|f(x) - f(y)\| = \|f(x - y)\| \le K \|x - y\|$$
for some universal constant K. Thus for any $\varepsilon > 0$, we can choose $\delta \le \varepsilon / K$ so that $f(B(x, \delta)) \subseteq B(f(x), \varepsilon)$ (where $B(x, \delta)$ and $B(f(x), \varepsilon)$ are the normed balls around $x$ and $f(x)$), which gives continuity.
If X is infinite-dimensional, this proof will fail as there is no guarantee that the supremum M exists. If Y is the zero space {0}, the only map between X and Y is the zero map which is trivially continuous. In all other cases, when X is infinite-dimensional and Y is not the zero space, one can find a discontinuous map from X to Y.
A concrete example
Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequence $(x_n)_n$ of linearly independent vectors which does not have a limit, there is a linear operator $T$ such that the quantities $\|T(x_n)\| / \|x_n\|$ grow without bound. In a sense, the linear operators are not continuous because the space has "holes".
For example, consider the space X of real-valued smooth functions on the interval [0, 1] with the uniform norm, that is,
$$\|f\| = \sup_{x \in [0,1]} |f(x)|.$$
The derivative-at-a-point map, given by
$$T(f) = f'(0),$$
defined on X and with real values, is linear, but not continuous. Indeed, consider the sequence
$$f_n(x) = \frac{\sin(n^2 x)}{n}$$
for $n \ge 1$. This sequence converges uniformly to the constantly zero function, but
$$T(f_n) = n \to \infty$$
as $n \to \infty$, instead of $T(f_n) \to T(0) = 0$ as would hold for a continuous map. Note that T is real-valued, and so is actually a linear functional on X (an element of the algebraic dual space $X^*$). The linear map $X \to X$ which assigns to each function its derivative is similarly discontinuous. Note that although the derivative operator is not continuous, it is closed.
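The failure of continuity here is easy to see numerically: the sup norms of the $f_n$ shrink while the derivative at 0 blows up. A short Python sketch (grid resolution chosen arbitrarily; NumPy assumed):

```python
import numpy as np

# T(f) = f'(0) is unbounded on smooth functions with the sup norm:
# f_n(x) = sin(n^2 x)/n satisfies ||f_n|| <= 1/n -> 0, yet T(f_n) = n.
x = np.linspace(0.0, 1.0, 200_001)
for n in [1, 10, 100, 1000]:
    sup_norm = np.abs(np.sin(n**2 * x) / n).max()   # approximates ||f_n||
    T_fn = n**2 * np.cos(n**2 * 0.0) / n            # exact derivative at 0
    print(f"n={n:5d}   ||f_n|| ~ {sup_norm:.4f}   T(f_n) = {T_fn:.0f}")
```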
The fact that the domain is not complete here is important: discontinuous operators on complete spaces require a little more work.
A nonconstructive example
An algebraic basis for the real numbers as a vector space over the rationals is known as a Hamel basis (note that some authors use this term in a broader sense to mean an algebraic basis of any vector space). Note that any two noncommensurable numbers, say 1 and $\pi$, are linearly independent. One may find a Hamel basis containing them, and define a map $f : \mathbb{R} \to \mathbb{R}$ so that $f(\pi) = 0$, f acts as the identity on the rest of the Hamel basis, and extend to all of $\mathbb{R}$ by linearity. Let $\{r_n\}_n$ be any sequence of rationals which converges to $\pi$. Then $\lim_n f(r_n) = \pi$, but $f(\pi) = 0$. By construction, f is linear over $\mathbb{Q}$ (not over $\mathbb{R}$), but not continuous. Note that f is also not measurable; an additive real function is linear if and only if it is measurable, so for every such function there is a Vitali set. The construction of f relies on the axiom of choice.
This example can be extended into a general theorem about the existence of discontinuous linear maps on any infinite-dimensional normed space (as long as the codomain is not trivial).
General existence theorem
Discontinuous linear maps can be proven to exist more generally, even if the space is complete. Let X and Y be normed spaces over the field K where $K = \mathbb{R}$ or $K = \mathbb{C}$. Assume that X is infinite-dimensional and Y is not the zero space. We will find a discontinuous linear map f from X to K, which will imply the existence of a discontinuous linear map g from X to Y given by the formula $g(x) = f(x)\,y_0$, where $y_0$ is an arbitrary nonzero vector in Y.
If X is infinite-dimensional, to show the existence of a linear functional which is not continuous then amounts to constructing f which is not bounded. For that, consider a sequence $(e_n)_n$ ($n \ge 1$) of linearly independent vectors in X, which we normalize so that $\|e_n\| = 1$ for each n. Then, we define
$$T(e_n) = n$$
for each $n = 1, 2, \ldots$ Complete this sequence of linearly independent vectors to a vector space basis of X, and define T at the other vectors in the basis to be zero. T so defined will extend uniquely to a linear map on X, and since it is clearly not bounded, it is not continuous.
Notice that by using the fact that any set of linearly independent vectors can be completed to a basis, we implicitly used the axiom of choice, which was not needed for the concrete example in the previous section.
Role of the axiom of choice
As noted above, the axiom of choice (AC) is used in the general existence theorem of discontinuous linear maps. In fact, there are no constructive examples of discontinuous linear maps with complete domain (for example, Banach spaces). In analysis as it is usually practiced by working mathematicians, the axiom of choice is always employed (it is an axiom of ZFC set theory); thus, to the analyst, all infinite-dimensional topological vector spaces admit discontinuous linear maps.
On the other hand, in 1970 Robert M. Solovay exhibited a model of set theory in which every set of reals is measurable. This implies that there are no discontinuous linear real functions. Clearly AC does not hold in the model.
Solovay's result shows that it is not necessary to assume that all infinite-dimensional vector spaces admit discontinuous linear maps, and there are schools of analysis which adopt a more constructivist viewpoint. For example, H. G. Garnir, in searching for so-called "dream spaces" (topological vector spaces on which every linear map into a normed space is continuous), was led to adopt ZF + DC + BP (dependent choice is a weakened form and the Baire property is a negation of strong AC) as his axioms to prove the Garnir–Wright closed graph theorem which states, among other things, that any linear map from an F-space to a TVS is continuous. Going to the extreme of constructivism, there is Ceitin's theorem, which states that every function is continuous (this is to be understood in the terminology of constructivism, according to which only representable functions are considered to be functions). Such stances are held by only a small minority of working mathematicians.
The upshot is that the existence of discontinuous linear maps depends on AC; it is consistent with set theory without AC that there are no discontinuous linear maps on complete spaces. In particular, no concrete construction such as the derivative can succeed in defining a discontinuous linear map everywhere on a complete space.
Closed operators
Many naturally occurring linear discontinuous operators are closed, a class of operators which share some of the features of continuous operators. It makes sense to ask which linear operators on a given space are closed. The closed graph theorem asserts that an everywhere-defined closed operator on a complete domain is continuous, so to obtain a discontinuous closed operator, one must permit operators which are not defined everywhere.
To be more concrete, let T be a map from X to Y with domain Dom(T) ⊆ X, written T : Dom(T) ⊆ X → Y. We don't lose much if we replace X by the closure of Dom(T). That is, in studying operators that are not everywhere-defined, one may restrict one's attention to densely defined operators without loss of generality.
If the graph Γ(T) of T is closed in X × Y, we call T closed. Otherwise, consider the closure of Γ(T) in X × Y. If this closure is itself the graph of some operator T′, then T is called closable, and T′ is called the closure of T.
So the natural question to ask about linear operators that are not everywhere-defined is whether they are closable. The answer is, "not necessarily"; indeed, every infinite-dimensional normed space admits linear operators that are not closable. As in the case of discontinuous operators considered above, the proof requires the axiom of choice and so is in general nonconstructive, though again, if X is not complete, there are constructible examples.
In fact, there is even an example of a linear operator whose graph has closure all of X × Y. Such an operator is not closable. Let X be the space of polynomial functions from [0,1] to ℝ and Y the space of polynomial functions from [2,3] to ℝ. They are subspaces of C([0,1]) and C([2,3]) respectively, and so normed spaces. Define an operator T which takes the polynomial function x ↦ p(x) on [0,1] to the same function on [2,3]. As a consequence of the Stone–Weierstrass theorem, the graph of this operator is dense in X × Y, so this provides a sort of maximally discontinuous linear map (confer nowhere continuous function). Note that X is not complete here, as must be the case when there is such a constructible map.
Impact for dual spaces
The dual space of a topological vector space is the collection of continuous linear maps from the space into the underlying field. Thus the failure of some linear maps to be continuous for infinite-dimensional normed spaces implies that for these spaces, one needs to distinguish the algebraic dual space from the continuous dual space which is then a proper subset. It illustrates the fact that an extra dose of caution is needed in doing analysis on infinite-dimensional spaces as compared to finite-dimensional ones.
Beyond normed spaces
The argument for the existence of discontinuous linear maps on normed spaces can be generalized to all metrizable topological vector spaces, especially to all Fréchet spaces, but there exist infinite-dimensional locally convex topological vector spaces such that every functional is continuous. On the other hand, the Hahn–Banach theorem, which applies to all locally convex spaces, guarantees the existence of many continuous linear functionals, and so a large dual space. In fact, to every convex set, the Minkowski gauge associates a continuous linear functional. The upshot is that spaces with fewer convex sets have fewer functionals, and in the worst-case scenario, a space may have no functionals at all other than the zero functional. This is the case for the L^p(ℝ, dμ) spaces with 0 < p < 1, from which it follows that these spaces are nonconvex. Note that here μ denotes the Lebesgue measure on the real line. There are other L^p spaces with 0 < p < 1 which do have nontrivial dual spaces.
Another such example is the space of real-valued measurable functions on the unit interval, with quasinorm given by
‖f‖ = ∫₀¹ |f(x)| / (1 + |f(x)|) dx.
This non-locally convex space has a trivial dual space.
One can consider even more general spaces. For example, the existence of a discontinuous homomorphism between complete separable metric groups can also be shown nonconstructively.
See also
References
Costara, Constantin; Popa, Dumitru, Exercises in Functional Analysis, Springer, 2003.
Schechter, Eric, Handbook of Analysis and its Foundations, Academic Press, 1997.
Functional analysis
Axiom of choice
Functions and mappings | Discontinuous linear map | Mathematics | 2,458 |
80,799 | https://en.wikipedia.org/wiki/Breadboard | A breadboard, solderless breadboard, or protoboard is a construction base used to build semi-permanent prototypes of electronic circuits. Unlike a perfboard or stripboard, breadboards do not require soldering or destruction of tracks and are hence reusable. For this reason, breadboards are also popular with students and in technological education.
A variety of electronic systems may be prototyped by using breadboards, from small analog and digital circuits to complete central processing units (CPUs).
Compared to more permanent circuit connection methods, modern breadboards have high parasitic capacitance, relatively high resistance, and less reliable connections, which are subject to jostle and physical degradation. Signaling is limited to about 10 MHz, and not everything works properly even well below that frequency.
History
In the early days of radio, amateurs nailed bare copper wires or terminal strips to a wooden board (often literally a bread cutting board) and soldered electronic components to them. Sometimes a paper schematic diagram was first glued to the board as a guide to placing terminals, then components and wires were installed over their symbols on the schematic. Using thumbtacks or small nails as mounting posts was also common.
Breadboards have evolved over time with the term now being used for all kinds of prototype electronic devices. For example, US Patent 3,145,483, was filed in 1961 and describes a wooden plate breadboard with mounted springs and other facilities. US Patent 3,496,419, was filed in 1967 and refers to a particular printed circuit board layout as a Printed Circuit Breadboard. Both examples refer to and describe other types of breadboards as prior art.
In 1960, Orville Thompson of DeVry Technical Institute patented a solderless breadboard connecting rows of holes together with spring metal. In 1971, Ronald Portugal of E&L Instruments patented a similar concept with holes in 0.1 in (2.54 mm) spacings, the same as DIP IC packages, which became the basis of the modern solderless breadboard that is commonly used today.
Prior art
US Patent 231708, filed in 1880, "Electrical switch board".
US Patent 2477653, filed in 1943, "Primary electrical training test board apparatus".
US Patent 2592552, filed in 1944, "Electrical instruction board".
US Patent 2568535, filed in 1945, "Board for demonstrating electric circuits".
US Patent 2885602, filed in 1955, "Modular circuit fabrication", National Cash Register (NCR).
US Patent 3062991, filed in 1958, "Quick attaching and detaching circuit system".
US Patent 2983892, filed in 1958, "Mounting assemblage for electrical circuits".
US Patent 3085177, filed in 1960, "Device for facilitating construction of electrical apparatus", DeVry Technical Institute.
US Patent 3078596, filed in 1960, "Circuit assembly board".
US Patent 3145483, filed in 1961, "Test board for electronic circuits".
US Patent 3277589, filed in 1964, "Electrical experiment kit".
US Patent 3447249, filed in 1966, "Electronic building set". See Lectron blocks / dominoes.
US Patent 3496419, filed in 1967, "Printed circuit breadboard".
US Patent 3540135, filed in 1968, "Educational training aids".
US Patent 3733574, filed in 1971, "Miniature tandem spring clips", Vector Electronics.
US Patent D228136, filed in 1971, "Breadboard for electronic components or the like", E&L Instruments. This is the modern solderless breadboard.
Design
A modern solderless breadboard socket consists of a perforated block of plastic with numerous tin-plated phosphor bronze or nickel-silver alloy spring clips under the perforations. The clips are often called tie points or contact points. The number of tie points is often given in the specification of the breadboard.
The spacing between the clips (lead pitch) is typically 0.1 in (2.54 mm). Integrated circuits (ICs) in dual in-line packages (DIPs) can be inserted to straddle the centerline of the block. Interconnecting wires and the leads of discrete components (such as capacitors, resistors, and inductors) can be inserted into the remaining free holes to complete the circuit. Where ICs are not used, discrete components and connecting wires may use any of the holes. Typically the spring clips are rated for 1 ampere at 5 volts and 0.333 amperes at 15 volts (5 watts).
Bus and terminal strips
Solderless breadboards connect pin to pin by metal strips inside the breadboard. The layout of a typical solderless breadboard is made up from two types of areas, called strips. Strips consist of interconnected electrical terminals. Often breadboard strips or blocks of one brand have male and female dovetail notches so boards can be clipped together to form a large breadboard.
The main areas, to hold most of the electronic components, are called terminal strips. In the middle of a terminal strip of a breadboard, one typically finds a notch running in parallel to the long side. The notch is to mark the centerline of the terminal strip and provides limited airflow (cooling) to DIP ICs straddling the centerline. The clips on the right and left of the notch are each connected in a radial way; typically five clips (i.e., beneath five holes) in a row on each side of the notch are electrically connected. The five columns on the left of the notch are often marked as A, B, C, D, and E, while the ones on the right are marked F, G, H, I and J. When a "skinny" dual in-line pin package (DIP) integrated circuit (such as a typical DIP-14 or DIP-16, which have a 0.3 in (7.62 mm) separation between the pin rows) is plugged into a breadboard, the pins of one side of the chip are supposed to go into column E while the pins of the other side go into column F on the other side of the notch. The rows are identified by numbers from 1 to as many as the breadboard design provides. A full-size terminal breadboard strip typically consists of around 56 to 65 rows of connectors. Together with bus strips on each side, this makes up a typical 784 to 910 tie-point solderless breadboard. Most breadboards are designed to accommodate 17, 30 or 64 rows in the mini, half, and full configurations respectively.
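The quoted tie-point totals follow from simple arithmetic. A minimal sketch, assuming 10 terminal holes per row (5 per side of the notch) and four bus columns with one hole per row running the full length of the board (real boards often leave gaps in the bus columns):

```python
# Tie-point count for a full-size solderless breadboard.
def tie_points(rows: int, bus_columns: int = 4) -> int:
    terminal = rows * 10          # 5 clips left + 5 clips right of the notch
    bus = rows * bus_columns      # assumed: one bus hole per row per column
    return terminal + bus

print(tie_points(56))  # 784 -- the low end quoted above
print(tie_points(65))  # 910 -- the high end quoted above
```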
To provide power to the electronic components, bus strips are used. A bus strip usually contains two columns: one for ground and one for a supply voltage. However, some breadboards only provide a single-column power distribution bus strip on each long side. Typically the row intended for a supply voltage is marked in red, while the row for ground is marked in blue or black. Some manufacturers connect all terminals in a column. Others just connect groups of, for example, 25 consecutive terminals in a column. The latter design provides a circuit designer with some more control over crosstalk (inductively coupled noise) on the power supply bus. Often the groups in a bus strip are indicated by gaps in the color marking. Bus strips typically run down one or both sides of a terminal strip or between terminal strips. On large breadboards additional bus strips can often be found on the top and bottom of terminal strips.
Some manufacturers provide separate bus and terminal strips. Others just provide breadboard blocks which contain both in one block.
Jump wires
Jump wires (also called jumper wires) for solderless breadboarding can be obtained in ready-to-use jump wire sets or can be manufactured by hand. The latter can become tedious work for larger circuits. Ready-to-use jump wires come in different qualities, some even with tiny plugs attached to the wire ends. Jump wire material for ready-made or homemade wires should usually be 22 AWG (0.33 mm²) solid copper, tin-plated wire, assuming no tiny plugs are to be attached to the wire ends. The wire ends should be stripped of insulation to a suitable length. Shorter stripped wires might result in bad contact with the board's spring clips (insulation being caught in the springs). Longer stripped wires increase the likelihood of short-circuits on the board. Needle-nose pliers and tweezers are helpful when inserting or removing wires, particularly on crowded boards.
Differently colored wires and color-coding discipline are often adhered to for consistency. However, the number of available colors is typically far fewer than the number of signal types or paths. Typically, a few wire colors are reserved for the supply voltages and ground (e.g., red, blue, black), some are reserved for main signals, and the rest are simply used where convenient. Some ready-to-use jump wire sets use the color to indicate the length of the wires, but these sets do not allow a meaningful color-coding schema.
Advanced designs
In a more robust variant, one or more breadboard strips are mounted on a sheet of metal. Typically, that backing sheet also holds a number of binding posts. These posts provide a clean way to connect an external power supply. This type of breadboard may be slightly easier to handle.
Some manufacturers provide high-end versions of solderless breadboards. These are typically high-quality breadboard modules mounted on a flat casing. The casing contains additional equipment for breadboarding, such as a power supply, one or more signal generators, serial interfaces, LED display or LCD modules, and logic probes.
For high-frequency development, a metal breadboard affords a desirable solderable ground plane, often an unetched piece of printed circuit board; integrated circuits are sometimes stuck upside down to the breadboard and soldered to directly, a technique sometimes called "dead bug" construction because of its appearance. Examples of dead bug with ground plane construction are illustrated in a Linear Technology application note.
Uses
A common use in the system on a chip (SoC) era is to obtain a microcontroller (MCU) on a pre-assembled printed circuit board (PCB) which exposes an array of input/output (IO) pins in a header suitable to plug into a breadboard, and then to prototype a circuit which exploits one or more of the MCU's peripherals, such as general-purpose input/output (GPIO), UART/USART serial transceivers, an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), pulse-width modulation (PWM; used in motor control), the Serial Peripheral Interface (SPI), or I²C.
Firmware is then developed for the MCU to test, debug, and interact with the circuit prototype. High frequency operation is then largely confined to the SoC's PCB. In the case of high speed interconnects such as SPI and I²C, these can be debugged at a lower speed and later rewired using a different circuit assembly methodology to exploit full-speed operation. A single small SoC often provides most of these electrical interface options in a form factor barely larger than a large postage stamp, available in the American hobby market (and elsewhere) for a few dollars, allowing fairly sophisticated breadboard projects to be created at modest expense.
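As an illustrative sketch of this workflow (not tied to any particular board), a MicroPython-style smoke test that exercises one GPIO wired to a breadboarded LED; the pin number 2 is an assumption and varies by board:

```python
# Hypothetical firmware smoke test for a breadboarded LED circuit.
from machine import Pin   # MicroPython hardware-access module
import time

led = Pin(2, Pin.OUT)      # GPIO driving the LED via a series resistor
for _ in range(10):
    led.value(1)           # drive the breadboard net high
    time.sleep(0.5)
    led.value(0)           # and low again
    time.sleep(0.5)
```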
Limitations
Due to relatively large parasitic capacitance compared to a properly laid out PCB (approx 2 pF between adjacent contact columns), high inductance of some connections and a relatively high and not very reproducible contact resistance, solderless breadboards are limited to operation at relatively low frequencies, usually less than 10 MHz, depending on the nature of the circuit. The relatively high contact resistance can already be a problem for some DC and very low frequency circuits. Solderless breadboards are further limited by their voltage and current ratings.
Solderless breadboards usually cannot accommodate surface-mount technology devices (SMD) or components with grid spacing other than 0.1 in (2.54 mm). Further, they cannot accommodate components with multiple rows of connectors if these connectors do not match the dual in-line layout; it is impossible to provide the correct electrical connectivity. Sometimes small PCB adapters called "breakout adapters" can be used to fit the component to the board. Such adapters carry one or more components and have male connector pins spaced at 0.1 in (2.54 mm) in a single in-line or dual in-line layout, for insertion into a solderless breadboard. Larger components are usually plugged into a socket on the adapter, while smaller components (e.g., SMD resistors) are usually soldered directly onto the adapter. The adapter is then plugged into the breadboard via the connectors. However, the need to solder the components onto the adapter negates some of the advantage of using a solderless breadboard.
Very complex circuits can become unmanageable on a solderless breadboard due to the large amount of wiring required. The very convenience of easy plugging and unplugging of connections also makes it too easy to accidentally disturb a connection, and the system becomes unreliable. It is possible to prototype systems with thousands of connecting points, but great care must be taken in careful assembly, and such a system becomes unreliable as contact resistance develops over time. At some point, very complex systems must be implemented in a more reliable interconnection technology, to have a likelihood of working over a usable time period.
Alternatives
Alternative methods to create prototypes are point-to-point construction (reminiscent of the original wooden breadboards), wire wrap, wiring pencil, and boards like the stripboard. Complicated systems, such as modern computers comprising millions of transistors, diodes, and resistors, do not lend themselves to prototyping using breadboards, as their complex designs can be difficult to lay out and debug on a breadboard.
Modern circuit designs are generally developed using a schematic capture and simulation system, and tested in software simulation before the first prototype circuits are built on a printed circuit board. Integrated circuit designs are a more extreme version of the same process: since producing prototype silicon is costly, extensive software simulations are performed before fabricating the first prototypes. However, prototyping techniques are still used for some applications such as RF circuits, or where software models of components are inexact or incomplete.
It is also possible to use a square grid of pairs of holes where one hole per pair connects to its row and the other connects to its column. This same shape can be in a circle with rows and columns each spiraling opposite clockwise/counterclockwise.
See also
Brassboard
DIN rail
Expansion spring
Fahnestock clip
Iterative design
Optical table
References
External links
Large parallel processing design prototyped on 50 connected breadboards
Electronic design
Electronics substrates
Electronic test equipment
Electronics work tools
Electronics prototyping | Breadboard | Technology,Engineering | 3,078 |
25,381,180 | https://en.wikipedia.org/wiki/Pyrrolizidine%20alkaloid%20sequestration | Pyrrolizidine alkaloid sequestration by insects is a strategy to facilitate defense and mating. Various species of insects have been known to use molecular compounds from plants for their own defense and even as their pheromones or precursors to their pheromones. A few Lepidoptera have been found to sequester chemicals from plants which they retain throughout their life and some members of Erebidae are examples of this phenomenon. Starting in the mid-twentieth century researchers investigated various members of Arctiidae, and how these insects sequester pyrrolizidine alkaloids (PAs) during their life stages, and use these chemicals as adults for pheromones or pheromone precursors. PAs are also used by members of the Arctiidae for defense against predators throughout the life of the insect.
Overview
Pyrrolizidine alkaloids are a group of chemicals produced by plants as secondary metabolites, all of which contain a pyrrolizidine nucleus. This nucleus is made up of two pyrrole rings bonded by one carbon and one nitrogen. There are two forms in which PAs can exist and will readily interchange between: a pro-toxic free base form, also called a tertiary amine, or in a non-toxic form of N-oxide.
Researchers have collected data that strongly suggests that PAs can be registered by taste receptors of predators, acting as a deterrent from being ingested. Taste receptors are also used by the various moth species that sequester PAs, which often stimulates them to feed. As of 2005, all of the PA sequestering insects that have been studied have all evolved a system to keep concentrations of the PA pro-toxic form low within the insect's tissues.
Researchers have found a number of Arctiidae that use PAs for protection and for male pheromones or precursors of the male pheromones, and some studies have found evidence suggesting PAs have behavioral and developmental effects. Estigmene acrea, Cosmosoma myrodora, Utetheisa ornatrix, Creatonotos gangis and Creatonotos transiens are all members of the family Arctiidae and found to use PAs for their defense and/or male pheromones. Parsimony suggests that the sequestering of PAs in the larval stage evolved in the subfamily Arctiinae common ancestor. The loss of ability to sequester and use PAs has occurred in a number of species, along with the switch from larval uptake to adult uptake of PAs occurring multiple times within the Arctiinae taxon.
Members of Arctiidae typically sequester PAs from their diets, but sometimes must specifically ingest fluids excreted by plants that are not a part of their diets. Sequestered PAs are kept in various tissues and varying concentration which is dependent upon the species. PAs are found in the cuticle of all studied Arctiidae mentioned here, but some also package these chemicals into their spermatophores as seen in Creatonotos gangis and Creatonotos transiens. The display of PAs on the exoskeleton is believed to cue predators to the unpalatability of the prey.
Eisner and Eisner looked at the palatability of PA positive and negative U. ornatrix to wolf spiders, Lycosa ceratiola, in both the larval form and adult form. They found that the pyrrolizidine-positive organisms were typically released unharmed by spiders except in two field circumstances where the larvae were probably envenomated prior to the spider's release and died two days after the attack. All of the PA-negative organisms were eaten by spiders. These findings were in line with prior studies done by Eisner and Meinwald which looked at orb weavers and U. ornatrix, along with spiders being fed beetle larva covered in PAs, which they rejected. All of these findings support PAs being used for defense against predation.
Studies have further elucidated the defenses and uses of PAs in Arctiidae. One study researched C. myrodora and how PAs protect this species from spider predation, among other things. It found that PAs ingested from fluids excreted by plants aided in defense from predation. All organisms that had been permitted access to PA-containing diets were cut loose from the webs when fed to spiders. Females that had PA-deprived diets, but were allowed to mate with PA-positive males, were also released from the spiders' webs. Further observations showed that male C. myrodora have a pair of pouches where they produce PA-laden filaments, which are typically released over the female prior to copulation as a nuptial gift. Experiments show that the filaments give the females more PAs, explaining why spiders released mated PA-negative females from their webs. Most of the PAs from the males were subsequently transferred to the eggs when deposited. Three clusters of eggs that were laid after copulation with a PA-positive male all tested positive for alkaloids, and the one cluster that resulted from a PA-negative male copulation tested negative. Since the eggs receive a dose of PAs, the authors suggest that the eggs are protected from predators such as Coccinellidae beetles.
Jordan and others' study found a notable effect of the larval ingestion of PAs. Male Estigmene acrea moths that consumed PAs in their diet as larvae produced hydroxydanaidal, a volatile PA compound, and displayed their coremata (a bifid, inflatable, male-specific organ used in dispersing pheromones) in the adult stage. Larvae that were fed diets without PAs rarely displayed their coremata and did not produce hydroxydanaidal. E. acrea have been observed in the wild displaying their coremata, an activity which attracts both males and females and is known as lekking. Lekking was described by Willis and Birch in 1982, but larvae raised in the laboratory prior to this study rarely engaged in lekking or corematal displays. Scientists were unsure why this phenomenon did not occur in the lab, but laboratory-raised larvae were usually reared on commercially available food which lacks PAs. The authors suggest that the PAs are used by the males to attract other moths by releasing the volatile PA hydroxydanaidal into the air. It is suggested in this study that this strategy of mate attraction came about by tapping into the PA affinity already programmed into the moths for feeding, which is further supported by the observation that E. acrea females release their pheromones a little later in the evening than the males.
Similar uses of coremata to attract other moths have been observed in C. gangis and C. transiens, along with altered development of coremata when larvae are reared without PAs. Boppre and Schneider observed adult males of these two species that were not permitted to eat PAs. Their coremata developed into only two stalk-like projections with very few hairs arising from these stalks. Males that were given plants that produce PAs to feed upon developed long coremata with four tubes, each longer than the male's body, and each tube was highly pubescent. The authors suggest from this observation that there is a basic corematal phenotype, the two-stalked coremata, and that PAs are required to form the full coremata, which are much larger and more elaborate than the basic corematal expression. These observations were further investigated by feeding larvae different amounts of PAs, which had a direct correlation to the development of the coremata, reaching a maximum plateau around 2 mg of PAs ingested while in larval form. Similar to Jordan and others' findings, the males raised on a diet devoid of PAs did not produce hydroxydanaidal.
References
Biology terminology
Pyrrolizidine alkaloids
| Pyrrolizidine alkaloid sequestration | Chemistry,Biology | 1,641 |
1,838,280 | https://en.wikipedia.org/wiki/Slutsky%20equation | In microeconomics, the Slutsky equation (or Slutsky identity), named after Eugen Slutsky, relates changes in Marshallian (uncompensated) demand to changes in Hicksian (compensated) demand, which is known as such since it compensates to maintain a fixed level of utility.
There are two parts of the Slutsky equation, namely the substitution effect and income effect. In general, the substitution effect is negative. Slutsky derived this formula to explore a consumer's response as the price of a commodity changes. When the price increases, the budget set moves inward, which also causes the quantity demanded to decrease. In contrast, if the price decreases, the budget set moves outward, which leads to an increase in the quantity demanded. The substitution effect is due to the effect of the relative price change, while the income effect is due to the effect of income being freed up. The equation demonstrates that the change in the demand for a good caused by a price change is the result of two effects:
a substitution effect: when the price of a good changes and it becomes relatively cheaper, consumer consumption of it could hypothetically remain unchanged. If so, income would be freed up, and that money could be spent on one or more goods.
an income effect: the purchasing power of a consumer increases as a result of a price decrease, so the consumer can now purchase other products or more of the same product, depending on whether the product(s) is a normal good or an inferior good.
The Slutsky equation decomposes the change in demand for good i in response to a change in the price of good j:
∂x_i(p, w)/∂p_j = ∂h_i(p, u)/∂p_j − (∂x_i(p, w)/∂w)·x_j(p, w),
where h(p, u) is the Hicksian demand and x(p, w) is the Marshallian demand, at the vector of price levels p, wealth level (or income level) w, and fixed utility level u given by maximizing utility at the original price and income, formally presented by the indirect utility function v(p, w). The right-hand side of the equation equals the change in demand for good i holding utility fixed at u minus the quantity of good j demanded, multiplied by the change in demand for good i when wealth changes.
The first term on the right-hand side represents the substitution effect, and the second term represents the income effect. Note that since utility is not observable, the substitution effect is not directly observable. Still, it can be calculated by referencing the other two observable terms in the Slutsky equation. This process is sometimes known as the Hicks decomposition of a demand change.
The equation can be rewritten in terms of elasticity:
ε_{p,ij} = ε^h_{p,ij} − ε_{w,i}·b_j,
where ε_p is the (uncompensated) price elasticity, ε^h_p is the compensated price elasticity, ε_{w,i} the income elasticity of good i, and b_j the budget share of good j.
Overall, the Slutsky equation states that the total change in demand consists of an income effect and a substitution effect, and both effects must collectively equal the total change in demand.
The equation above is helpful because it demonstrates that changes in demand indicate different types of goods. The substitution effect is negative, as indifference curves always slope downward. However, the same does not apply to the income effect, which depends on how income affects the consumption of a good.
The income effect on a normal good is negative, so if its price decreases, the consumer's purchasing power or income increases. The reverse holds when the price increases and purchasing power or income decreases.
An example of inferior goods is instant noodles. When consumers run low on money for food, they purchase instant noodles; however, the product is not generally considered something people would normally consume daily. This is due to money constraints; as wealth increases, consumption decreases. In this case, the substitution effect is negative, but the income effect is also negative.
In any case, whether the substitution effect and income effect are positive or negative when prices increase depends on the type of good:
Normal good: substitution effect negative, income effect negative, total effect negative.
Inferior good: substitution effect negative, income effect positive, total effect ambiguous.
Giffen good: substitution effect negative, income effect positive and dominating, total effect positive.
However, it is impossible to tell in general whether the total effect will be negative for inferior complementary goods: in that case, the substitution effect and the income effect pull in opposite directions. The total effect will depend on which effect is ultimately stronger.
Derivation
While there are several ways to derive the Slutsky equation, the following method is likely the simplest. Begin by noting the identity h_i(p, u) = x_i(p, e(p, u)), where e(p, u) is the expenditure function, and u is the utility obtained by maximizing utility given p and w. Totally differentiating with respect to p_j yields the following:
∂h_i(p, u)/∂p_j = ∂x_i(p, e(p, u))/∂p_j + (∂x_i(p, e(p, u))/∂w)·(∂e(p, u)/∂p_j).
Making use of the fact that ∂e(p, u)/∂p_j = h_j(p, u) by Shephard's lemma and that, at the optimum,
h_j(p, u) = h_j(p, v(p, w)) = x_j(p, w), where v(p, w) is the indirect utility function,
one can substitute and rewrite the derivation above as the Slutsky equation.
The Slutsky matrix
The Slutsky equation can be rewritten in matrix form:
D_p x(p, w) = D_p h(p, u) − D_w x(p, w)·x(p, w)^T,
where D_p is the derivative operator with respect to prices and D_w is the derivative operator with respect to wealth.
The matrix D_p h(p, u) is known as the Hicksian substitution matrix and is formally defined as:
σ(p, u) := D_p h(p, u).
The Slutsky matrix is given by:
S(p, w) := D_p x(p, w) + D_w x(p, w)·x(p, w)^T.
When u is the maximum utility the consumer achieves at prices p and income w, that is, u = v(p, w), the Slutsky equation implies that each element of the Slutsky matrix S(p, w) is exactly equal to the corresponding element of the Hicksian substitution matrix σ(p, u). The Slutsky matrix is symmetric, and given that the expenditure function is concave, the Slutsky matrix is also negative semi-definite.
Example
A Cobb-Douglas utility function (see Cobb-Douglas production function) u = x1^0.7·x2^0.3 with two goods and income w generates Marshallian demand for goods 1 and 2 of x1 = 0.7w/p1 and x2 = 0.3w/p2.
Rearranging the Slutsky equation to put the Hicksian derivative on the left-hand side yields the substitution effect:
∂h1/∂p1 = ∂x1/∂p1 + x1·(∂x1/∂w) = −0.7w/p1² + (0.7w/p1)(0.7/p1) = −0.21w/p1².
Going back to the original Slutsky equation shows how the substitution and income effects add up to give the total effect of the price rise on quantity demanded:
∂x1/∂p1 = ∂h1/∂p1 − x1·(∂x1/∂w) = −0.21w/p1² − (0.7w/p1)(0.7/p1) = −0.7w/p1².
Thus, of the total decline of 0.7w/p1² in quantity demanded when p1 rises, 21/70 is from the substitution effect and 49/70 from the income effect. Good 1 is the good on which this consumer spends most of his income (b1 = 0.7), which is why the income effect is so large.
One can check that the answer from the Slutsky equation is the same as from directly differentiating the Hicksian demand function, which here is
h1(p1, p2, u) = u·(0.7p2/(0.3p1))^0.3,
where u is utility. The derivative is
∂h1/∂p1 = −0.3·u·(0.7p2/0.3)^0.3·p1^(−1.3),
so since the Cobb-Douglas indirect utility function is v = w·(0.7/p1)^0.7·(0.3/p2)^0.3, and u = v when the consumer uses the specified demand functions, the derivative is:
∂h1/∂p1 = −0.3·(0.7w/p1)/p1 = −0.21w/p1²,
which is indeed the Slutsky equation's answer.
The Slutsky equation also can be applied to compute the cross-price substitution effect. One might think it was zero here because when p2 rises, the Marshallian quantity demanded of good 1, x1(p1, w), is unaffected (∂x1/∂p2 = 0), but that is wrong. Again rearranging the Slutsky equation, the cross-price substitution effect is:
∂h1/∂p2 = ∂x1/∂p2 + x2·(∂x1/∂w) = 0 + (0.3w/p2)(0.7/p1) = 0.21w/(p1·p2).
This says that when p2 rises, there is a substitution effect of 0.21w/(p1·p2) towards good 1. At the same time, the rise in p2 has a negative income effect on good 1's demand, an opposite effect of the same size as the substitution effect, so the net effect is zero. This is a special property of the Cobb-Douglas function.
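A quick symbolic check of this example is sketched below with the Python sympy library; the 0.7/0.3 exponents and demand functions are the ones used above, and powsimp(force=True) is used only to collapse the fractional-power coefficients, after which both printed expressions should reduce as indicated:

```python
import sympy as sp

p1, p2, w, u = sp.symbols('p1 p2 w u', positive=True)
a, b = sp.Rational(7, 10), sp.Rational(3, 10)

x1 = a * w / p1                      # Marshallian demand for good 1
h1 = u * (a * p2 / (b * p1))**b      # Hicksian demand for good 1
v = w * (a / p1)**a * (b / p2)**b    # indirect utility function

total = sp.diff(x1, p1)                    # total effect: -0.7*w/p1**2
subst = sp.diff(h1, p1).subs(u, v)         # substitution effect (sub u = v after differentiating)
income = -x1 * sp.diff(x1, w)              # income effect: -0.49*w/p1**2

print(sp.simplify(sp.powsimp(subst, force=True)))                     # -21*w/(100*p1**2)
print(sp.simplify(sp.powsimp(total - (subst + income), force=True)))  # 0: Slutsky identity
```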
Changes in multiple prices at once
When there are two goods, the Slutsky equation in matrix form relates the 2×2 matrix of Marshallian price derivatives to the matrix of Hicksian price derivatives and the income derivatives, componentwise:
∂x_i/∂p_j = ∂h_i/∂p_j − (∂x_i/∂w)·x_j, for i, j = 1, 2.
Although strictly speaking, the Slutsky equation only applies to infinitesimal price changes, a linear approximation for finite changes is standardly used. If the prices of the two goods change by Δp1 and Δp2, the effect on the demand for each good i is:
Δx_i ≈ (∂h_i/∂p1)·Δp1 + (∂h_i/∂p2)·Δp2 − (∂x_i/∂w)·(x1·Δp1 + x2·Δp2).
Multiplying out the matrices, the effect on good 1, for example, would be
Δx1 ≈ (∂h1/∂p1)·Δp1 + (∂h1/∂p2)·Δp2 − (∂x1/∂w)·(x1·Δp1 + x2·Δp2).
The first two terms are the substitution effect. The last term is the income effect, which is composed of the consumer's response to income loss, ∂x1/∂w, multiplied by the size of the income loss from each price increase, x1·Δp1 + x2·Δp2.
Giffen goods
A Giffen good is a product in greater demand when the price increases, which is also a special case of inferior goods. In the extreme case of income inferiority, the size of the income effect overpowers the size of the substitution effect, leading to a positive overall change in demand responding to an increase in the price. Slutsky's decomposition of the change in demand into a pure substitution effect and income effect explains why the law of demand doesn't hold for Giffen goods.
See also
Consumer choice
Hotelling's lemma
Hicksian demand function
Marshallian demand function
Cobb-Douglas production function
Giffen Goods
Purchasing power
Normal good
Substitute goods
Inferior goods
Complementary goods
Demand
Eponyms in economics
Equations
Microeconomics
Mathematical economics
References
Varian, H. R. (2020). Intermediate Microeconomics: A Modern Approach (9th ed.). W.W. Norton & Company. | Slutsky equation | Mathematics | 1,759 |
147,962 | https://en.wikipedia.org/wiki/Stylobate | In classical Greek architecture, a stylobate () is the top step of the crepidoma, the stepped platform upon which colonnades of temple columns are placed (it is the floor of the temple). The platform was built on a leveling course that flattened out the ground immediately beneath the temple.
Etymology
The term stylobate comes from the Ancient Greek στυλοβάτης (stylobatēs), consisting of στῦλος (stylos), "column", and βαίνειν (bainein), "to stride, walk".
Terminology
Some methodologies use the word stylobate to describe only the topmost step of the temple's base, while stereobate is used to describe the remaining steps of the platform beneath the stylobate and just above the leveling course. Others, like John Lord, use the term to refer to the entire platform.
Architectural use
The stylobate was often designed to relate closely to the dimensions of other elements of the temple. In Greek Doric temples, the length and width of the stylobate were related, and in some early Doric temples the column height was one third the width of the stylobate. The Romans, following Etruscan architectural tradition, took a different approach in using a much higher stylobate that typically had steps only in the front, leading to the portico.
In modern architecture the stylobate is the upper part of the stepped basement of the building, or the common basement floor, combining several buildings. Today, stylobates are popular in use in the construction of high-rise buildings.
See also
Scamilli impares
Notes
References
Architectural elements
Ancient Greek architecture | Stylobate | Technology,Engineering | 337 |
16,835,402 | https://en.wikipedia.org/wiki/ZNF268 | Zinc finger protein 268 is a protein that in humans is encoded by the ZNF268 gene. ZNF268 is associated with cervical cancer.
References
Further reading | ZNF268 | Chemistry | 37 |
38,428,523 | https://en.wikipedia.org/wiki/Sodium%20ricinoleate | Sodium ricinoleate is the sodium salt of ricinoleic acid, the principal fatty acid derived from castor oil. It is used in making soap, where its molecular structure causes it to lather more easily than comparable sodium soaps derived from fatty acids. It is a bactericide. It exhibits several polymorphic structural phases.
As a surfactant, sodium ricinoleate is an irritant to human skin and mucous membranes, causing hypersensitivity responses. These are due to castor bean constituents, which can be removed in order to prepare it as a food-grade ingredient.
Sodium ricinoleate was a constituent in toothpaste and was the 'SR' of Gibbs SR toothpaste, the first product to be advertised on British TV (in 1955).
References
Citations
Organic sodium salts | Sodium ricinoleate | Chemistry | 171 |
19,341,001 | https://en.wikipedia.org/wiki/Dual%20Work%20Exchanger%20Energy%20Recovery | The Dual Work Exchanger Energy Recovery (DWEER) is an energy recovery device developed in the 1990s by DWEER Bermuda and licensed by Calder AG for use in the Caribbean. Seawater reverse osmosis (SWRO) requires high pressure, and some of the energy in the reject stream can be reused by means of this device. According to Calder AG, 97% of the energy in the reject stream is recovered.
The DWEER system uses a piston-type, double-chamber, reciprocating, hydraulically driven pump and a patented valve system in a high-pressure batch process with large pressure vessels, similar to a locomotive, to capture and transfer the energy lost in the membrane reject stream. Its advantage is its high efficiency rate, but it suffers from complex and large mechanical components which are susceptible to seawater corrosion due to their metal composition.
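To put the recovery figure in perspective, a back-of-envelope sketch of the hydraulic power carried by the reject stream; the 60 bar pressure and 100 m³/h flow are illustrative assumptions, not values from the source:

```python
# Hydraulic power in an SWRO reject stream: P = Q * delta_p.
reject_pressure_pa = 60e5        # assumed 60 bar, in pascals
reject_flow_m3s = 100 / 3600     # assumed 100 m^3/h, in m^3/s
recovery = 0.97                  # fraction quoted by the manufacturer

power_w = reject_pressure_pa * reject_flow_m3s
recovered_w = recovery * power_w
print(f"{power_w / 1e3:.0f} kW in reject stream")   # ~167 kW
print(f"{recovered_w / 1e3:.0f} kW recovered")      # ~162 kW
```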
References
Water power
Membrane technology | Dual Work Exchanger Energy Recovery | Chemistry | 173 |
78,222,241 | https://en.wikipedia.org/wiki/Hortiboletus%20coccyginus | Hortiboletus coccyginus, commonly known as the sumac-colored bolete, is a species of mushroom in the genus Hortiboletus. It is rare.
Taxonomy
Hortiboletus coccyginus was first described from California in 1975, originally as Boletus coccyginus. In 2020, J. L. Frank transferred it to the genus Hortiboletus.
Description
Hortiboletus coccyginus has a rosy-colored cap that is about wide. The stipe is about tall and about wide.
Habitat and ecology
Hortiboletus coccyginus grows under several different types of trees, including coast live oak, tanoak, and Douglas-fir. It is known to grow in mixed forests, and it is known from California and Oregon. Despite being rare, it is listed by the IUCN Red List as Least Concern.
See also
List of North American boletes
References
Boletaceae
Fungi described in 1975
Fungi of North America
Taxa named by Harry Delbert Thiers
Fungus species | Hortiboletus coccyginus | Biology | 226 |
21,397,748 | https://en.wikipedia.org/wiki/ACube%20Systems%20Srl | ACube Systems Srl is a company formed in January 2007 as a joint effort of the Italian companies Alternative Holding Group Srl, Soft3 and Virtual Works.
The three companies had for years been engaged in the sale, distribution and engineering of hardware and software for mainstream systems and alternative platforms, and they joined efforts in the realization of the Sam440ep platform. The ongoing dispute over ownership of AmigaOS cast doubt on the actual release of AmigaOS 4 for this new hardware; support for the Sam440ep was later introduced in AmigaOS 4.1. Since November 2007, ACube Systems has distributed AmigaOS 4.0 for Amiga computers with PowerPC CPU cards on behalf of Hyperion Entertainment.
They also built the first Amiga redesign in hardware, the Minimig.
In September 2011, Acube Systems introduced AmigaOne 500 based on Sam460ex mainboard.
References
See also
Amiga companies
Italian companies established in 2007
Computer hardware companies
Computer systems companies
AmigaOS 4
Privately held companies of Italy
Technology companies established in 2007
Italian brands | ACube Systems Srl | Technology | 214 |
41,419,956 | https://en.wikipedia.org/wiki/Description-experience%20gap | The description-experience gap is a phenomenon in experimental behavioral studies of decision making. The gap refers to the observed differences in people's behavior depending on whether their decisions are made towards clearly outlined and described outcomes and probabilities or whether they simply experience the alternatives without having any prior knowledge of the consequences of their choices.
In both described and experienced choice tasks, the experimental task usually involves selecting between one of two possible choices that lead to certain outcomes. The outcome could be a gain or a loss and the probabilities of these outcomes vary. Of the two choices, one is probabilistically safer than the other. The other choice, then, offers a comparably improbable outcome. The specific payoffs or outcomes of the choices, in terms of the magnitude of their potential gains and losses, varies from study to study.
Description
Description-based alternatives or prospects are those where much of the information regarding each choice is clearly stated. That is, the participant is shown the potential outcomes for both choices as well as the probabilities of all the outcomes within each choice. Typically, feedback is not given after a choice is selected. That is, the participant is not shown what consequences their selections led to. Prospect theory guides much of what is currently known regarding described choices.
According to prospect theory, the decision weight of described prospects are considered differently depending on whether the prospects have a high or low probability and the nature of the outcomes. Specifically, people's decisions differ depending on whether the described prospects are framed as gains or losses, and whether the outcomes are sure or probable.
Prospects are termed as gains when the two possible choices both offer a chance to receive a certain reward. Losses are those where the two possible choices both result in a reduction of a certain resource. An outcome is said to be sure when its probability is absolutely certain, or very close to 1. A probable outcome is one that is comparably more unlikely than the sure outcome. For described prospects, people tend to assign a higher value to sure or more probable outcomes when the choices involve gains; this is known as the certainty effect. When the choices involve losses, people tend to assign a higher value to the more improbable outcome; this is called the reflection effect because it leads to the opposite result of the certainty effect.
Experience
Previous studies focusing on description-based prospects suffered from one drawback: the lack of external validity. In the natural environment, people's decisions must be made without a clear description of the probabilities of the alternatives. Instead, decisions must be made by drawing upon past experiences. In experience-based studies, then, the outcomes and probabilities of the two possible choices are not initially presented to the participants. Instead, participants must sample from these choices, and they can only learn the outcomes from feedback after making their choices. Participants can only estimate the probabilities of the outcomes based on experiencing the outcomes.
Contrary to the results obtained by prospect theory, people tended to underweight the probabilities of rare outcomes when they made decisions from experience. That is, they in general tended to choose the more probable outcome much more often than the rare outcomes; they behaved as if the rare outcomes were more unlikely than they really were. The effect has been observed in studies involving repeated and small samples of choices. However, people tended to choose the riskier choice when deciding from experience for tasks that are framed in terms of gains, and this, too, is in contrast with decisions made from description.
As demonstrated above, decisions appear to be made very differently depending on whether choices are made from experience or description; that is, a description-experience gap has been demonstrated in decision making studies. The example of the reverse reflection effect aptly demonstrates the nature of the gap. Recall that description-based prospects lead to the reflection effect: people are risk averse for gains and risk seeking for losses. However, experience-based prospects results in a reversal of the reflection effect such that people become risk seeking for gains and risk averse for losses. More specifically, the level of risk-taking behavior towards gains for participants in the experience task is virtually identical to the level of risk-taking towards losses for participants in the description task. The same effect is observed for gains versus losses in experience and description tasks. There are a few explanations and factors that contribute to the gap; some of which will be discussed below.
One factor that may contribute to the gap is the nature of the sampling task. In a sampling paradigm, people are allowed to respond to a number of prospects. Presumably, they form their own estimations of the probabilities of the outcomes through sampling. However, some studies rely on people making decisions from a small sample of prospects. Due to the small samples, people may never even experience the low-probability event, and this might contribute to people's underweighting of rare events. However, description-based studies involve making the exact probabilities known to the participant. Since the participants here are immediately made aware of the rareness of an event, they are unlikely to undersample rare events.
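A minimal simulation sketch of this undersampling account; the probability and sample size are hypothetical values chosen only for illustration:

```python
import random

# Rare outcome with probability 0.1; in an experience task it must be
# inferred from a small sample of draws rather than a stated description.
P_RARE, SAMPLE_SIZE, RUNS = 0.1, 10, 100_000
random.seed(1)

never_seen = sum(
    all(random.random() >= P_RARE for _ in range(SAMPLE_SIZE))
    for _ in range(RUNS)
)

# With 10 draws the rare event is absent from the sample in about
# (1 - 0.1)**10 ~= 35% of runs, so a sampler can easily behave as if
# its probability were zero -- one route to underweighting.
print(never_seen / RUNS)  # ~0.349
```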
The results from experience-based studies may be the result of a recency effect. The recency effect shows that greater weight or value is assigned to more recent events. Given that rare events are uncommon, the more common events are more likely to take recency and therefore be weighted more than rare events. The recency effect, then, may be responsible for the underweighting of rare events in decisions made from experience. Given that description-based studies usually involving responding to a limited number of trials or only one trial, recency effects likely do not have as much of an influence on decision making in these studies or may even be entirely irrelevant.
Another variable which may be driving the results for the experience-based decisions paradigm is a basic tendency to avoid delayed outcomes: alternatives with positive rare events are on average advantageous only in the long term, while alternatives with negative rare events are on average disadvantageous only in the long term. Hence, focusing on short-term outcomes produces underweighting of rare events. Consistent with this notion, it has been found that increasing the short-term temptation (e.g., by showing outcomes from all options, or foregone payoffs) increases the underweighting of rare events in decisions from experience.
Since experience-based studies include multiple trials, participants must learn about the outcomes of the available choices. The participants must base their decisions on previous outcomes, so they must therefore rely on memory when learning the outcomes and their probabilities. Biases for more salient memories, then, may be the reason for greater risk seeking in gains choices in experience-based studies. The assumption here is that a more improbable but greater reward may produce a more salient memory.
To reiterate, prospect theory offers sound explanations for how people behave towards description-based prospects. However, the results from experience-based prospects tend to show opposite forms of responding. In described prospects, people tend to overweight the extreme outcomes such that they expect these probabilities to be more likely than they really are. Whereas in experienced prospects, people tend to underweight the probability of the extreme outcomes and therefore judge them as being even less likely to occur.
A highly relevant example of the description-experience gap has been illustrated: the difference in opinions on vaccination between doctors and patients. Patients who learn about vaccination are usually exposed to only information regarding the probabilities of the side effects of the vaccines so they are likely to overweight the likelihood of these side effects. Although doctors learn about the same probabilities and descriptions of the side effects, their perspective is also shaped by experience: doctors have the direct experience of vaccinating patients and they are more likely to recognize the unlikelihood of the side effects. Due to the different ways in which doctors and patients learn about the side effects, there is potential disagreement on the necessity and safety of vaccination.
Typically in natural settings, however, peoples’ awareness of the probabilities of certain outcomes and their prior experience cannot be separated when they make decisions that involve risk. In gambling settings, for instance, players can participate in a game with some level of understanding of the probabilities of the possible outcomes and what specifically the outcomes lead to. For example, players know that there are six sides to a die, and that each side has a one in six chance of being rolled. However, a player's decisions in the game must also be influenced by his or her past experiences of playing the game.
See also
Decision theory
Prospect theory
References
External links
An introduction to Prospect Theory (econport.com)
Prospect Theory (behaviouralfiance.net)
Behavioral economics
Prospect theory
Decision theory | Description-experience gap | Biology | 1,775 |
17,548,151 | https://en.wikipedia.org/wiki/Comparison%20of%20real-time%20operating%20systems | This is a list of real-time operating systems (RTOSs). An RTOS is an operating system in which the time taken to process an input stimulus is less than the time elapsed until the next input stimulus of the same type.
References
External links
2024 RTOS Performance Report (FreeRTOS / ThreadX / PX5 / Zephyr) - Beningo Embedded Group
2013 RTOS Comparison (Nucleus / ThreadX / ucOS / Unison) - Embedded Magazine
Embedded operating systems
Real-time operating systems | Comparison of real-time operating systems | Technology | 105 |
6,887,590 | https://en.wikipedia.org/wiki/Diosgenin | Diosgenin, a phytosteroid sapogenin, is the product of the hydrolysis of saponins by acids, strong bases, or enzymes; the saponins are extracted from the tubers of Dioscorea wild yam species, such as the Kokoro. It is also present in smaller amounts in a number of other species. As the sugar-free (aglycone) product of such hydrolysis, diosgenin is used for the commercial synthesis of cortisone, pregnenolone, progesterone, and other steroid products.
Sources
It is present in detectable amounts in Costus speciosus, Smilax menispermoidea, Helicteres isora, species of Paris, Aletris, Trigonella, and Trillium, and in extractable amounts from many species of Dioscorea – D. althaeoides, D. colletti, D. composita, D. floribunda, D. futschauensis, D. gracillima, D. hispida, D. hypoglauca, D. mexicana, D. nipponica, D. panthaica, D. parviflora, D. septemloba, and D. zingiberensis.
Industrial uses
Diosgenin is a chemical precursor for several hormones, starting with the Marker degradation process, which includes synthesis of progesterone. The process was used in the early manufacturing of combined oral contraceptive pills. Diosgenin in dietary supplements is not a physiological precursor to estradiol or progesterone, and the use of such products as wild yam has no hormonal activity in the human body.
See also
List of neurosteroids
Spirostanes
References
External links
Estrogens
Hormonal contraception
Progestogens
Spiro compounds
Steroids
Tetrahydrofurans
Tetrahydropyrans | Diosgenin | Chemistry | 409 |
41,479,084 | https://en.wikipedia.org/wiki/Altered%20Schaedler%20flora | The altered Schaedler flora (ASF) is a community of eight bacterial species: two lactobacilli, one Bacteroides, one spiral bacterium of the Flexistipes genus, and four extremely oxygen sensitive (EOS) fusiform-shaped species. The bacteria are selected for their dominance and persistence in the normal microflora of mice, and for their ability to be isolated and grown in laboratory settings. Germ-free animals, mainly mice, are colonized with ASF for the purpose of studying the gastrointestinal (GI) tract. Intestinal mutualistic bacteria play an important role in affecting gene expression of the GI tract, immune responses, nutrient absorption, and pathogen resistance. The standardized microbial cocktail enabled the controlled study of microbe and host interactions, the role of microbes, pathogen effects, and intestinal immunity and disease associations, such as cancer, inflammatory bowel disease, diabetes, and other inflammatory or autoimmune diseases. Also, compared to germ-free animals, ASF mice have a fully developed immune system, resistance to opportunistic pathogens, and normal GI function and health, and are a good representation of normal mice.
History
The GI tract is particularly difficult to study due to its complex host-pathogen interactions. With 10^7 to 10^11 bacteria and more than 400 species, with variations between individuals, there are many complications in the study of a normal gastrointestinal system. For example, it is problematic to assign biological function to specific microbes and community structure, and to investigate the respective immune responses. Furthermore, the varying mouse microbiomes need to be under controlled conditions for repetitions of the experiments. Germ-free mice and specific pathogen free (SPF) mice are helpful in addressing some of the issues, but inadequate in many areas. Germ-free mice are not a good representation of normal mice, with issues of an enlarged cecum, low reproductive rates, a poorly developed immune system, and reduced health. SPF mice still contain varying microbiota, just without certain known pathogen species. There is a need in the scientific field for a known bacterial mixture that is necessary and sufficient for healthy mice.
In the mid-1960s, Russell W. Schaedler, M.D., isolated and grew bacteria from conventional and SPF laboratory mice. Aerobic and less oxygen-sensitive anaerobic bacteria are easy to culture. Fusiform-shaped anaerobes and other EOS bacteria are much more difficult to culture, even though they represent the majority of the normal rodent microbiota. He selected the bacteria that dominated and could be isolated in culture, and then colonized germ-free mice with different combinations of bacteria. For example, one combination could include Escherichia coli, Streptococcus fecalis, Lactobacillus acidophilus, L. salivarius, Bacteroides distasonis, and an EOS fusiform-shaped Clostridium species. Certain defined microflora are able to restore germ-free mice to resemble normal mice, with reduced cecal volume, restored reproductive ability, colonization resistance, and a well-developed immune system. Named the Schaedler flora, the defined microflora combinations were widely used in gnotobiotic studies.
In 1978, the National Cancer Institute requested Roger Orcutt of Charles River Laboratories, whose Ph.D. mentor was Dr. Schaedler, to revise a new microflora for standardizing all of its isolator-maintained nuclear stocks and strains of mice. In what was named "the altered Schaedler flora", four bacteria of the original "Schaedler cocktail" microflora were kept: the two lactobacilli, the Bacteroides, and the EOS fusiform-shaped bacterium. Four more bacteria from the microbiome isolates were added: a spirochete-shaped bacterium and three new EOS fusiform-shaped bacteria. Due to the limited technology of the time, not much was known of the specific bacterial genera and species. These bacteria are persistent and dominant in the GI tracts of normal and SPF mice. Confirmation of the correct microbiota presence was limited to looking at cell morphology, biochemical traits and growth characteristics.
Dr. Orcutt lamented that he could not include the segmented filamentous bacterium of the mouse small intestine, which is intimately involved with the host's immune system, because it could not be cultured in vitro. To this day, over 40 years later, it can only be maintained in vivo and still eludes isolation in pure culture.
Bacteria
With advances in biotechnology, researchers were able to determine the precise genus and species of the ASF bacteria using sequence analysis of the 16S rRNA gene. The strains identified differ from their presumptive identities. The distribution of the bacterial species in the gut depends on their need for or aversion to oxygen, on flow rate, and on substrate abundance, with variability based on age, sex, and the other microorganisms present in the mice.
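As a rough illustration of the comparison underlying such identification (not a method reported for the ASF work itself), the following minimal Python sketch assigns a presumptive identity to an isolate by percent identity between pre-aligned 16S rRNA fragments. The sequences, species names, and the ~97% species-level cutoff are illustrative assumptions, not actual ASF data.

```python
# Illustrative sketch only: assigning a presumptive identity to an isolate by
# percent identity between pre-aligned 16S rRNA fragments. The sequences and
# the ~97% species-level cutoff are hypothetical stand-ins, not ASF data.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of matching, non-gap positions between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    matches = sum(a == b and a != "-" for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

def closest_reference(query: str, references: dict[str, str]) -> tuple[str, float]:
    """Return the reference species whose 16S fragment best matches the query."""
    best = max(references, key=lambda name: percent_identity(query, references[name]))
    return best, percent_identity(query, references[best])

# Hypothetical toy alignments standing in for full-length (~1.5 kb) 16S genes.
references = {
    "Lactobacillus acidophilus": "ACGTGGCTA-CGATCGTACG",
    "Lactobacillus murinus":     "ACGTGGCTAACGTTCGTACG",
}
isolate = "ACGTGGCTAACGTTCGTACG"

name, identity = closest_reference(isolate, references)
# Isolates below roughly 97% identity to every reference are commonly treated
# as candidate novel species (as was concluded for ASF 360).
print(f"closest match: {name} ({identity:.1f}% identity)")
```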
ASF 360 and ASF 361 are lactobacilli. Lactobacilli are rod-shaped, Gram-positive, aerotolerant bacteria, and common colonizers of the squamous epithelium of the mouse stomach. ASF 360 was thought to be L. acidophilus, but 16S rRNA results showed that it is closely related to, yet distinct from, L. acidophilus: it is a novel Lactobacillus species clustering with L. acidophilus and L. lactis. ASF 361 has nearly identical 16S rRNA sequences to L. murinus and L. animalis, both of which are routinely found in the GI tracts of mice and rats; a thorough examination of these two species and their strains is necessary to determine the identity of ASF 361 with more confidence. ASF 361 is completely distinct from L. salivarius, the species it was believed to be. ASF 360 and ASF 361 colonize the stomach in high numbers and then slough off and travel through the small intestine and the cecum.
ASF 519 is related to B. distasonis, the species it was mistaken for before 16S rRNA sequencing was available; like the lactobacilli above, however, it is a distinct species by 16S rRNA evidence. Bacteroides species are often found in the GI tracts of mammals and include non-motile, Gram-negative, anaerobic, rod-shaped bacteria. Many Bacteroides species have recently been recognized as actually belonging to other genera, such as Porphyromonas and Prevotella. ASF 519 belongs to the newly named genus Parabacteroides, along with the bacteria formerly known as [B.] distasonis, [B.] merdae, CDC group DF-3, and [B.] forsythus.
The spiral-shaped obligate anaerobe ASF 457 is found in small numbers in the small intestine and in high concentration in the large intestine. This bacterium is related to Geovibrio ferrireducens, Deferribacter thermophilus, and Flexistipes sinusarabici, and was later named Mucispirillum schaedleri. The species groups with the iron-reducing environmental isolates of the Flexistipes phylum.
EOS fusiform bacteria make up the great majority of the autochthonous intestinal microbiota and are found mainly in the large intestine, where they vastly outnumber facultative anaerobic and aerobic bacteria. All four fusiform-shaped anaerobes belong to the low G+C content, Gram-positive bacteria. ASF 356 is of the genus Clostridium, closely related to Clostridium propionicum. ASF 502 is most closely related to Ruminococcus gnavus. ASF 492 is confirmed by 16S rRNA sequences to be Eubacterium plexicaudatum, and is closely related to Roseburia cecicola. ASF 356, ASF 492, and ASF 502 are all part of the low G+C, Gram-positive bacteria of Clostridium cluster XIV. ASF 500 branches more deeply within the low G+C, Gram-positive Firmicutes (Bacillus-Clostridium group), and little can be found in the GenBank database on this branch of the Clostridium cluster.
Mouse models
Only mice have been colonized with ASF in experiments, since the ASF bacteria originate from the mouse intestinal microbiome. Germfree mice are colonized with ASF by one of two methods. In the first, pure cultures of each living ASF bacterium are grown under anaerobic conditions in the laboratory; the lactobacilli and Bacteroides are given by gavage to germfree mice first to establish a microbial environment in the GI tract, which then supports colonization by the spiral-shaped and fusiform bacteria given later. The alternative is to inoculate the drinking water of germfree mice with fresh cecal and colonic feces from gnotobiotic (ASF) mice over a period of four days. The establishment and concentration of each bacterial species vary slightly with the age, sex, and environmental conditions of the mice.
Experimental results validate the dominance and persistence of the ASF in colonized mice even after four generations. The mice can be kept to the same standards as germfree mice, with sterilized water, a germfree environment, and careful handling. Although this ensures reliable propagation of the ASF in the mouse intestine, it is labor-intensive and not a good representation of physiological conditions. ASF mice can also be raised under the same conditions as normal mice, because ASF colonization addresses the immunological, pathological, and physiological weaknesses of germfree mice. ASF mice maintain the eight bacterial species under normal conditions, although variations in bacterial strains and the introduction of small numbers of other commensal, mutualistic, or pathogenic microbes can occur over time. Isogenic mice that cohabit show little variation in ASF profile, while litters split among different cages diverge in bacterial strains. Once established, though, the ASF community is highly stable over time in the absence of environmental or housing perturbations.
Uses in research
ASF can be used to study a variety of activities involving the intestinal tract. This includes the study of gut microbiome community, metabolism, immunity, homeostasis, pathogenesis, inflammation, and diseases. Experiments comparing germfree, ASF, and pathogen-infected mice can demonstrate the role of commensals in maintaining the host health.
Intestinal homeostasis is maintained by host-microbe interactions and host immunity, and is critical for the digestion of food and protection against pathogens. Bouskra et al. studied the regulation of intestinal flora and the immune system, finding IgA-producing B cells in the Peyer's patches, intestinal lymphoid tissues and follicles, and mesenteric lymph nodes; they used ASF to test the maturation of lymphoid follicles into large B cell clusters driven by toll-like receptor signaling. In another study, innate detection of microbes was shown to shape the adaptive immune system that maintains intestinal homeostasis: Geuking et al. examined the role of regulatory T cells in limiting microbe-triggered intestinal inflammation and shaping the T cell compartment. Using ASF, they found that intestinal colonization resulted in the activation and generation of colonic Treg cells, whereas in germfree mice Th17 and Th1 responses dominate.
The bacterial microenvironment is very important in the pathogenesis of clinical and experimental chronic intestinal inflammation. Whary et al. examined Helicobacter rodentium infection and the resulting ulcerative typhlocolitis, sepsis, and morbidity. Using ASF mice, they showed a decrease in disease progression attributable to colonization resistance in the lower bowel conferred by the normal anaerobic flora. In another summary, Fox examined the relationship between the gut microbiome and the onset of inflammatory bowel disease (IBD) upon infection with H. bilis. H. bilis is noted to elicit a heterologous immune response to the lower gut flora, both activating pro-inflammatory cytokine and dendritic cell activity and promoting probiotic anti-inflammatory activity through the presentation of mutualist antigens. In such pathogen infection studies, the ASF lactobacilli and Bacteroides help moderate bowel inflammation in a balanced manner.
Beyond studies of bacterial pathogens, microflora communities, intestinal immune interactions, and disease, ASF has been used in experiments examining the transmission of retroviruses. Kane et al. found that the mouse mammary tumor virus is transmitted most efficiently across mucosal surfaces colonized by bacteria: the retrovirus has evolved to rely on interactions with the microbiota and toll-like receptors to evade immune pathways.
Problems
ASF is not a comprehensive representation of the more than 400 diverse bacterial species that normally occupy the mouse GI tract. Even in SPF mice, there are many Helicobacter and filamentous species not included in ASF, as well as many bacteria that cannot be cultured in laboratory settings because their environmental and symbiotic needs are not met. The gut bacteria make up a complex microbial community that supports its members and the development of the host GI tract and immune system.
Many bacteria are specifically associated with the production of certain metabolites, or with signaling pathways that maintain the survival of the microflora. For example, hippurate and chlorogenic acid metabolite levels in mice change with the microflora; their synthesis pathways depend on multiple bacterial species, not all of which are present in ASF. This limits the bioavailability of nutrients to both host and microbes.
Additional bacterial strains may need to be added for certain studies of metabolism, pathogenesis, or microbial interactions. It is impossible to study the complete organization of the gut microbiome and all of its contributions to the host, especially in relation to disease development and nutrition, with only eight microbes. Furthermore, there are differences between mouse and human microflora, so studies using ASF mice to model human inflammatory diseases such as IBD, arthritis, and cancer have limitations. ASF serves only as a basis for developing hypotheses about mice with complex microflora.
See also
Microbiome
References
Bacteria | Altered Schaedler flora | Biology | 3,090 |
65,045,843 | https://en.wikipedia.org/wiki/List%20of%20countries%20by%20thorium%20resources | Thorium resources are the estimated mineral reserves of thorium on Earth. Thorium is a potential future source of low-carbon energy and has been demonstrated to perform as a nuclear fuel in several reactor designs. It is more abundant than uranium in the Earth's crust. Thorium resources have not been estimated and assessed with the same level of confidence as uranium resources. Approximately 6 million tonnes of thorium have been estimated globally, based on currently limited exploration and mainly on historical data.
Thorium resources are found in over 35 countries around the world. As there is currently negligible commercial use of thorium, the resources should be considered potentially viable according to the United Nations Framework Classification for Resources. Figures are given in metric tonnes of thorium metal.
See also
Thorium
Occurrence of thorium
List of countries by uranium reserves
Thorium fuel cycle
Thorium-based nuclear power
Thorium Energy Alliance
Nuclear power
References
Sources
Nuclear fuels
Nuclear technology
Thorium | List of countries by thorium resources | Physics | 200 |