In philosophy of science, strong inference is a model of scientific inquiry that emphasizes the need for alternative hypotheses, rather than a single hypothesis, in order to avoid confirmation bias.
The term "strong inference" was coined by John R. Platt, a biophysicist at the University of Chicago. Platt notes that some fields, such as molecular biology and high-energy physics, seem to adhere strongly to strong inference, with very beneficial results for the rate of progress in those fields.
== The single hypothesis problem ==
The problem with single hypotheses, confirmation bias, was aptly described by Thomas Chrowder Chamberlin in 1897:
The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for [one’s] intellectual child springs into existence, and as the explanation grows into a definite theory [one’s] parental affections cluster about [the] offspring and it grows more and more dear .... There springs up also unwittingly a pressing of the theory to make it fit the facts and a pressing of the facts to make them fit the theory...
The temptation to misinterpret results that contradict the desired hypothesis is probably irresistible.
Despite the admonitions of Platt, reviewers of grant applications often require "A Hypothesis" as part of the proposal (note the singular). Peer review of research can help avoid the mistakes of single hypotheses, but only so long as the reviewers are not in the thrall of the same hypothesis. If there is a shared enthrallment among the reviewers in a commonly believed hypothesis, then innovation becomes difficult because alternative hypotheses are not seriously considered, and sometimes not even permitted.
== Strong Inference ==
The method, which closely resembles the classical scientific method, is described as the following sequence of steps (a minimal sketch of the resulting loop follows the list):
Devising alternative hypotheses;
Devising a crucial experiment (or several of them), with alternative possible outcomes, each of which will, as nearly as possible, exclude one or more of the hypotheses;
Carrying out the experiment(s) so as to get a clean result;
Recycling the procedure, making subhypotheses or sequential hypotheses to refine the possibilities that remain, and so on.
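Read procedurally, Platt's steps amount to an iterative elimination loop over a set of competing hypotheses. The following Python sketch is only an illustration of that loop under stated assumptions: the hypothesis objects with a `compatible_with` method and the `design_crucial_experiment`, `run` and `refine` helpers are hypothetical placeholders, not part of Platt's paper.

```python
def strong_inference(hypotheses, design_crucial_experiment, run, refine, max_rounds=10):
    """Illustrative sketch of the strong-inference loop (hypothetical helpers)."""
    for _ in range(max_rounds):
        if len(hypotheses) <= 1:
            break  # a single survivor (or none) remains; stop recycling
        # Step 2: devise a crucial experiment whose possible outcomes each
        # exclude one or more of the surviving hypotheses.
        experiment = design_crucial_experiment(hypotheses)
        # Step 3: carry out the experiment so as to get a clean result.
        outcome = run(experiment)
        # Exclude every hypothesis that is incompatible with the outcome.
        survivors = [h for h in hypotheses if h.compatible_with(outcome)]
        # Step 4: recycle, refining the survivors into subhypotheses.
        hypotheses = refine(survivors)
    return hypotheses
```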
The methods of grey system theory effectively embody strong inference. In such methods, the first step is the nullification of the single hypothesis by assuming that the true information about the system under study is only partially known.
== Criticisms ==
The original paper outlining strong inference has been criticized, particularly for overstating the degree to which certain fields actually used the method.
== Strong inference plus ==
The limitations of strong inference can be addressed by adding two preceding phases:
An exploratory phase: at this point information is inadequate, so observations are chosen randomly, intuitively, or on the basis of scientific creativity.
A pilot phase: in this phase statistical power is determined by replicating experiments under identical experimental conditions (a generic sample-size sketch is given below).
These phases create the critical seed observation(s) upon which one can base alternative hypotheses.
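As a rough illustration of the pilot-phase power calculation, the sketch below uses a standard two-sample normal approximation to turn a pilot estimate of effect size and spread into a required sample size. The formula is generic statistics, not something prescribed by the strong-inference-plus proposal itself, and the pilot numbers are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_group(effect, sd, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# Hypothetical pilot replicates: mean difference of 1.2 units, pooled SD of 2.0.
print(sample_size_per_group(effect=1.2, sd=2.0))  # ~44 subjects per group
```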
== References ==
Unification of theories about observable fundamental phenomena of nature is one of the primary goals of physics. The two great unifications to date are Isaac Newton’s unification of gravity and astronomy, and James Clerk Maxwell’s unification of electromagnetism; the latter has been further unified with the concept of electroweak interaction. This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything.
== Unification of gravity and astronomy ==
The "first great unification" was Isaac Newton's 17th century unification of gravity, which brought together the understandings of the observable phenomena of gravity on Earth with the observable behaviour of celestial bodies in space.
His work is credited with laying the foundations of future endeavors for a grand unified theory. For example, it has been stated that "If we have to take any single individual as the originator of the quest for a unified theory of physics, and, by implication, the whole of knowledge, it has to be Newton." Physicist Steven Weinberg stated that "It is with Isaac Newton that the modern dream of a final theory really begins".
== Unification of magnetism, electricity, light and related radiation ==
The ancient Chinese people observed that certain rocks such as lodestone and magnetite were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. However, prior to ancient Chinese observations of magnetism, the ancient Greeks knew of other objects, such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two classes of observations of nature in terms of two root causes (electricity and magnetism). However, work in the 19th century revealed that these two forces were just two different aspects of one force – electromagnetism.
The "second great unification" was James Clerk Maxwell's 19th century unification of electromagnetism. It brought together the understandings of the observable phenomena of magnetism, electricity and light (and more broadly, the spectrum of electromagnetic radiation). This was followed in the 20th century by Albert Einstein's unification of space and time, and of mass and energy through his theory of special relativity. Later, Paul Dirac developed quantum field theory, unifying quantum mechanics and special relativity.
A relatively recent unification of electromagnetism and the weak nuclear force treats them as two aspects of the electroweak interaction.
== Unification of the remaining fundamental forces: theory of everything ==
This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything – it remains perhaps the most prominent of the unsolved problems in physics. There remain four fundamental forces which have not been decisively unified: the gravitational and electromagnetic interactions, which produce significant long-range forces whose effects can be seen directly in everyday life, and the strong and weak interactions, which produce forces at minuscule, subatomic distances and govern nuclear interactions. Electromagnetism and the weak interactions are widely considered to be two aspects of the electroweak interaction. Attempts to unify quantum mechanics and general relativity into a single theory of quantum gravity, a program ongoing for over half a century, have not yet been decisively resolved; current leading candidates are M-theory, superstring theory and loop quantum gravity.
== References ==
Open science is the movement to make scientific research (including publications, data, physical samples, and software) and its dissemination accessible to all levels of society, amateur or professional. Open science is transparent and accessible knowledge that is shared and developed through collaborative networks. It encompasses practices such as publishing open research, campaigning for open access, encouraging scientists to practice open-notebook science (such as openly sharing data and code), broader dissemination and engagement in science and generally making it easier to publish, access and communicate scientific knowledge.
Usage of the term varies substantially across disciplines, with a notable prevalence in the STEM disciplines. Open research is often used quasi-synonymously to address the gap that the denotation of "science" might have regarding an inclusion of the Arts, Humanities and Social Sciences. The primary focus connecting all disciplines is the widespread uptake of new technologies and tools, and the underlying ecology of the production, dissemination and reception of knowledge from a research-based point of view.
As Tennant et al. (2020) note, the term open science "implicitly seems only to regard ‘scientific’ disciplines, whereas open scholarship can be considered to include research from the Arts and Humanities, as well as the different roles and practices that researchers perform as educators and communicators, and an underlying open philosophy of sharing knowledge beyond research communities."
Open science can be seen as a continuation of, rather than a revolution in, practices begun in the 17th century with the advent of the academic journal, when the societal demand for access to scientific knowledge reached a point at which it became necessary for groups of scientists to share resources with each other. In modern times there is debate about the extent to which scientific information should be shared. The conflict that led to the Open Science movement is between the desire of scientists to have access to shared resources versus the desire of individual entities to profit when other entities take part of their resources. Additionally, the status of open access and resources that are available for its promotion are likely to differ from one field of academic inquiry to another.
== Principles ==
The six principles of open science are:
Open methodology
Open source
Open data
Open access
Open peer review
Open educational resources
== Background ==
Science is broadly understood as collecting, analyzing, publishing, reanalyzing, criticizing, and reusing data. Proponents of open science identify a number of barriers that impede or dissuade the broad dissemination of scientific data.
These include financial paywalls of for-profit research publishers, restrictions on usage applied by publishers of data, poor formatting of data or use of proprietary software that makes it difficult to re-purpose, and cultural reluctance to publish data for fears of losing control of how the information is used.
According to the FOSTER taxonomy, Open science can often include aspects of Open access, Open data and the open source movement, since modern science requires software to process data and information.
Open research computation also addresses the problem of reproducibility of scientific results.
=== Types ===
The term "open science" does not have any one fixed definition or operationalization. On the one hand, it has been referred to as a "puzzling phenomenon". On the other hand, the term has been used to encapsulate a series of principles that aim to foster scientific growth and its complementary access to the public. Two influential sociologists, Benedikt Fecher and Sascha Friesike, have created multiple "schools of thought" that describe the different interpretations of the term.
According to Fecher and Friesike ‘Open Science’ is an umbrella term for various assumptions about the development and dissemination of knowledge. To show the term's multitudinous perceptions, they differentiate between five Open Science schools of thought:
==== Infrastructure School ====
The infrastructure school is founded on the assumption that "efficient" research depends on the availability of tools and applications. Therefore, the "goal" of the school is to promote the creation of openly available platforms, tools, and services for scientists. Hence, the infrastructure school is concerned with the technical infrastructure that promotes the development of emerging and developing research practices through the use of the internet, including the use of software and applications, in addition to conventional computing networks. In that sense, the infrastructure school regards open science as a technological challenge. The infrastructure school is tied closely to the notion of "cyberscience", which describes the trend of applying information and communication technologies to scientific research, a trend that has fostered the development of the infrastructure school. Specific elements of this development include increasing collaboration and interaction between scientists, as well as the growth of "open-source science" practices. The sociologists discuss two central trends in the infrastructure school:
1. Distributed computing: This trend encapsulates practices that outsource complex, process-heavy scientific computing to a network of volunteer computers around the world. The example that the sociologists cite in their paper is that of the Open Science Grid, which enables the development of large-scale projects that require high-volume data management and processing, which is accomplished through a distributed computer network. Moreover, the grid provides the necessary tools that the scientists can use to facilitate this process.
2. Social and Collaboration Networks of Scientists: This trend encapsulates the development of software that makes interaction with other researchers and scientific collaborations much easier than traditional, non-digital practices. Specifically, the trend is focused on implementing newer Web 2.0 tools to facilitate research related activities on the internet. De Roure and colleagues (2008) list a series of four key capabilities which they believe define a Social Virtual Research Environment (SVRE):
The SVRE should primarily aid the management and sharing of research objects. The authors define these to be a variety of digital commodities that are used repeatedly by researchers.
Second, the SVRE should have inbuilt incentives for researchers to make their research objects available on the online platform.
Third, the SVRE should be "open" as well as "extensible", implying that different types of digital artifacts composing the SVRE can be easily integrated.
Fourth, the authors propose that the SVRE is more than a simple storage tool for research information. Instead, the researchers propose that the platform should be "actionable". That is, the platform should be built in such a way that research objects can be used in the conduct of research as opposed to simply being stored.
==== Measurement school ====
The measurement school, in the view of the authors, deals with developing alternative methods to determine scientific impact. This school acknowledges that measurements of scientific impact are crucial to a researcher's reputation, funding opportunities, and career development. Hence, the authors argue that any discourse about Open Science is pivoted around developing a robust measure of scientific impact in the digital age. The authors then discuss other research indicating support for the measurement school. The three key currents of previous literature discussed by the authors are:
Peer review is described as being time-consuming.
The impact of an article, tied to the name of the authors of the article, is related more to the circulation of the journal rather than the overall quality of the article itself.
New publishing formats that are closely aligned with the philosophy of Open Science are rarely found in the format of a journal that allows for the assignment of the impact factor.
Hence, this school argues that there are faster impact measurement technologies that can account for a range of publication types as well as social media web coverage of a scientific contribution to arrive at a complete evaluation of how impactful the science contribution was. The gist of the argument for this school is that hidden uses like reading, bookmarking, sharing, discussing and rating are traceable activities, and these traces can and should be used to develop a newer measure of scientific impact. The umbrella term for this new type of impact measurement is altmetrics, coined in a 2011 article by Priem et al. Markedly, the authors discuss evidence that altmetrics differ from traditional webometrics, which are slow and unstructured. Altmetrics are proposed to rely upon a greater set of measures that account for tweets, blogs, discussions, and bookmarks. The authors claim that the existing literature has often proposed that altmetrics should also encapsulate the scientific process, and measure the process of research and collaboration to create an overall metric. However, the authors are explicit in their assessment that few papers offer methodological details as to how to accomplish this. The authors use this and the general dearth of evidence to conclude that research in the area of altmetrics is still in its infancy.
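In spirit, the altmetrics proposal amounts to aggregating heterogeneous, traceable activities into a single indicator. The sketch below shows one naive way such an aggregate could be computed; the weights and event counts are invented for illustration and do not correspond to any published altmetrics formula.

```python
# Naive, illustrative aggregate of usage traces; the weights are hypothetical.
WEIGHTS = {"tweets": 0.5, "blog_posts": 3.0, "bookmarks": 1.0, "discussions": 2.0}

def altmetric_score(traces):
    """Weighted sum of traceable activities recorded for one publication."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in traces.items())

print(altmetric_score({"tweets": 40, "blog_posts": 2, "bookmarks": 15}))  # 41.0
```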
==== Public School ====
According to the authors, the central concern of the school is to make science accessible to a wider audience. The inherent assumption of this school, as described by the authors, is that the newer communication technologies such as Web 2.0 allow scientists to open up the research process and also allow scientists to better prepare their "products of research" for interested non-experts. Hence, the school is characterized by two broad streams: one argues for access to the research process for the masses, whereas the other argues for increased access to the scientific product by the public.
Accessibility to the Research Process: Communication technology allows not only for the constant documentation of research but also promotes the inclusion of many different external individuals in the process itself. The authors cite citizen science – the participation of non-scientists and amateurs in research. The authors discuss instances in which gaming tools allow scientists to harness the brain power of a volunteer workforce to run through several permutations of protein-folding structures. This allows scientists to eliminate many more plausible protein structures while also "enriching" citizens' understanding of science. The authors also discuss a common criticism of this approach: the amateur nature of the participants threatens to undermine the scientific rigor of experimentation.
Comprehensibility of the Research Result: This stream of research concerns itself with making research understandable for a wider audience. The authors describe a host of scholars who promote the use of specific tools for scientific communication, such as microblogging services, to direct users to relevant literature. The authors claim that this school proposes that it is the obligation of every researcher to make their research accessible to the public. The authors then proceed to discuss whether there is an emerging market for brokers and mediators of knowledge that is otherwise too complicated for the public to grasp.
==== Democratic school ====
The democratic school concerns itself with the concept of access to knowledge. As opposed to focusing on the accessibility of research and its understandability, advocates of this school focus on the access of products of research to the public. The central concern of the school is with the legal and other obstacles that hinder the access of research publications and scientific data to the public. Proponents assert that any research product should be freely available, and that everyone has the same, equal right of access to knowledge, especially in the instances of state-funded experiments and data. Two central currents characterize this school: Open Access and Open Data.
Open Data: Opposition to the notion that publishing journals should claim copyright over experimental data, which prevents the re-use of data and therefore lowers the overall efficiency of science in general. The claim is that journals have no use of the experimental data and that allowing other researchers to use this data will be fruitful. Only a quarter of researchers agree to share their data with other researchers because of the effort required for compliance.
Open Access to Research Publication: According to this school, there is a gap between the creation and sharing of knowledge. Proponents argue that even though scientific knowledge doubles every 5 years, access to this knowledge remains limited. These proponents consider access to knowledge as a necessity for human development, especially in the economic sense.
==== Pragmatic School ====
The pragmatic school considers Open Science as the possibility of making knowledge creation and dissemination more efficient by increasing collaboration throughout the research process. Proponents argue that science could be optimized by modularizing the process and opening up the scientific value chain. 'Open' in this sense follows very much the concept of open innovation. Open innovation, for instance, transfers the outside-in (including external knowledge in the production process) and inside-out (spillovers from the formerly closed production process) principles to science. Web 2.0 is considered a set of helpful tools that can foster collaboration (sometimes also referred to as Science 2.0). Further, citizen science is seen as a form of collaboration that includes knowledge and information from non-scientists. Fecher and Friesike describe data sharing as an example of the pragmatic school as it enables researchers to use other researchers' data to pursue new research questions or to conduct data-driven replications.
== History ==
The widespread adoption of the institution of the scientific journal marks the beginning of the modern concept of open science. Before this time societies pressured scientists into secretive behaviors.
=== Before journals ===
Before the advent of scientific journals, scientists had little to gain and much to lose by publicizing scientific discoveries. Many scientists, including Galileo, Kepler, Isaac Newton, Christiaan Huygens, and Robert Hooke, made claim to their discoveries by describing them in papers coded in anagrams or cyphers and then distributing the coded text. Their intent was to develop their discovery into something from which they could profit, and then reveal the discovery to prove ownership when they were prepared to make a claim on it.
The system of not publicizing discoveries caused problems because discoveries were not shared quickly and because it sometimes was difficult for the discoverer to prove priority. Newton and Gottfried Leibniz both claimed priority in discovering calculus. Newton said that he wrote about calculus in the 1660s and 1670s, but did not publish until 1693. Leibniz published "Nova Methodus pro Maximis et Minimis", a treatise on calculus, in 1684. Debates over priority are inherent in systems where science is not published openly, and this was problematic for scientists who wanted to benefit from priority.
These cases are representative of a system of aristocratic patronage in which scientists received funding to develop either immediately useful things or to entertain. In this sense, funding of science gave prestige to the patron in the same way that funding of artists, writers, architects, and philosophers did. Because of this, scientists were under pressure to satisfy the desires of their patrons, and discouraged from being open with research which would bring prestige to persons other than their patrons.
=== Emergence of academies and journals ===
Eventually the individual patronage system ceased to provide the scientific output which society began to demand. Single patrons could not sufficiently fund scientists, who had unstable careers and needed consistent funding. The development which changed this was a trend to pool research by multiple scientists into an academy funded by multiple patrons. In 1660 England established the Royal Society and in 1666 the French established the French Academy of Sciences. Between the 1660s and 1793, governments gave official recognition to 70 other scientific organizations modeled after those two academies. In 1665, Henry Oldenburg became the editor of Philosophical Transactions of the Royal Society, the first academic journal devoted to science, and the foundation for the growth of scientific publishing. By 1699 there were 30 scientific journals; by 1790 there were 1052. Since then publishing has expanded at even greater rates.
=== Popular Science Writing ===
The first popular science periodical of its kind was published in 1872, under a name that still serves as a modern portal for science journalism: Popular Science. The magazine claims to have documented the invention of the telephone, the phonograph, the electric light and the onset of automobile technology. The magazine goes so far as to claim that the "history of Popular Science is a true reflection of humankind's progress over the past 129+ years". Discussions of popular science writing most often center their arguments on some type of "Science Boom". A recent historiographic account of popular science traces mentions of the term "science boom" to Daniel Greenberg's Science and Government Reports in 1979, which posited that "Scientific magazines are bursting out all over." Similarly, this account discusses the publication Time, and its cover story of Carl Sagan in 1980, as propagating the claim that popular science has "turned into enthusiasm". Crucially, this secondary account asks the important question as to what was considered as popular "science" to begin with. The paper claims that any account of how popular science writing bridged the gap between the informed masses and the expert scientists must first consider who was considered a scientist to begin with.
=== Collaboration among academies ===
In modern times many academies have pressured researchers at publicly funded universities and research institutions to engage in a mix of sharing research and making some technological developments proprietary. Some research products have the potential to generate commercial revenue, and in hope of capitalizing on these products, many research institutions withhold information and technology which otherwise would lead to overall scientific advancement if other research institutions had access to these resources. It is difficult to predict the potential payouts of technology or to assess the costs of withholding it, but there is general agreement that the benefit to any single institution of holding technology is not as great as the cost of withholding it from all other research institutions.
=== Coining of term "Open Science" ===
Steve Mann claimed to have coined the term "Open Science" in 1998. He also registered the domain names openscience.com and openscience.org in 1998, which he sold to degruyter.com in 2011. The term was previously used in a manner that refers to today's 'open science' norms by Daryl E. Chubin in his 1985 essay "Open Science and Closed Science: Tradeoffs in a Democracy". Chubin's essay cited Robert K. Merton's 1942 proposal of what we now refer to as Mertonian Norms for ideal science practices and scientific modes of communication. The term was used sporadically in the 1970s and 1980s in various scholarship to refer to different things.
=== Internet and the free access to scientific documents ===
The open science movement, as presented in activist and institutional discourses at the beginning of the 21st century, refers to different ways of opening up science, especially in the Internet age. Its first pillar is free access to scientific publications. The Budapest conference organised by the Open Society Foundations in 2001 was decisive in imposing this issue on the political landscape. The resulting declaration calls for the use of digital tools such as open archives and open access journals, free of charge for the reader.
The idea of open access to scientific publications quickly became inseparable from the question of free licenses to guarantee the right to disseminate and possibly modify shared documents, such as the Creative Commons licenses, created in 2002. In 2011, a new text from the Budapest Open Initiative explicitly refers to the relevance of the CC-BY license to guarantee free dissemination and not only free access to a scientific document.
The promise of openness offered by the Internet was then extended to research data, which underpin scientific studies in different disciplines, as mentioned already in the Berlin Declaration of 2003. In 2007, the Organisation for Economic Co-operation and Development (OECD) published a report on access to publicly funded research data, in which it defined such data as the data that validate research results.
Beyond its democratic virtues, open science aims to respond to the replication crisis of research results, notably through the generalization of the opening of data or source code used to produce them or through the dissemination of methodological articles.
The open science movement inspired several regulatory and legislative measures. Thus, in 2007, the University of Liège made the deposit of its researchers’ publications in its institutional open repository (Orbi) compulsory. The next year, the NIH Public Access Policy adopted a similar mandate for every paper funded by the National Institutes of Health. In France, the law for a digital Republic enacted in 2016 creates the right to deposit the validated manuscript of a scientific article in an open archive, with an embargo period following the date of publication in the journal. The law also creates the principle of reuse of public data by default.
== Politics ==
In many countries, governments fund some science research. Scientists often publish the results of their research by writing articles and donating them to be published in scholarly journals, which frequently are commercial. Public entities such as universities and libraries subscribe to these journals. Michael Eisen, a founder of the Public Library of Science, has described this system by saying that "taxpayers who already paid for the research would have to pay again to read the results."
In December 2011, some United States legislators introduced a bill called the Research Works Act, which would prohibit federal agencies from issuing grants with any provision requiring that articles reporting on taxpayer-funded research be published for free to the public online. Darrell Issa, a co-sponsor of the bill, explained the bill by saying that "Publicly funded research is and must continue to be absolutely available to the public. We must also protect the value added to publicly funded research by the private sector and ensure that there is still an active commercial and non-profit research community." One response to this bill was protests from various researchers; among them was a boycott of commercial publisher Elsevier called The Cost of Knowledge.
The Dutch Presidency of the Council of the European Union called for action in April 2016 to migrate European Commission-funded research to Open Science. European Commissioner Carlos Moedas introduced the Open Science Cloud at the Open Science Conference in Amsterdam on 4–5 April. At this meeting, The Amsterdam Call for Action on Open Science, a living document outlining concrete actions for the European Community to move to Open Science, was also presented. The European Commission continues to be committed to an Open Science policy, including developing a repository for research digital objects, the European Open Science Cloud (EOSC), and metrics for evaluating quality and impact.
In October 2021, the French Ministry of Higher Education, Research and Innovation released an official translation of its second plan for open science spanning the years 2021–2024.
=== Standard setting instruments ===
There is currently no global normative framework covering all aspects of Open Science. In November 2019, UNESCO was tasked by its 193 Member States, during their 40th General Conference, with leading a global dialogue on Open Science to identify globally-agreed norms and to create a standard-setting instrument. The multistakeholder, consultative, inclusive and participatory process to define a new global normative instrument on Open Science is expected to take two years and to lead to the adoption of a UNESCO Recommendation on Open Science by Member States in 2021.
Two UN frameworks set out some common global standards for application of Open Science and closely related concepts: the UNESCO Recommendation on Science and Scientific Researchers, approved by the General Conference at its 39th session in 2017, and the UNESCO Strategy on Open Access to scientific information and research, approved by the General Conference at its 36th session in 2011.
== Open Science and Research Assessment ==
A central aspect of the Open Science movement is the reform of research assessment. Initiatives such as the Coalition for Advancing Research Assessment (CoARA) and the San Francisco Declaration on Research Assessment (DORA) advocate moving away from traditional quantitative metrics like the Journal Impact Factor (JIF) and the h-Index, as these often exhibit biases and neglect qualitative aspects. Instead, alternative metrics and indicators, such as altmetrics and Open Science indicators, are to be given greater consideration. Open Science indicators include metrics such as the number of open access publications, data management plans, preprints, FAIR-licensed data, and open peer review reports. These approaches aim to promote the transparency and reusability of scientific outcomes, thereby enabling a fairer and more comprehensive evaluation of scientific achievements.
While Open Science aims to enhance transparency, accessibility, and collaboration, the introduction of numerous new metrics to measure openness has led to unintended consequences. These metrics often rely on quantitative indicators, which conflict with the holistic and qualitative approaches advocated by initiatives such as CoARA and DORA. The core issue is that these metrics are designed not only to measure but also to influence researchers' behavior. This can result in "metric-driven" practices that undermine research quality. Additionally, Open Science metrics lack standardization and clarity regarding what they truly aim to measure. The risk is that while these metrics may incentivize openness, they could simultaneously distort the overall fairness and effectiveness of research assessment.
== Advantages and disadvantages ==
Arguments in favor of open science generally focus on the value of increased transparency in research, and in the public ownership of science, particularly that which is publicly funded. In January 2014 J. Christopher Bare published a comprehensive "Guide to Open Science". Likewise, in 2017, a group of scholars known for advocating open science published a "manifesto" for open science in the journal Nature.
=== Advantages ===
Open access publication of research reports and data allows for rigorous peer-review
An article published by a team of NASA astrobiologists in 2010 in Science reported a bacterium known as GFAJ-1 that could purportedly metabolize arsenic (unlike any previously known species of lifeform). This finding, along with NASA's claim that the paper "will impact the search for evidence of extraterrestrial life", met with criticism within the scientific community. Much of the scientific commentary and critique around this issue took place in public forums, most notably on Twitter, where hundreds of scientists and non-scientists created a hashtag community around the hashtag #arseniclife. University of British Columbia astrobiologist Rosie Redfield, one of the most vocal critics of the NASA team's research, also submitted a draft of a research report of a study that she and colleagues conducted which contradicted the NASA team's findings; the draft report appeared in arXiv, an open-research repository, and Redfield called in her lab's research blog for peer review both of their research and of the NASA team's original paper. Researcher Jeff Rouder defined Open Science as "endeavoring to preserve the rights of others to reach independent conclusions about your data and work".
Publicly funded science will be publicly available
Public funding of research has long been cited as one of the primary reasons for providing Open Access to research articles. Since there is significant value in other parts of the research, such as code, data, protocols, and research proposals, a similar argument is made that these, being publicly funded as well, should be publicly available under a Creative Commons Licence.
Open science will make science more reproducible and transparent
Increasingly, the reproducibility of science is being questioned, and for many papers, and even whole fields of research, it has been shown to be lacking. This problem has been described as a "reproducibility crisis". For example, psychologist Stuart Vyse notes that "(r)ecent research aimed at previously published psychology studies has demonstrated – shockingly – that a large number of classic phenomena cannot be reproduced, and the popularity of p-hacking is thought to be one of the culprits." Open Science approaches are proposed as one way to help increase the reproducibility of work as well as to help mitigate the manipulation of data.
Open science has more impact
There are several components to impact in research, many of which are hotly debated. However, under traditional scientific metrics, parts of Open science such as Open Access and Open Data have proved to outperform traditional, closed versions.
Open Science can provide learning opportunities
Open science needs to acknowledge and accommodate the heterogeneity of science. It provides opportunities for different communities to learn from one another, as well as to inform learning and practice across fields. For example, preregistration from the quantitative sciences can help qualitative researchers reduce researcher degrees of freedom, whereas positionality statements, which are used in qualitative research to contextualize the researcher and the research environment, can help combat the reproducibility crisis in quantitative research. In addition, journals should be open to publishing these practices, using a guide to ease journal editors into open science.
Open science will help answer uniquely complex questions
Recent arguments in favor of Open Science have maintained that Open Science is a necessary tool to begin answering immensely complex questions, such as the neural basis of consciousness, ecosystem services or pandemics such as the COVID-19 pandemic. The typical argument is that these types of investigations are too complex to be carried out by any one individual and must therefore rely on a network of open scientists to be accomplished. By its nature, this also makes such "open science" a form of "big science". It is thought that open science could support innovation and societal benefits, supporting and reinforcing research activities by enabling digital resources that could, for example, use or provide structured open data.
=== Disadvantages ===
Arguments against open science tend to focus on the advantages of data ownership and concerns about the misuse of data.
Potential misuse
In 2011, Dutch researchers announced their intention to publish a research paper in the journal Science describing the creation of a strain of H5N1 influenza which can be easily passed between ferrets, the mammals which most closely mimic the human response to the flu. The announcement triggered a controversy in both political and scientific circles about the ethical implications of publishing scientific data which could be used to create biological weapons. These events are examples of how science data could potentially be misused. It has been argued that constraining the dissemination of dual-use knowledge can in certain cases be justified because, for example, "scientists have a responsibility for potentially harmful consequences of their research; the public need not always know of all scientific discoveries [or all its details]; uncertainty about the risks of harm may warrant precaution; and expected benefits do not always outweigh potential harm".
Scientists have collaboratively agreed to limit their own fields of inquiry on occasions such as the Asilomar conference on recombinant DNA in 1975 and a proposed 2015 worldwide moratorium on a human-genome-editing technique. Differential technological development aims to decrease risks by influencing the sequence in which technologies are developed. Relying only on the established form of legislation and incentives to ensure the right outcomes may not be adequate as these may often be too slow.
The public may misunderstand science data
In 2009 NASA launched the Kepler spacecraft and promised that they would release collected data in June 2010. Later they decided to postpone release so that their scientists could look at it first. Their rationale was that non-scientists might unintentionally misinterpret the data, and NASA scientists thought it would be preferable for them to be familiar with the data in advance so that they could report on it accurately.
Low-quality science
Post-publication peer review, a staple of open science, has been criticized as promoting the production of lower quality papers that are extremely voluminous. Specifically, critics assert that as quality is not guaranteed by preprint servers, the veracity of papers will be difficult to assess by individual readers. This will lead to rippling effects of false science, akin to the recent epidemic of false news, propagated with ease on social media websites. Common solutions to this problem have been cited as adaptations of a new format in which everything is allowed to be published but a subsequent filter-curator model is imposed to ensure some basic quality standards are met by all publications.
Entrapment by platform capitalism
For Philip Mirowski, open science runs the risk of continuing a trend of commodification of science which ultimately serves the interests of capital in the guise of platform capitalism.
WEIRD-focus
Open Science is primarily driven by Western, Educated, Industrialized, Rich and Democratic (WEIRD) societies, which makes it challenging for people from the Global South to implement or follow these changes. As a result, it perpetuates inequalities found across cultures. However, journal editors have taken note of guidelines for change in order to make sure Open Science is more inclusive, with a focus on multi-site studies and the value of diversity within the Open Science discussion.
== Actions and initiatives ==
=== Open-science projects ===
Different projects conduct, advocate, develop tools for, or fund open science.
The Allen Institute for Brain Science conducts numerous open science projects, while the Center for Open Science has projects to conduct, advocate, and create tools for open science. Other workgroups have been created in different fields, such as the Decision Analysis in R for Technologies in Health (DARTH) workgroup, which is a multi-institutional, multi-university collaborative effort by researchers who share the common goal of developing transparent and open-source solutions to decision analysis in health.
Organizations have extremely diverse sizes and structures. The Open Knowledge Foundation (OKF) is a global organization sharing large data catalogs, running face to face conferences, and supporting open source software projects. In contrast, Blue Obelisk is an informal group of chemists and associated cheminformatics projects. The tableau of organizations is dynamic with some organizations becoming defunct, e.g., Science Commons, and new organizations trying to grow, e.g., the Self-Journal of Science. Common organizing forces include the knowledge domain, type of service provided, and even geography, e.g., OCSDNet's concentration on the developing world.
The Allen Brain Atlas maps gene expression in human and mouse brains; the Encyclopedia of Life documents all terrestrial species; the Galaxy Zoo classifies galaxies; the International HapMap Project maps the haplotypes of the human genome; the Monarch Initiative makes available integrated public model organism and clinical data; and the Sloan Digital Sky Survey regularizes and publishes data sets from many sources. All these projects accrete information provided by many different researchers with different standards of curation and contribution.
Mathematician Timothy Gowers launched the open science journal Discrete Analysis in 2016 to demonstrate that a high-quality mathematics journal could be produced outside the traditional academic publishing industry. The launch followed a boycott of scientific journals that he initiated. The journal is published by a nonprofit organization owned and run by a team of scholars.
Other projects are organized around completion of projects that require extensive collaboration. For example, OpenWorm seeks to make a cellular level simulation of a roundworm, a multidisciplinary project. The Polymath Project seeks to solve difficult mathematical problems by enabling faster communications within the discipline of mathematics. The Collaborative Replications and Education project recruits undergraduate students as citizen scientists by offering funding. Each project defines its needs for contributors and collaboration.
Another practical example of an open science project was the first "open" doctoral thesis, started in 2012. It was made publicly available as a self-experiment right from the start to examine whether this kind of dissemination is even possible during the productive stage of scientific studies. The goal of the dissertation project was to publish everything related to the doctoral study and research process as soon as possible, as comprehensively as possible, and under an open license, available online at all times for everyone. At the end of 2017 the experiment was successfully completed, and it was published in early 2018 as an open access book.
An example promoting accessibility of open-source code for research papers is CatalyzeX, which finds and links both official implementations by authors and source code independently replicated by other researchers. These code implementations are also surfaced on the preprint server arXiv and open peer-review platform OpenReview.
The ideas of open science have also been applied to recruitment with jobRxiv, a free and international job board that aims to mitigate imbalances in what different labs can afford to spend on hiring.
=== Advocacy ===
Numerous documents, organizations, and social movements advocate wider adoption of open science. Statements of principles include the Budapest Open Access Initiative from a December 2001 conference and the Panton Principles. New statements are constantly developed, such as the Amsterdam Call for Action on Open Science to be presented to the Dutch Presidency of the Council of the European Union in late May 2016. These statements often try to regularize licenses and disclosure for data and scientific literature.
Other advocates concentrate on educating scientists about appropriate open science software tools. Education is available as training seminars, e.g., the Software Carpentry project; as domain specific training materials, e.g., the Data Carpentry project; and as materials for teaching graduate classes, e.g., the Open Science Training Initiative. Many organizations also provide education in the general principles of open science.
Within scholarly societies there are also sections and interest groups that promote open science practices. The Ecological Society of America has an Open Science Section. Similarly, the Society for American Archaeology has an Open Science Interest Group.
=== Journal support ===
Many individual journals are experimenting with the open access model: the Public Library of Science, or PLOS, is creating a library of open access journals and scientific literature. Other publishing experiments include delayed and hybrid models. There are experiments in different fields:
F1000Research provides open publishing and open peer review for the life sciences.
The Open Library of Humanities is a non-profit open access publisher for the humanities and social sciences.
The Journals Library of the National Institute for Health and Care Research (NIHR) publishes all relevant documents and data from the onset of research projects, updating them alongside the progress of the study.
Journal support for open science does not conflict with preprint servers:
figshare archives and shares images, readings, and other data; and Open Science Framework preprints, arXiv, and HAL Archives Ouvertes provide electronic preprints across many fields.
=== Software ===
A variety of computer resources support open science. These include software like the Open Science Framework from the Center for Open Science to manage project information, data archiving and team coordination; distributed computing services like Ibercivis to use unused CPU time for computationally intensive tasks; and services like Experiment.com to provide crowdsourced funding for research projects.
Blockchain platforms for open science have been proposed. The first such platform is the Open Science Organization, which aims to solve urgent problems with fragmentation of the scientific ecosystem and difficulties of producing validated, quality science. The initiatives of the Open Science Organization include the Interplanetary Idea System (IPIS), the Researcher Index (RR-index), the Unique Researcher Identity (URI), and the Research Network. The Interplanetary Idea System is a blockchain-based system that tracks the evolution of scientific ideas over time. It serves to quantify ideas based on uniqueness and importance, thus allowing the scientific community to identify pain points with current scientific topics and preventing unnecessary re-invention of previously conducted science. The Researcher Index aims to establish a data-driven statistical metric for quantifying researcher impact. The Unique Researcher Identity is a blockchain-based solution for creating a single unifying identity for each researcher, which is connected to the researcher's profile, research activities, and publications. The Research Network is a social networking platform for researchers. A scientific paper from November 2019 examined the suitability of blockchain technology to support open science.
=== Preprint servers ===
Preprint servers come in many varieties, but their standard traits are consistent: they seek to create a quick, free mode of communicating scientific knowledge to the public. Preprint servers act as a venue to quickly disseminate research and vary on their policies concerning when articles may be submitted relative to journal acceptance. Also typical of preprint servers is their lack of a peer-review process – typically, preprint servers have some type of quality check in place to ensure a minimum standard of publication, but this mechanism is not the same as a peer-review mechanism. Some preprint servers have explicitly partnered with the broader open science movement. Preprint servers can offer service similar to those of journals, and Google Scholar indexes many preprint servers and collects information about citations to preprints. The case for preprint servers is often made based on the slow pace of conventional publication formats. The motivation to start SocArXiv, an open-access preprint server for social science research, is the claim that valuable research being published in traditional venues often takes several months to years to get published, which slows down the process of science significantly. Another argument made in favor of preprint servers like SocArXiv is the quality and quickness of feedback offered to scientists on their pre-published work. The founders of SocArXiv claim that their platform allows researchers to gain easy feedback from their colleagues on the platform, thereby allowing scientists to develop their work into the highest possible quality before formal publication and circulation. The founders of SocArXiv further claim that their platform affords the authors the greatest level of flexibility in updating and editing their work to ensure that the latest version is available for rapid dissemination. The founders claim that this is not traditionally the case with formal journals, which instate formal procedures to make updates to published articles. Perhaps the strongest advantage of some preprint servers is their seamless compatibility with Open Science software such as the Open Science Framework. The founders of SocArXiv claim that their preprint server connects all aspects of the research life cycle in OSF with the article being published on the preprint server. According to the founders, this allows for greater transparency and minimal work on the authors' part.
One criticism of pre-print servers is their potential to foster a culture of plagiarism. For example, the popular physics preprint server ArXiv had to withdraw 22 papers when it came to light that they were plagiarized. In June 2002, a high-energy physicist in Japan was contacted by a man called Ramy Naboulsi, a non-institutionally affiliated mathematical physicist. Naboulsi asked the physicist to upload his papers on ArXiv, as he was not able to do so himself because of his lack of an institutional affiliation. Later, the papers were found to have been copied from the proceedings of a physics conference. Preprint servers are increasingly developing measures to circumvent this plagiarism problem. In developing nations like India and China, explicit measures are being taken to combat it. These measures usually involve creating some type of central repository for all available pre-prints, allowing the use of traditional plagiarism-detection algorithms to detect the fraud. Nonetheless, this is a pressing issue in the discussion of pre-print servers, and consequently for open science.
== See also ==
== References ==
== External links ==
TED talk video by Michael Nielsen on open science
Cracking Open the Scientific Process
Acta Crystallographica is a series of peer-reviewed scientific journals, with articles centred on crystallography, published by the International Union of Crystallography (IUCr). Originally established in 1948 as a single journal called Acta Crystallographica, there are now six independent Acta Crystallographica titles:
Acta Crystallographica Section A: Foundations and Advances
Acta Crystallographica Section B: Structural Science, Crystal Engineering and Materials
Acta Crystallographica Section C: Structural Chemistry
Acta Crystallographica Section D: Structural Biology
Acta Crystallographica Section E: Crystallographic Communications
Acta Crystallographica Section F: Structural Biology Communications
Acta Crystallographica has been noted for the high quality of the papers that it produces, as well as the large impact that its papers have had on the field of crystallography. The current six journals form part of the journal portfolio of the IUCr, which is completed by the Journal of Applied Crystallography, the Journal of Synchrotron Radiation, the open-access IUCrJ and the open-access data publication IUCr Data.
== History ==
Acta Crystallographica was established in conjunction with the foundation of the International Union of Crystallography in 1948. Both were established to maintain an international forum for crystallography after the Second World War had led to a loss of international subscription to, and the eventual nine-year closure of, the main pre-war crystallography journal, Zeitschrift für Kristallographie. The founding editor of Acta Crystallographica was P. P. Ewald, who wrote in the preface to the first issue: "Acta Crystallographica is intended to offer a central place for publication and discussion of all research in this vast and ever-expanding field. It borders, naturally, on pure physics, chemistry, biology, mineralogy, technology and also on mathematics, but is distinguished by being concerned with the methods and results of investigating the arrangement of atoms in matter, particularly when that arrangement has regular features." A steady increase in the number of submitted papers led to the journal being split into Section A, covering fundamental and theoretical studies, and Section B, dedicated to reports of structures, in 1968, together with a new journal, the Journal of Applied Crystallography. In 1983, Section C, devoted to the crystal structures of small molecules, was added, with Section B now focusing on biological, chemical, mineralogical and metallurgical crystallography. The rapid expansion in biological crystallography led to the launch of Section D in 1993. The journals launched online versions in 1999, and in 2000 the journals began to provide electronic article submission and subscription access online. This was followed by the launch of an online-only journal, Section E, for brief reports of new small-molecule structures, in 2001; this journal became fully open access in 2008. A second online-only journal, Section F, dedicated to short reports of macromolecular structures and reports on their crystallization, followed in 2005. The IUCr moved to online-only publication for all its journals from 2014.
== References ==
== External links ==
IUCr journals official site | Wikipedia/Acta_Crystallographica |
Against Method: Outline of an Anarchistic Theory of Knowledge is a 1975 book by Austrian philosopher of science Paul Feyerabend. The central thesis of the book is that science should become an anarchic enterprise. In the context of the work, the term "anarchy" refers to epistemological anarchy, which does not remain within one single prescriptive scientific method on the grounds that any such method would restrict scientific progress. The work is notable in the history and philosophy of science partially due to its detailed case study of Galileo's hypothesis that the earth rotates on its axis and has since become a staple reading in introduction to philosophy of science courses at undergraduate and graduate levels.
Against Method contains many verbatim excerpts from Feyerabend's earlier papers including "Explanation, Reduction, and Empiricism", "How to be a Good Empiricist: A Plea for Tolerance in Matters Epistemological", and "Problems of Empiricism, Part I." Because of this, Feyerabend claims that "[Against Method] is not a book, it is a collage." Later editions of Against Method included passages from Science in a Free Society.
== Publication, Translations, and Editions ==
Feyerabend began writing Against Method in 1968 and it was originally released as a long paper in the Minnesota Studies in the Philosophy of Science series in 1970. At the behest of Lakatos, who originally planned to write For Method in contrast to Against Method but then died, the paper was expanded into a book published in 1975. Lakatos originally encouraged Feyerabend to publish with Cambridge University Press because they would be less concerned with their reputation than smaller presses, but Feyerabend chose to publish with Verso Books (then called New Left Books). Feyerabend came to regret this decision because of their editorial choices. Three more editions were released, in 1988, 1993, and posthumously in 2010. Significant changes were made including removing or adding chapters and appendices with new, updated introductions.
Against Method was an international best seller and, as a result, it has been translated into many languages. This includes:
German translation by Hermann Vetter (revised and enlarged): Wider den Methodenzwang: Skizze einer anarchistischen Erkenntnistheorie, Suhrkamp: Frankfurt am Main 1976, 443 pp.
Dutch translation by Hein Kray: In strijd met de methode: Aanzet tot een anarchistische kennistheorie, Meppel: Boom 1977, 375 pp.
Portuguese translation by Octanny S. da Mota and Leonidas Hegenberg: Contra o método: Esboça de una teoria anárquica da teoria do conhecimento, Livraria Francisco Alves: Rio de Janeiro 1977, 487 pp.
Swedish translation by Thomas Brante: Ned med metodologin! Skiss till en anarkistisk kunskapsteori, Raben and Sjogren: Zenit 1977, 326 pp.
French translation by Baudouin Jurdant and Agnès Schlumberger: Contre la methode: Esquisse d'une théorie anarchiste de la connaissance, Seuil: Paris 1979, 350 pp.
Italian translation by Libero Sosio: Contro il metodo: Abbozzo di una teoria anarchica della conoscenza, Feltrinelli: Milan 1979, viii+262 pp.
Spanish translation by Diego Ribes: Tratado contra el método, Tecnos: Madrid 1981, xvii+319 pp.
Japanese translation: Hoho eno chosen: Kagakuteki sozo to chi no anakizumu, Shin'yosha: Tokyo 1981, 13+438 pp.
Turkish translation by Ahmet İnam: Yönteme Hayır: Bir Anarşist Bilgi Kuramının Ana Hatları, Ara: Istanbul 1989, 325 pp.
Chinese translation by Changzhong Zhou: Shanghai Translation Publishing House: Shanghai 1994, 269 pp.
The 4th edition, released after Feyerabend's death on the 35th anniversary of the initial book release, includes an introduction from Ian Hacking.
== Content ==
=== Epistemological anarchism ===
The primary thesis of Against Method is that there is no such thing as the scientific method and that it is not appropriate to impose a single methodological rule upon scientific practices. Rather, 'anything goes', meaning that scientists should be free to pursue whatever research seems interesting to them. The primary target of Against Method is 'rationalism', or the view that there are rational rules that should guide scientific practices. The German title of Against Method, Wider den Methodenzwang, translates more directly to "Against the Forced Constraint of Method", emphasizing that it is the imposition of methodological rules that is rejected, rather than the use of methods altogether. Feyerabend offers two parallel arguments for this position, one conceptual and one historical. The conceptual argument aims to establish that it is always legitimate to violate established forms of scientific practice in the hope of establishing a new form of scientific rationality. The historical argument provides examples of scientists profitably violating rules. Against Method contains dozens of case studies, though the majority of them are relegated to footnotes or passing remarks. The primary case study in Against Method is Galileo's hypothesis that the earth rotates on its axis.
Scholars have disputed the precise meaning of epistemological anarchism. John Preston claims that 'anything goes' signals Feyerabend's abandonment of normative philosophy. In other words, while Feyerabend defended pluralism in his works in the 1950s and 60s, Against Method represents a development in Feyerabend's thought where he abandons pluralism as well as normative theorizing altogether. A more common interpretation is that 'anything goes' does not represent a positive conviction of Feyerabend's but is the conclusion of a reductio ad absurdum. 'Anything goes' is therefore not a methodological prescription but "the terrified exclamation of the rationalist who takes a closer look at history". More recently, it has been argued that epistemological anarchism is a positive methodological proposal but comes in two inconsistent guises. On the one hand, epistemological anarchism means that scientists should be opportunists who adapt their methods to the situation at hand while, on the other hand, anarchism also signifies an unrestricted pluralism and therefore constitutes a radical generalization of his earlier arguments for pluralism.
=== Counterinduction ===
Feyerabend contends that for every methodological rule, there is a 'counter rule' – namely, a methodological rule that recommends the opposite of its counter – which also has value. As an example of this general hypothesis, Feyerabend defends 'counterinduction' as the counter rule to inductivism and "induction by falsification" as a valuable methodological rule. Counterinduction involves developing theories that are inconsistent with currently accepted empirical evidence, which is the opposite of the (then) commonly accepted rule that theories should be developed that are consistent with known facts. Feyerabend argues for counterinduction by showing that theories that conflict with known facts are useful for revealing 'natural interpretations' which must be made explicit so that they can be examined. Natural interpretations are interpretations of experience, expressed in language, that follow automatically and unconsciously from describing observations. After a theory has been accepted for a long period of time, it becomes habit to describe events or processes using certain concepts. Because, Feyerabend argues, observation underdetermines the ways we describe what we observe, theories that redescribe experience in new ways force us to make comparisons between old natural interpretations and new ones. This is the first step to evaluating the plausibility of either and so counterinduction aids in providing a thorough critical assessment of our acceptance of particular theories.
=== Galileo case study ===
The primary case study in Against Method is Galileo's hypothesis that the earth rotates on its axis. According to Feyerabend's reconstruction, Galileo did not justify this hypothesis by reference to known facts, nor did he offer an unfalsified conjecture that had more empirical content than its predecessor. Rather, Galileo's hypothesis would rationally have been considered to be false by the existing evidence at the time, and it is lower in empirical content than Aristotelian theory of motion. Moreover, Galileo did not provide arguments to justify his contention but instead used propaganda.
According to the existing evidence in the early 17th century, the position that the earth rotates on its axis would rightly have been regarded as false. For example, the theory of the tides that Galileo derived from the motion of the earth was inaccurate, and the differences "were big enough to be known even to the most bleary-eyed sailor." In addition, the motion of the earth on its axis leads to wrong predictions of the relative brightness of Mars and Venus when measured with the naked eye. To correct for these mistakes, Galileo introduces new evidence obtained through his telescope. However, the telescope was not theoretically understood at the time. The best theory of optics available was Kepler's, which Galileo himself did not understand and which says nothing about how light behaves in convex lenses. Moreover, there were well-confirmed reasons to think – as the Aristotelians did – that light behaves differently outside of the sublunar sphere, so there was no justification for treating telescopic vision as veridical. In addition, when Galileo tested the telescope with many observational astronomers in Padua, it produced indeterminate and double images and after-images even on terrestrial objects, as well as optical illusions about the placement and magnification of celestial bodies. Because of this, Galileo had no new evidence to support his conjecture that the earth completes a diurnal rotation on its axis, and the existing evidence suggested that it was false.
Galileo's hypothesis also does not follow Popper's falsificationism, which suggests that we do not use ad hoc hypotheses. Aristotle's theory of motion was a part of a broader theory of change, which included growth, decay and qualitative changes (such as changes in color). Galileo's theory of motion focuses solely on locomotion and, therefore, has less empirical content than Aristotle's theory. This also makes it more ad hoc, because it makes no new predictions and offers only a promissory note that locomotion will eventually explain everything Aristotle's theory was able to explain.
Feyerabend does not just argue that Galileo and his followers acted "irrationally" from the perspective of inductivism and falsificationism, but that it was reasonable that they did so. This is because Galileo's conjecture was able to reveal the natural interpretations that followed from the Aristotelian worldview. Natural interpretations, defined by Feyerabend, are interpretations of phenomena which happen naturally and automatically in our perception and the ways we attach language to what we observe. After accepting a theory for a long period of time, natural interpretations become implicit and forgotten and, therefore, difficult to test. By contrasting natural interpretations with other interpretations, they are made explicit and can be tested. Therefore, to fully scrutinize the Aristotelian worldview, Feyerabend suggests that Galileo was right to conjecture a new theory that revealed its natural interpretations.
The main example of the influence of natural interpretations that Feyerabend provided was the tower argument, presented as an objection to the theory of a moving earth. Aristotelians took the fact that a stone, or any solid body made of earth, dropped from a tower lands directly beneath it to show that the earth is stationary. They thought that, if the earth moved while the stone was falling, the stone would have been "left behind": objects would fall in a parabola instead of vertically. Since this does not happen, Aristotelians concluded that the earth did not move. Galileo's hypothesis reveals that this argument assumes that all motion is "operative" (i.e., noticeable in perception). Galileo denies this assumption and argues that the stone falls in a parabola relative to absolute space, although the notion of absolute space was not made explicit and coherent until Newton.
However, Galileo did not present his work in this vein. Had he done so, Feyerabend conjectures, his new theory would have received little attention and would not have stimulated further inquiry into the Copernican system. Because of this, Galileo uses propaganda to make it seem as if his theories are implicit in the Aristotelian worldview. Specifically, Galileo makes it seem as if his conception of relative motion is embedded in Aristotelian common sense when it is not (Aristotelian relative motion involves many moving bodies with dynamic effects noticeable in perception). According to Feyerabend, Galileo uses the technique of anamnēsis, inviting readers to "remember" that they already believed in relative motion in Galileo's sense. Using this method, he disguises how radical a break his new theory is from the common sense of the time.
=== Discovery/justification distinction ===
Herbert Feigl criticizes Feyerabend's earlier work, including the paper edition of Against Method, for conflating the distinction between the context of discovery and the context of justification. According to this distinction, formulated by Hans Reichenbach and Karl Popper, there is no logic of how scientists develop scientific theories, but there should be a logic of confirming or disconfirming them. Once this distinction is accepted, Feyerabend's claim that 'anything goes' would be a truism and would not run against logical empiricism. Feyerabend's response in Against Method is to reject the validity of the discovery/justification distinction. He argues that while the distinction can be maintained abstractly, it does not find a "point of attack" in scientific practice. This is because the two contexts are not separated into different phases of scientific research but are always commingled. Discounting evidence, for example, is often necessary for scientific discovery but is rejected when seeking justification. Justifying scientific theories has implications for what research is conducted and, therefore, questions about what is justified also affect the paths open to discovery.
=== Criticism of Lakatos ===
The first edition of Against Method contains a chapter devoted to critically discussing Lakatos' methodology of research programs, although this chapter was removed in subsequent editions. Feyerabend offers several criticisms. Lakatos claims that research programs should be permitted 'breathing space', during which they may be pursued regardless of their lack of empirical content, internal inconsistency, or conflicts with experimental results. Feyerabend agrees with this claim but argues that applying it consistently entails that we cannot cease the pursuit of research programs after they have begun degenerating (i.e., becoming increasingly ad hoc), regardless of how long they have been degenerating. To illustrate this point, Feyerabend uses the example of Boltzmann's atomism, a theory that was degenerating in the 19th century as a result of the Zermelo-Poincaré recurrence objection and Loschmidt's reversibility objection but was then vindicated in the early 20th century by Einstein's development of statistical mechanics. Because of this, Feyerabend claims that although Lakatos insists that he has provided rational rules for the elimination of research programs, these rules are empty because they do not forbid any kind of behavior. Therefore, Lakatos is an 'anarchist in disguise', since his methodology provides rules that do not need to be followed.
Feyerabend provides a second criticism that ends with the same conclusion. According to Lakatos, his theory of scientific rationality only contains heuristics for its implementation rather than direct advice. Because of this, Lakatos' theory on its own provides no advice, and the specific advice follows from considerations of concrete research practices. His third criticism concerns Lakatos' argument that theories of rationality should be tested against the value judgments of the 'scientific elite' in specific historical episodes. First, Feyerabend claims that the value judgments of the scientific elite are rarely uniform, and so they will not uniquely choose a particular theory of scientific rationality. Second, the value judgments of scientific elites are often made on the basis of ignorance. Therefore, there seem to be strong reasons not to accept those value judgments. Third, Lakatos assumes that the standards of the scientific elite are superior to other value judgments (e.g., of witches) and therefore does not provide an argument against relativism. Finally, Feyerabend provides a 'cosmological' criticism of Lakatos' theory of rationality. Lakatos claims that theories of scientific rationality reconstruct the 'internal' growth of knowledge and ignore the 'external' (e.g., sociological, psychological, political) features of scientific practice. However, without knowledge of the external features of scientific practice, Feyerabend claims that we cannot know whether a theory of scientific rationality will actually succeed in practice.
=== Scientific education ===
Feyerabend provides numerous criticisms of scientific education in his time. He claims that the primary role of education was to stunt individual creativity by forcing students to accept and do research on topics they did not choose for themselves. He also claims that education is responsible for what he calls "intellectual pollution" where "illiterate and incompetent books flood the market, empty verbiage full of strange and esoteric terms claims to express profound insights, 'experts' without brains, without character, and without even a modicum of intellectual, stylistic, emotional temperament tell us about our 'condition' and the means of improving it." He distinguishes between a general education, which is focused on the development of free individuals, and professionalization, in which one learns the ideology of a specific trade. In a general education, pupils are introduced to many intellectual and cultural traditions which they then engage with critically to make free choices about how they want to live their lives. Professionalization, by contrast, introduces pupils to a single tradition and often involves teaching this tradition as epistemically superior to its rivals. Feyerabend claims that the increasing push for professionalization was coming at the expense of a general education. He criticizes this on ethical grounds, as it reduces students to intellectual slaves, and on the grounds that a general education is more conducive to the development of knowledge.
== Scholarly reception ==
The immediate reaction to Against Method was largely negative amongst philosophers of science, with a few notable exceptions. Most of the commentary focused on Feyerabend's philosophical arguments rather than the Galileo case study. The primary criticisms were that epistemological anarchism is nothing but a repetition of Pyrrhonian skepticism or relativism, that Feyerabend is inconsistent with himself by arguing against method while arguing for methods (like counterinduction), and that he criticizes a strawman. One positive review came from Arne Naess, who had sympathies for epistemological anarchism.
Despite this, Against Method has remained one of the classic texts of 20th century philosophy of science and has been influential on subsequent philosophers of science (especially the Stanford School).
== Aftermath ==
Feyerabend responded to these criticisms in several follow-up publications, many of which he collected in Science in a Free Society. He was extremely frustrated by the quality of the reviews of Against Method, leading him to accuse them of illiteracy and a lack of competence. In his autobiography, he writes that he sometimes wishes that "he had never written that fucking book." This response led to Feyerabend's gradual removal from the academic community which also corresponded to changes of research topics in his work in the 1980s.
== References ==
== Further reading ==
The first (1970) edition is available for download in PDF form from the Minnesota Center for Philosophy of Science (part of the University of Minnesota), under open access in Minnesota Studies in the Philosophy of Science, vol. 4: Analyses of Theories & Methods of Physics and Psychology (1970), edited by M. Radner and S. Winokur. The downloadable archive '4_Theories&Methods.zip' contains the three Feyerabend sections (4_2_1_Feyerabend.pdf, 4_2_2_Feyerabend.pdf, 4_2_3_Feyerabend.pdf); the immediately following article, "A Picture Theory of Theory Meaning" (sic) (4_3_Hanson.pdf), is also needed in order to get the complete set of footnotes.
Discussion of the book in John Preston, "Paul Feyerabend", The Stanford Encyclopedia of Philosophy (Winter 2009 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2009/entries/feyerabend/>
Paul Tibbetts, Tomas Kulka, J N Hattiangadi, "Feyerabend's 'Against Method': The Case for Methodological Pluralism", Philosophy of the Social Sciences 7:3 (1977), 265–275. DOI 10.1177/004839317700700306 | Wikipedia/Against_Method |
Physics First is an educational program in the United States that teaches a basic physics course in the ninth grade (usually to 14-year-olds), rather than the biology course that is more standard in public schools. This course relies on the limited math skills that students have from pre-algebra and algebra I. With these skills, students study a broad subset of the introductory physics canon, with an emphasis on topics that can be experienced kinesthetically or without deep mathematical reasoning. Proponents also argue that teaching physics first is better suited to English language learners, who would otherwise be overwhelmed by the substantial vocabulary requirements of biology.
Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education).
Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live.
The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After this, students are then encouraged to take an 11th or 12th grade course in physics, which does use more advanced math, including vectors, geometry, and more involved algebra.
There is a large overlap between the Physics First movement, and the movement towards teaching conceptual physics - teaching physics in a way that emphasizes a strong understanding of physical principles over problem-solving ability.
== Criticism ==
American public schools traditionally teach biology in the first year of high school, chemistry in the second, and physics in the third. The belief is that this order is more accessible, largely because biology can be taught with less mathematics, and will do the most toward providing some scientific literacy for the largest number of students.
In addition, many scientists and educators argue that freshmen do not have an adequate background in mathematics to fully comprehend a complete physics curriculum, and that the quality of a physics education is therefore lost. While physics requires knowledge of vectors and some basic trigonometry, many students in the Physics First program take the course in conjunction with geometry. These critics suggest that students instead first take biology and chemistry, which are less mathematics-intensive, so that by their junior year they will be advanced enough in mathematics, with either an algebra 2 or pre-calculus education, to fully grasp the concepts presented in physics. Some take this argument further, saying that at least calculus should be a prerequisite for physics.
Others point out that, for example, secondary school students will never study the advanced physics that underlies chemistry in the first place. "[I]nclined planes (frictionless or not) didn't come up in ... high school chemistry class ... and the same can be said for some of the chemistry that really makes sense of biological phenomena." For physics to be relevant to a chemistry course, students have to develop a truly fundamental understanding of the concepts of energy, force, and matter, beyond the context of specific applications like the inclined plane.
== Footnotes ==
== External links ==
American Association of Physics Teachers Listservs, incl. Physics First
A Closer Look at Cross-Disciplinary Educational Sequences
American Association of Physics Teachers on Physics First
Project ARISE (American Renaissance in Science Education)
AAPT Physics First Informational Guide (pdf file) | Wikipedia/Physics_first |
Emission theory or extramission theory (variants: extromission) or extromissionism is the proposal that visual perception is accomplished by eye beams emitted by the eyes. This theory has been replaced by intromission theory (or intromissionism), which holds that visual perception comes from something representative of the object (later established to be rays of light reflected from it) entering the eyes. Modern physics has confirmed that light is physically transmitted by photons from a light source, such as the sun, to visible objects, and finally to a detector, such as a human eye or camera.
== History ==
In the fifth century BC, Empedocles postulated that everything was composed of four elements: fire, air, earth, and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye, making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated that there were two different types of emanations that interacted in some way: one that emanated from an object to the eye, and another that emanated from the eye to an object. He compared these outward-flowing emanations to the emission of light from a lantern.
Around 400 BC, emission theory was held by Plato.
Around 300 BC, Euclid wrote Optics and Catoptrics, in which he studied the properties of sight. Euclid postulated that the visual ray emitted from the eye travelled in straight lines, described the laws of reflection, and mathematically studied the appearance of objects by direct vision and by reflection.
Ptolemy (c. 2nd century) wrote Optics, a work marking the culmination of the ancient Greek optics, in which he developed theories of direct vision (optics proper), vision by reflection (catoptics), and, notably, vision by refraction (dioptrics).
Galen, also in the 2nd century, likewise endorsed the extramission theory (De Usu Partium Corporis Humani). His theory contained anatomical and physiological details which could not be found in the works of mathematicians and philosophers. Due to this feature and his medical authority, his view held considerable influence in the pre-modern Middle East and Europe, especially among medical doctors in these regions.
== Evidence for the theory ==
Adherents of emission theory cited at least two lines of evidence for it.
The light from the eyes of some animals (such as cats, which modern science has determined have highly reflective eyes) could also be seen in "darkness". Adherents of intromission theory countered by saying that if emission theory were true, then someone with weak eyes should have their vision improved when someone with good eyes looks at the same objects.
Some argued that Euclid's version of emission theory was purely metaphorical, highlighting mainly the geometrical relations between eyes and objects. The geometry of classical optics is equivalent no matter which direction light is considered to move, because light is modeled by its path, not as a moving object. However, his theory of clarity of vision (the circular appearance of distant rectangular objects) makes sense only if the rays are emitted from the eyes. Alternatively, Euclid's account can be interpreted as a mathematical model whose only constraint was to save the phenomena, without the need for a strict correspondence between each theoretical entity and a physical counterpart.
Measuring the speed of light was one line of evidence that spelled the end of emission theory as anything other than a metaphor.
== Refutation ==
Alhazen was the first person to explain that vision occurs when light reflects from an object into one's eyes.
The rise of rationalist physics in the 17th century led to a novel version of the intromissionist theory that proved extremely influential and displaced any legacies of the old emissive theories. In Cartesian physics, light was the sensation of pressure emitted by surrounding objects that sought to move, as transmitted through the rotatory motion of material corpuscles. These views extended to Isaac Newton's corpuscular theory of light, and would be adopted by John Locke and other 18th-century luminaries.
== Persistence of the theory ==
Winer et al. (2002) have found evidence that as many as 50% of adults believe in emission theory.
Rupert Sheldrake claims to have found evidence for emission theory through his experiments in the sense of being stared at.
== Relationship with echolocation ==
Sometimes, the emission theory is explained by analogy with echolocation and sonar. For example, in explaining Ptolemy's theory, a psychologist stated:
"Ptolemy’s ‘extramission’ theory of vision proposed scaling the angular size of objects using light rays that were emitted by the eyes and reflected back by objects. In practice some animals (bats, dolphins, whales, and even some birds and rodents) have evolved what is effectively an ‘extramission’ theory of audition to address this very concern. "
Note this account of the Ptolemaic theory ('bouncing back of visual ray') differs from ones found in other sources.
== References == | Wikipedia/Emission_theory_(vision) |
In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. The subset is meant to reflect the whole population, and statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population (in many cases, collecting the whole population is impossible, like getting sizes of all stars in the universe), and thus, it can provide insights in cases where it is infeasible to measure an entire population.
Each observation measures one or more properties (such as weight, location, colour or mass) of independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly in stratified sampling. Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population. Acceptance sampling is used to determine if a production lot of material meets the governing specifications.
== History ==
Random sampling by using lots is an old idea, mentioned several times in the Bible. In 1786, Pierre Simon Laplace estimated the population of France by using a sample, along with a ratio estimator. He also computed probabilistic estimates of the error. These were not expressed as modern confidence intervals but as the sample size that would be needed to achieve a particular upper bound on the sampling error with probability 1000/1001. His estimates used Bayes' theorem with a uniform prior probability and assumed that his sample was random. Alexander Ivanovich Chuprov introduced sample surveys to Imperial Russia in the 1870s.
In the US, the 1936 Literary Digest prediction of a Republican win in the presidential election went badly awry, due to severe bias [1]. More than two million people responded to the study with their names obtained through magazine subscription lists and telephone directories. It was not appreciated that these lists were heavily biased towards Republicans and the resulting sample, though very large, was deeply flawed.
Elections in Singapore have adopted this practice, known as sample counts, since the 2015 election. According to the Elections Department (ELD), the country's election commission, sample counts help reduce speculation and misinformation, while helping election officials to check against the official result for that electoral division. While the reported sample counts yield a fairly accurate indicative result, with a 4% margin of error at a 95% confidence level, the ELD reminded the public that sample counts are separate from official results, and only the returning officer will declare the official results once vote counting is complete.
== Population definition ==
Successful statistical practice is based on focused problem definition. In sampling, this includes defining the "population" from which our sample is drawn. A population can be defined as including all people or items with the characteristics one wishes to understand. Because there is very rarely enough time or money to gather information from everyone or everything in a population, the goal becomes finding a representative sample (or subset) of that population.
Sometimes what defines a population is obvious. For example, a manufacturer needs to decide whether a batch of material from production is of high enough quality to be released to the customer or should be scrapped or reworked due to poor quality. In this case, the batch is the population.
Although the population of interest often consists of physical objects, sometimes it is necessary to sample over time, space, or some combination of these dimensions. For instance, an investigation of supermarket staffing could examine checkout line length at various times, or a study on endangered penguins might aim to understand their usage of various hunting grounds over time. For the time dimension, the focus may be on periods or discrete occasions.
In other cases, the examined 'population' may be even less tangible. For example, Joseph Jagger studied the behaviour of roulette wheels at a casino in Monte Carlo, and used this to identify a biased wheel. In this case, the 'population' Jagger wanted to investigate was the overall behaviour of the wheel (i.e. the probability distribution of its results over infinitely many trials), while his 'sample' was formed from observed results from that wheel. Similar considerations arise when taking repeated measurements of properties of materials such as the electrical conductivity of copper.
This situation often arises when seeking knowledge about the cause system of which the observed population is an outcome. In such cases, sampling theory may treat the observed population as a sample from a larger 'superpopulation'. For example, a researcher might study the success rate of a new 'quit smoking' program on a test group of 100 patients, in order to predict the effects of the program if it were made available nationwide. Here the superpopulation is "everybody in the country, given access to this treatment" – a group that does not yet exist since the program is not yet available to all.
The population from which the sample is drawn may not be the same as the population from which information is desired. Often there is a large but not complete overlap between these two groups due to frame issues etc. (see below). Sometimes they may be entirely separate – for instance, one might study rats in order to get a better understanding of human health, or one might study records from people born in 2008 in order to make predictions about people born in 2009.
Time spent in making the sampled population and population of concern precise is often well spent because it raises many issues, ambiguities, and questions that would otherwise have been overlooked at this stage.
== Sampling frame ==
In the most straightforward case, such as the sampling of a batch of material from production (acceptance sampling by lots), it would be most desirable to identify and measure every single item in the population and to include any one of them in our sample. However, in the more general case this is not usually possible or practical. There is no way to identify all rats in the set of all rats. Where voting is not compulsory, there is no way to identify which people will vote at a forthcoming election (in advance of the election). These imprecise populations are not amenable to sampling in any of the ways below and to which we could apply statistical theory.
As a remedy, we seek a sampling frame which has the property that we can identify every single element and include any in our sample. The most straightforward type of frame is a list of elements of the population (preferably the entire population) with appropriate contact information. For example, in an opinion poll, possible sampling frames include an electoral register and a telephone directory.
A probability sample is a sample in which every unit in the population has a chance (greater than zero) of being selected in the sample, and this probability can be accurately determined. The combination of these traits makes it possible to produce unbiased estimates of population totals, by weighting sampled units according to their probability of selection.
Example: We want to estimate the total income of adults living in a given street. We visit each household in that street, identify all adults living there, and randomly select one adult from each household. (For example, we can allocate each person a random number, generated from a uniform distribution between 0 and 1, and select the person with the highest number in each household). We then interview the selected person and find their income.
People living on their own are certain to be selected, so we simply add their income to our estimate of the total. But a person living in a household of two adults has only a one-in-two chance of selection. To reflect this, when we come to such a household, we would count the selected person's income twice towards the total. (The person who is selected from that household can be loosely viewed as also representing the person who isn't selected.)
In the above example, not everybody has the same probability of selection; what makes it a probability sample is the fact that each person's probability is known. When every element in the population does have the same probability of selection, this is known as an 'equal probability of selection' (EPS) design. Such designs are also referred to as 'self-weighting' because all sampled units are given the same weight.
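A minimal sketch of this inverse-probability weighting (the household sizes and incomes below are hypothetical and purely illustrative; each selected person's income is weighted by the reciprocal of their known selection probability):

```python
import random

# Hypothetical households on one street: each inner list holds the incomes
# of the adults living there (all values are illustrative only).
households = [
    [30_000],                      # one adult: selected with probability 1
    [42_000, 58_000],              # two adults: each selected with probability 1/2
    [25_000, 31_000, 47_000],      # three adults: each selected with probability 1/3
]

estimate = 0.0
for adults in households:
    selected_income = random.choice(adults)   # randomly select one adult per household
    selection_prob = 1 / len(adults)          # that adult's known probability of selection
    estimate += selected_income / selection_prob   # weight by the inverse of the probability

print(f"Estimated total income for the street: {estimate:.0f}")
```

Because each income is divided by its selection probability, the expected value of this estimate equals the true street total, which is the sense in which the weighting removes the bias from unequal selection probabilities.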
Probability sampling includes: simple random sampling, systematic sampling, stratified sampling, probability-proportional-to-size sampling, and cluster or multistage sampling. These various ways of probability sampling have two things in common:
Every element has a known nonzero probability of being sampled, and
The procedure involves random selection at some point.
=== Nonprobability sampling ===
Nonprobability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage'/'undercovered'), or where the probability of selection cannot be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which forms the criteria for selection. Hence, because the selection of elements is nonrandom, nonprobability sampling does not allow the estimation of sampling errors. These conditions give rise to exclusion bias, placing limits on how much information a sample can provide about the population. Information about the relationship between sample and population is limited, making it difficult to extrapolate from the sample to the population.
Example: We visit every household in a given street, and interview the first person to answer the door. In any household with more than one occupant, this is a nonprobability sample, because some people are more likely to answer the door (e.g. an unemployed person who spends most of their time at home is more likely to answer than an employed housemate who might be at work when the interviewer calls) and it's not practical to calculate these probabilities.
Nonprobability sampling methods include convenience sampling, quota sampling, and purposive sampling. In addition, nonresponse effects may turn any probability design into a nonprobability design if the characteristics of nonresponse are not well understood, since nonresponse effectively modifies each element's probability of being sampled.
== Sampling methods ==
Within any of the types of frames identified above, a variety of sampling methods can be employed individually or in combination. Factors commonly influencing the choice between these designs include:
Nature and quality of the frame
Availability of auxiliary information about units on the frame
Accuracy requirements, and the need to measure accuracy
Whether detailed analysis of the sample is expected
Cost/operational concerns
=== Simple random sampling ===
In a simple random sample (SRS) of a given size, all subsets of a sampling frame have an equal probability of being selected. Each element of the frame thus has an equal probability of selection: the frame is not subdivided or partitioned. Furthermore, any given pair of elements has the same chance of selection as any other such pair (and similarly for triples, and so on). This minimizes bias and simplifies analysis of results. In particular, the variance between individual results within the sample is a good indicator of variance in the overall population, which makes it relatively easy to estimate the accuracy of results.
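A minimal sketch of drawing a simple random sample without replacement from a frame (the frame here is hypothetical; random.sample gives every subset of the requested size the same chance of selection):

```python
import random

frame = [f"unit_{i}" for i in range(1, 1001)]   # hypothetical sampling frame of 1,000 units
sample = random.sample(frame, k=50)             # every 50-element subset is equally likely
print(sample[:5])                               # first few sampled units
```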
Simple random sampling can be vulnerable to sampling error because the randomness of the selection may result in a sample that does not reflect the makeup of the population. For instance, a simple random sample of ten people from a given country will on average produce five men and five women, but any given trial is likely to overrepresent one sex and underrepresent the other. Systematic and stratified techniques attempt to overcome this problem by "using information about the population" to choose a more "representative" sample.
Also, simple random sampling can be cumbersome and tedious when sampling from a large target population. In some cases, investigators are interested in research questions specific to subgroups of the population. For example, researchers might be interested in examining whether cognitive ability as a predictor of job performance is equally applicable across racial groups. Simple random sampling cannot accommodate the needs of researchers in this situation, because it does not provide subsamples of the population, and other sampling strategies, such as stratified sampling, can be used instead.
=== Systematic sampling ===
Systematic sampling (also known as interval sampling) relies on arranging the study population according to some ordering scheme, and then selecting elements at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from then onwards. In this case, k=(population size/sample size). It is important that the starting point is not automatically the first in the list, but is instead randomly chosen from within the first to the kth element in the list. A simple example would be to select every 10th name from the telephone directory (an 'every 10th' sample, also referred to as 'sampling with a skip of 10').
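A minimal sketch of this procedure, assuming the population size is an exact multiple of the sample size (hypothetical frame; the start is chosen at random from the first k positions):

```python
import random

def systematic_sample(frame, sample_size):
    k = len(frame) // sample_size        # sampling interval
    start = random.randint(0, k - 1)     # random start within the first k elements
    return frame[start::k]               # every kth element from the random start

frame = [f"name_{i}" for i in range(1, 1001)]    # e.g. an ordered telephone directory
print(len(systematic_sample(frame, 100)))        # a 1-in-10 systematic sample of 100 names
```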
As long as the starting point is randomized, systematic sampling is a type of probability sampling. It is easy to implement and the stratification induced can make it efficient, if the variable by which the list is ordered is correlated with the variable of interest. 'Every 10th' sampling is especially useful for efficient sampling from databases.
For example, suppose we wish to sample people from a long street that starts in a poor area (house No. 1) and ends in an expensive district (house No. 1000). A simple random selection of addresses from this street could easily end up with too many from the high end and too few from the low end (or vice versa), leading to an unrepresentative sample. Selecting (e.g.) every 10th street number along the street ensures that the sample is spread evenly along the length of the street, representing all of these districts. (If we always start at house #1 and end at #991, the sample is slightly biased towards the low end; by randomly selecting the start between #1 and #10, this bias is eliminated.)
However, systematic sampling is especially vulnerable to periodicities in the list. If periodicity is present and the period is a multiple or factor of the interval used, the sample is especially likely to be unrepresentative of the overall population, making the scheme less accurate than simple random sampling.
For example, consider a street where the odd-numbered houses are all on the north (expensive) side of the road, and the even-numbered houses are all on the south (cheap) side. Under the sampling scheme given above, it is impossible to get a representative sample; either the houses sampled will all be from the odd-numbered, expensive side, or they will all be from the even-numbered, cheap side, unless the researcher has previous knowledge of this bias and avoids it by using a skip which ensures jumping between the two sides (any odd-numbered skip).
Another drawback of systematic sampling is that even in scenarios where it is more accurate than SRS, its theoretical properties make it difficult to quantify that accuracy. (In the two examples of systematic sampling that are given above, much of the potential sampling error is due to variation between neighbouring houses – but because this method never selects two neighbouring houses, the sample will not give us any information on that variation.)
As described above, systematic sampling is an EPS method, because all elements have the same probability of selection (in the example given, one in ten). It is not 'simple random sampling' because different subsets of the same size have different selection probabilities – e.g. the set {4,14,24,...,994} has a one-in-ten probability of selection, but the set {4,13,24,34,...} has zero probability of selection.
Systematic sampling can also be adapted to a non-EPS approach; for an example, see discussion of PPS samples below.
=== Stratified sampling ===
When the population embraces a number of distinct categories, the frame can be organized by these categories into separate "strata." Each stratum is then sampled as an independent sub-population, out of which individual elements can be randomly selected. The ratio of the size of this random selection (or sample) to the size of the population is called a sampling fraction. There are several potential benefits to stratified sampling.
First, dividing the population into distinct, independent strata can enable researchers to draw inferences about specific subgroups that may be lost in a more generalized random sample.
Second, utilizing a stratified sampling method can lead to more efficient statistical estimates (provided that strata are selected based upon relevance to the criterion in question, instead of availability of the samples). Even if a stratified sampling approach does not lead to increased statistical efficiency, such a tactic will not result in less efficiency than would simple random sampling, provided that each stratum is proportional to the group's size in the population.
Third, it is sometimes the case that data are more readily available for individual, pre-existing strata within a population than for the overall population; in such cases, using a stratified sampling approach may be more convenient than aggregating data across groups (though this may potentially be at odds with the previously noted importance of utilizing criterion-relevant strata).
Finally, since each stratum is treated as an independent population, different sampling approaches can be applied to different strata, potentially enabling researchers to use the approach best suited (or most cost-effective) for each identified subgroup within the population.
There are, however, some potential drawbacks to using stratified sampling. First, identifying strata and implementing such an approach can increase the cost and complexity of sample selection, as well as leading to increased complexity of population estimates. Second, when examining multiple criteria, stratifying variables may be related to some, but not to others, further complicating the design, and potentially reducing the utility of the strata. Finally, in some cases (such as designs with a large number of strata, or those with a specified minimum sample size per group), stratified sampling can potentially require a larger sample than would other methods (although in most cases, the required sample size would be no larger than would be required for simple random sampling).
A stratified sampling approach is most effective when three conditions are met:
Variability within strata is minimized
Variability between strata is maximized
The variables upon which the population is stratified are strongly correlated with the desired dependent variable.
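A minimal sketch of proportional allocation across strata, assuming each stratum is sampled by simple random sampling (the strata, their sizes, and the unit labels are hypothetical):

```python
import random

strata = {                                      # hypothetical stratum frames
    "urban":    [f"u{i}" for i in range(600)],
    "suburban": [f"s{i}" for i in range(300)],
    "rural":    [f"r{i}" for i in range(100)],
}
population_size = sum(len(units) for units in strata.values())
sample_size = 50

sample = {}
for name, units in strata.items():
    n_h = round(sample_size * len(units) / population_size)   # proportional allocation
    sample[name] = random.sample(units, n_h)                   # SRS within each stratum

print({name: len(chosen) for name, chosen in sample.items()})  # {'urban': 30, 'suburban': 15, 'rural': 5}
```

Each stratum's share of the sample matches its share of the population, so the resulting design is self-weighting in the sense described earlier.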
Advantages over other sampling methods
Focuses on important subpopulations and ignores irrelevant ones.
Allows use of different sampling techniques for different subpopulations.
Improves the accuracy/efficiency of estimation.
Permits greater balancing of statistical power of tests of differences between strata by sampling equal numbers from strata varying widely in size.
Disadvantages
Requires selection of relevant stratification variables which can be difficult.
Is not useful when there are no homogeneous subgroups.
Can be expensive to implement.
Poststratification
Stratification is sometimes introduced after the sampling phase in a process called "poststratification". This approach is typically implemented due to a lack of prior knowledge of an appropriate stratifying variable or when the experimenter lacks the necessary information to create a stratifying variable during the sampling phase. Although the method is susceptible to the pitfalls of post hoc approaches, it can provide several benefits in the right situation. Implementation usually follows a simple random sample. In addition to allowing for stratification on an ancillary variable, poststratification can be used to implement weighting, which can improve the precision of a sample's estimates.
Oversampling
Choice-based sampling or oversampling is one of the stratified sampling strategies. In choice-based sampling, the data are stratified on the target and a sample is taken from each stratum so that rarer target classes will be more represented in the sample. The model is then built on this biased sample. The effects of the input variables on the target are often estimated with more precision with the choice-based sample even when a smaller overall sample size is taken, compared to a random sample. The results usually must be adjusted to correct for the oversampling.
=== Probability-proportional-to-size sampling ===
In some cases the sample designer has access to an "auxiliary variable" or "size measure", believed to be correlated to the variable of interest, for each element in the population. These data can be used to improve accuracy in sample design. One option is to use the auxiliary variable as a basis for stratification, as discussed above.
Another option is probability proportional to size ('PPS') sampling, in which the selection probability for each element is set to be proportional to its size measure, up to a maximum of 1. In a simple PPS design, these selection probabilities can then be used as the basis for Poisson sampling. However, this has the drawback of variable sample size, and different portions of the population may still be over- or under-represented due to chance variation in selections.
Systematic sampling theory can be used to create a probability proportionate to size sample. This is done by treating each count within the size variable as a single sampling unit. Samples are then identified by selecting at even intervals among these counts within the size variable. This method is sometimes called PPS-sequential or monetary unit sampling in the case of audits or forensic sampling.
Example: Suppose we have six schools with populations of 150, 180, 200, 220, 260, and 490 students respectively (total 1500 students), and we want to use student population as the basis for a PPS sample of size three. To do this, we could allocate the first school numbers 1 to 150, the second school 151 to 330 (= 150 + 180), the third school 331 to 530, and so on to the last school (1011 to 1500). We then generate a random start between 1 and 500 (equal to 1500/3) and count through the school populations by multiples of 500. If our random start was 137, we would select the schools which have been allocated numbers 137, 637, and 1137, i.e. the first, fourth, and sixth schools.
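A minimal sketch of this cumulative-size (PPS-systematic) selection, reproducing the school example above (the random start is fixed at 137 only to match the worked numbers; in practice it would be drawn at random between 1 and the interval):

```python
# Cumulative-size selection for the six-school example.
sizes = [150, 180, 200, 220, 260, 490]      # student populations; total 1500
sample_size = 3
interval = sum(sizes) // sample_size        # 1500 / 3 = 500

start = 137                                 # fixed to match the text; normally random
targets = [start + i * interval for i in range(sample_size)]   # 137, 637, 1137

boundaries = []                             # running totals: 150, 330, 530, 750, 1010, 1500
cumulative = 0
for size in sizes:
    cumulative += size
    boundaries.append(cumulative)

selected = []
for t in targets:
    for school, upper in enumerate(boundaries, start=1):
        if t <= upper:                      # the target falls within this school's range
            selected.append(school)
            break

print(selected)                             # [1, 4, 6]: the first, fourth and sixth schools
```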
The PPS approach can improve accuracy for a given sample size by concentrating sample on large elements that have the greatest impact on population estimates. PPS sampling is commonly used for surveys of businesses, where element size varies greatly and auxiliary information is often available – for instance, a survey attempting to measure the number of guest-nights spent in hotels might use each hotel's number of rooms as an auxiliary variable. In some cases, an older measurement of the variable of interest can be used as an auxiliary variable when attempting to produce more current estimates.
=== Cluster sampling ===
Sometimes it is more cost-effective to select respondents in groups ('clusters'). Sampling is often clustered by geography, or by time periods. (Nearly all samples are in some sense 'clustered' in time – although this is rarely taken into account in the analysis.) For instance, if surveying households within a city, we might choose to select 100 city blocks and then interview every household within the selected blocks.
Clustering can reduce travel and administrative costs. In the example above, an interviewer can make a single trip to visit several households in one block, rather than having to drive to a different block for each household.
It also means that one does not need a sampling frame listing all elements in the target population. Instead, clusters can be chosen from a cluster-level frame, with an element-level frame created only for the selected clusters. In the example above, the sample only requires a block-level city map for initial selections, and then a household-level map of the 100 selected blocks, rather than a household-level map of the whole city.
Cluster sampling (also known as clustered sampling) generally increases the variability of sample estimates above that of simple random sampling, depending on how the clusters differ between one another as compared to the within-cluster variation. For this reason, cluster sampling requires a larger sample than SRS to achieve the same level of accuracy – but cost savings from clustering might still make this a cheaper option.
Cluster sampling is commonly implemented as multistage sampling. This is a complex form of cluster sampling in which two or more levels of units are embedded one in the other. The first stage consists of constructing the clusters that will be used to sample from. In the second stage, a sample of primary units is randomly selected from each cluster (rather than using all units contained in all selected clusters). In following stages, in each of those selected clusters, additional samples of units are selected, and so on. All ultimate units (individuals, for instance) selected at the last step of this procedure are then surveyed. This technique, thus, is essentially the process of taking random subsamples of preceding random samples.
Multistage sampling can substantially reduce sampling costs, where the complete population list would need to be constructed (before other sampling methods could be applied). By eliminating the work involved in describing clusters that are not selected, multistage sampling can reduce the large costs associated with traditional cluster sampling. However, each sample may not be a full representative of the whole population.
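A toy two-stage sketch, assuming a hypothetical block-level frame of 500 city blocks and constructing household lists only for the 100 selected blocks, could look like this:

```python
import random

# Two-stage (cluster) sampling sketch: sample whole blocks first, then sample
# households within each selected block. All names and sizes are hypothetical.
rng = random.Random(1)

block_frame = [f"block_{i}" for i in range(500)]   # cluster-level frame only
stage1 = rng.sample(block_frame, k=100)            # stage 1: select 100 blocks

def households_in(block):
    # In practice this element-level list would be compiled in the field for
    # selected blocks only; here it is simulated.
    return [f"{block}_hh_{j}" for j in range(rng.randint(20, 60))]

stage2 = {b: rng.sample(households_in(b), k=10) for b in stage1}   # stage 2
print(sum(len(v) for v in stage2.values()))        # 1000 selected households
```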
=== Quota sampling ===
In quota sampling, the population is first segmented into mutually exclusive sub-groups, just as in stratified sampling. Then judgement is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the age of 45 and 60.
It is this second step which makes the technique one of non-probability sampling. In quota sampling, the selection of the sample is non-random. For example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years.
=== Minimax sampling ===
In imbalanced datasets, where the sampling ratio does not follow the population statistics, one can resample the dataset in a conservative manner called minimax sampling. Minimax sampling has its origin in the Anderson minimax ratio, whose value is proved to be 0.5: in a binary classification, the class-sample sizes should be chosen equally. This ratio can be proved to be the minimax ratio only under the assumption of an LDA classifier with Gaussian distributions. The notion of minimax sampling has recently been developed for a general class of classification rules, called class-wise smart classifiers. In this case, the sampling ratio of the classes is selected so that the worst-case classifier error, over all possible population statistics for the class prior probabilities, is minimized.
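As an illustration of the equal class-sample-size case (the 0.5 ratio mentioned above), a conservative resampling step might simply downsample the larger class; the data below are hypothetical:

```python
import random

# Sketch of conservative (minimax) resampling to equal class-sample sizes by
# downsampling the larger class to match the smaller one.
def minimax_resample(class_a, class_b, seed=0):
    rng = random.Random(seed)
    n = min(len(class_a), len(class_b))
    return rng.sample(class_a, n), rng.sample(class_b, n)

majority = list(range(900))     # 900 hypothetical majority-class examples
minority = list(range(100))     # 100 hypothetical minority-class examples
a, b = minimax_resample(majority, minority)
print(len(a), len(b))           # 100 100 -> equal class-sample sizes
```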
=== Accidental sampling ===
Accidental sampling (sometimes known as grab, convenience or opportunity sampling) is a type of nonprobability sampling which involves the sample being drawn from that part of the population which is close to hand. That is, a population is selected because it is readily available and convenient. It may be through meeting the person, including a person in the sample when one meets them, or finding them through technological means such as the internet or the telephone. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough. For example, if the interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people that they could interview would be limited to those present there at that given time, and would not represent the views that might be obtained if the survey were conducted at other times of day and on other days of the week. This type of sampling is most useful for pilot testing. Several important considerations for researchers using convenience samples include:
Are there controls within the research design or experiment which can serve to lessen the impact of a non-random convenience sample, thereby ensuring the results will be more representative of the population?
Is there good reason to believe that a particular convenience sample would or should respond or behave differently than a random sample from the same population?
Is the question being asked by the research one that can adequately be answered using a convenience sample?
In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample. Some variants of snowball sampling, such as respondent driven sampling, allow calculation of selection probabilities and are probability sampling methods under certain conditions.
=== Voluntary sampling ===
The voluntary sampling method is a type of non-probability sampling. Volunteers choose to complete a survey.
Volunteers may be invited through advertisements in social media. The target population for the advertisements can be selected by characteristics like location, age, sex, income, occupation, education, or interests using tools provided by the social media platform. The advertisement may include a message about the research and a link to a survey. After following the link and completing the survey, the volunteer submits the data to be included in the sample population. This method can reach a global population but is limited by the campaign budget. Volunteers outside the invited population may also be included in the sample.
It is difficult to make generalizations from this sample because it may not represent the total population. Often, volunteers have a strong interest in the main topic of the survey.
=== Line-intercept sampling ===
Line-intercept sampling is a method of sampling elements in a region whereby an element is sampled if a chosen line segment, called a "transect", intersects the element.
=== Panel sampling ===
Panel sampling is the method of first selecting a group of participants through a random sampling method and then asking that group for (potentially the same) information several times over a period of time. Therefore, each participant is interviewed at two or more time points; each period of data collection is called a "wave". The method was developed by sociologist Paul Lazarsfeld in 1938 as a means of studying political campaigns. This longitudinal sampling method allows estimates of changes in the population, with regard to topics ranging from chronic illness to job stress to weekly food expenditures. Panel sampling can also be used to inform researchers about within-person health changes due to age or to help explain changes in continuous dependent variables such as spousal interaction. There have been several proposed methods of analyzing panel data, including MANOVA, growth curves, and structural equation modeling with lagged effects.
=== Snowball sampling ===
Snowball sampling involves finding a small group of initial respondents and using them to recruit more respondents. It is particularly useful in cases where the population is hidden or difficult to enumerate.
=== Theoretical sampling ===
Theoretical sampling occurs when samples are selected on the basis of the results of the data collected so far, with the goal of developing a deeper understanding of the area or of developing theories. An initial, general sample is collected to investigate general trends; further sampling may then target extreme or very specific cases in order to maximize the likelihood that a phenomenon will actually be observable.
=== Active sampling ===
In active sampling, the samples which are used for training a machine learning algorithm are actively selected; compare active learning (machine learning).
=== Judgmental selection ===
Judgement sampling, also known as expert or purposive sampling, is a type of non-random sampling where samples are selected based on the opinion of an expert, who can select participants according to how valuable the information they provide is expected to be.
=== Haphazard sampling ===
Haphazard sampling refers to the idea of using human judgement to simulate randomness. Although samples are hand-picked, the goal is to ensure that no conscious bias enters the choice of samples; this often fails because of selection bias. Haphazard sampling is generally chosen for its convenience, when the tools or capacity to perform other sampling methods may not exist.
The major weakness of such samples is that they often do not represent the characteristics of the entire population, but just a segment of the population. Because of this unbalanced representation, results from haphazard sampling are often biased.
== Replacement of selected units ==
Sampling schemes may be without replacement ('WOR' – no element can be selected more than once in the same sample) or with replacement ('WR' – an element may appear multiple times in the one sample). For example, if we catch fish, measure them, and immediately return them to the water before continuing with the sample, this is a WR design, because we might end up catching and measuring the same fish more than once. However, if we do not return the fish to the water, or if we tag and release each fish after catching it, this becomes a WOR design.
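The distinction can be illustrated with Python's standard library, where random.sample draws without replacement and random.choices draws with replacement; the fish population below is hypothetical:

```python
import random

fish_tags = list(range(1, 51))        # hypothetical population of 50 tagged fish

wor = random.sample(fish_tags, k=10)  # without replacement: no repeats possible
wr = random.choices(fish_tags, k=10)  # with replacement: the same fish may recur

print(len(set(wor)) == 10)            # always True for a WOR draw
print(len(set(wr)) <= 10)             # a WR draw may contain duplicates
```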
== Sample size determination ==
Formulas, tables, and power function charts are well-known approaches to determining sample size; a formula-based sketch is given after the steps below.
Steps for using sample size tables:
Postulate the effect size of interest, α, and β.
Check sample size table
Select the table corresponding to the selected α
Locate the row corresponding to the desired power
Locate the column corresponding to the estimated effect size.
The intersection of the column and row is the minimum sample size required.
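As a formula-based counterpart to such tables, the normal-approximation sample size for a two-sided one-sample test of a mean with standardized effect size d, significance level α, and power 1 − β can be sketched as follows; this is a simplified illustration, not a replacement for the exact tables described above:

```python
import math
from statistics import NormalDist

# Normal-approximation sample size for a two-sided one-sample test of a mean;
# d is the standardized effect size, alpha the significance level, and
# power = 1 - beta. These are the same quantities looked up in the tables.
def sample_size(d, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) / d) ** 2
    return math.ceil(n)

print(sample_size(d=0.5))   # about 32 for a medium effect size
```

For a medium effect size (d = 0.5) at α = 0.05 and 80% power this gives roughly 32 observations; a t-based table would give a slightly larger value.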
== Sampling and data collection ==
Good data collection involves:
Following the defined sampling process
Keeping the data in time order
Noting comments and other contextual events
Recording non-responses
== Applications of sampling ==
Sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. For example, there are about 600 million tweets produced every day. It is not necessary to look at all of them to determine the topics that are discussed during the day, nor is it necessary to look at all the tweets to determine the sentiment on each of the topics. A theoretical formulation for sampling Twitter data has been developed.
In manufacturing, different types of sensor data such as acoustics, vibration, pressure, current, voltage, and controller data are available at short time intervals. To predict down-time, it may not be necessary to look at all the data; a sample may be sufficient.
== Errors in sample surveys ==
Survey results are typically subject to some error. Total errors can be classified into sampling errors and non-sampling errors. The term "error" here includes systematic biases as well as random errors.
=== Sampling errors and biases ===
Sampling errors and biases are induced by the sample design. They include:
Selection bias: When the true selection probabilities differ from those assumed in calculating the results.
Random sampling error: Random variation in the results due to the elements in the sample being selected at random.
=== Non-sampling error ===
Non-sampling errors are other errors which can impact final survey estimates, caused by problems in data collection, processing, or sample design. Such errors may include:
Over-coverage: inclusion of data from outside of the population
Under-coverage: sampling frame does not include elements in the population.
Measurement error: e.g. when respondents misunderstand a question, or find it difficult to answer
Processing error: mistakes in data coding
Non-response or Participation bias: failure to obtain complete data from all selected individuals
After sampling, a review is held of the exact process followed in sampling, rather than that intended, in order to study any effects that any divergences might have on subsequent analysis.
A particular problem involves non-response. Two major types of non-response exist:
unit nonresponse (lack of completion of any part of the survey)
item non-response (submission or participation in survey but failing to complete one or more components/questions of the survey)
In survey sampling, many of the individuals identified as part of the sample may be unwilling to participate, not have the time to participate (opportunity cost), or survey administrators may not have been able to contact them. In this case, there is a risk of differences between respondents and nonrespondents, leading to biased estimates of population parameters. This is often addressed by improving survey design, offering incentives, and conducting follow-up studies which make a repeated attempt to contact the unresponsive and to characterize their similarities and differences with the rest of the frame. The effects can also be mitigated by weighting the data (when population benchmarks are available) or by imputing data based on answers to other questions. Nonresponse is particularly a problem in internet sampling. Reasons for this problem may include improperly designed surveys, over-surveying (or survey fatigue), and the fact that potential participants may have multiple e-mail addresses, which they do not use anymore or do not check regularly.
== Survey weights ==
In many situations, the sample fraction may be varied by stratum and data will have to be weighted to correctly represent the population. Thus for example, a simple random sample of individuals in the United Kingdom might not include some in remote Scottish islands who would be inordinately expensive to sample. A cheaper method would be to use a stratified sample with urban and rural strata. The rural sample could be under-represented in the sample, but weighted up appropriately in the analysis to compensate.
More generally, data should usually be weighted if the sample design does not give each individual an equal chance of being selected. For instance, when households have equal selection probabilities but one person is interviewed from within each household, this gives people from large households a smaller chance of being interviewed. This can be accounted for using survey weights. Similarly, households with more than one telephone line have a greater chance of being selected in a random digit dialing sample, and weights can adjust for this.
Weights can also serve other purposes, such as helping to correct for non-response.
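A minimal sketch of such design weights, using the one-adult-interviewed-per-household case described above with hypothetical respondent records, is shown below:

```python
# Design weights as inverse selection probabilities; the respondent records
# and income figures are hypothetical.
respondents = [
    {"income": 21_000, "adults_in_household": 1},
    {"income": 34_000, "adults_in_household": 2},
    {"income": 55_000, "adults_in_household": 4},
]

for r in respondents:
    # With equal-probability household selection and one adult interviewed per
    # household, an adult's chance of selection is proportional to
    # 1 / adults_in_household, so the design weight is the inverse.
    r["weight"] = r["adults_in_household"]

weighted_mean = (sum(r["income"] * r["weight"] for r in respondents)
                 / sum(r["weight"] for r in respondents))
print(round(weighted_mean))   # weights up adults from larger households
```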
== Methods of producing random samples ==
Random number table
Mathematical algorithms for pseudo-random number generators
Physical randomization devices such as coins, playing cards or sophisticated devices such as ERNIE
== See also ==
== Notes ==
The textbook by Groves et alia provides an overview of survey methodology, including recent literature on questionnaire development (informed by cognitive psychology):
Robert Groves, et alia. Survey methodology (2010 2nd ed. [2004]) ISBN 0-471-48348-6.
The other books focus on the statistical theory of survey sampling and require some knowledge of basic statistics, as discussed in the following textbooks:
David S. Moore and George P. McCabe (February 2005). "Introduction to the practice of statistics" (5th edition). W.H. Freeman & Company. ISBN 0-7167-6282-X.
Freedman, David; Pisani, Robert; Purves, Roger (2007). Statistics (4th ed.). New York: Norton. ISBN 978-0-393-92972-0.
The elementary book by Scheaffer et alia uses quadratic equations from high-school algebra:
Scheaffer, Richard L., William Mendenhall and R. Lyman Ott. Elementary survey sampling, Fifth Edition. Belmont: Duxbury Press, 1996.
More mathematical statistics is required for Lohr, for Särndal et alia, and for Cochran:
Cochran, William G. (1977). Sampling techniques (Third ed.). Wiley. ISBN 978-0-471-16240-7.
Lohr, Sharon L. (1999). Sampling: Design and analysis. Duxbury. ISBN 978-0-534-35361-2.
Särndal, Carl-Erik; Swensson, Bengt; Wretman, Jan (1992). Model assisted survey sampling. Springer-Verlag. ISBN 978-0-387-40620-6.
The historically important books by Deming and Kish remain valuable for insights for social scientists (particularly about the U.S. census and the Institute for Social Research at the University of Michigan):
Deming, W. Edwards (1966). Some Theory of Sampling. Dover Publications. ISBN 978-0-486-64684-8. OCLC 166526.
Kish, Leslie (1995) Survey Sampling, Wiley, ISBN 0-471-10949-5
== References ==
== Further reading ==
Singh, G N, Jaiswal, A. K., and Pandey A. K. (2021), Improved Imputation Methods for Missing Data in Two-Occasion Successive Sampling, Communications in Statistics: Theory and Methods. DOI:10.1080/03610926.2021.1944211
Chambers, R L, and Skinner, C J (editors) (2003), Analysis of Survey Data, Wiley, ISBN 0-471-89987-9
Deming, W. Edwards (1975) On probability as a basis for action, The American Statistician, 29(4), pp. 146–152.
Gy, P (2012) Sampling of Heterogeneous and Dynamic Material Systems: Theories of Heterogeneity, Sampling and Homogenizing, Elsevier Science, ISBN 978-0444556066
Korn, E.L., and Graubard, B.I. (1999) Analysis of Health Surveys, Wiley, ISBN 0-471-13773-1
Lucas, Samuel R. (2012). "Beyond the Existence Proof: Ontological Conditions, Epistemological Implications, and In-Depth Interview Research", Quality & Quantity, doi:10.1007/s11135-012-9775-3.
Stuart, Alan (1962) Basic Ideas of Scientific Sampling, Hafner Publishing Company, New York
Smith, T. M. F. (1984). "Present Position and Potential Developments: Some Personal Views: Sample surveys". Journal of the Royal Statistical Society, Series A. 147 (The 150th Anniversary of the Royal Statistical Society, number 2): 208–221. doi:10.2307/2981677. JSTOR 2981677.
Smith, T. M. F. (1993). "Populations and Selection: Limitations of Statistics (Presidential address)". Journal of the Royal Statistical Society, Series A. 156 (2): 144–166. doi:10.2307/2982726. JSTOR 2982726. (Portrait of T. M. F. Smith on page 144)
Smith, T. M. F. (2001). "Centenary: Sample surveys". Biometrika. 88 (1): 167–243. doi:10.1093/biomet/88.1.167.
Smith, T. M. F. (2001). "Biometrika centenary: Sample surveys". In D. M. Titterington and D. R. Cox (ed.). Biometrika: One Hundred Years. Oxford University Press. pp. 165–194. ISBN 978-0-19-850993-6.
Whittle, P. (May 1954). "Optimum preventative sampling". Journal of the Operations Research Society of America. 2 (2): 197–203. doi:10.1287/opre.2.2.197. JSTOR 166605.
== Standards ==
=== ISO ===
ISO 2859 series
ISO 3951 series
=== ASTM ===
ASTM E105 Standard Practice for Probability Sampling Of Materials
ASTM E122 Standard Practice for Calculating Sample Size to Estimate, With a Specified Tolerable Error, the Average for Characteristic of a Lot or Process
ASTM E141 Standard Practice for Acceptance of Evidence Based on the Results of Probability Sampling
ASTM E1402 Standard Terminology Relating to Sampling
ASTM E1994 Standard Practice for Use of Process Oriented AOQL and LTPD Sampling Plans
ASTM E2234 Standard Practice for Sampling a Stream of Product by Attributes Indexed by AQL
=== ANSI, ASQ ===
ANSI/ASQ Z1.4
=== U.S. federal and military standards ===
MIL-STD-105
MIL-STD-1916
== External links ==
Media related to Sampling (statistics) at Wikimedia Commons | Wikipedia/Sampling_method |
A theory of art is intended to contrast with a definition of art. Traditionally, definitions are composed of necessary and sufficient conditions, and a single counterexample overthrows such a definition. Theorizing about art, on the other hand, is analogous to a theory of a natural phenomenon like gravity. In fact, the intent behind a theory of art is to treat art as a natural phenomenon that should be investigated like any other. The question of whether one can speak of a theory of art without employing a concept of art is also discussed below.
The motivation behind seeking a theory, rather than a definition, is that our best minds have not been able to find definitions without counterexamples. The term "definition" assumes that concepts exist, along roughly Platonic lines, that a definition is an attempt to reach in and pluck out the essence of a concept, and that at least some people have intellectual access to these concepts. In contrast, a 'conception' is an individual attempt to grasp at the putative essence behind this common term, while nobody has "access" to the concept itself.
A theory of art presumes that each of us employs different conceptions of this unattainable art concept and as a result we must resort to worldly human investigation.
== Aesthetic response ==
Theories of aesthetic response or functional theories of art are in many ways the most intuitive theories of art. At its base, the term "aesthetic" refers to a type of phenomenal experience, and aesthetic definitions identify artworks with artifacts intended to produce aesthetic experiences. Nature can be beautiful and it can produce aesthetic experiences, but nature does not possess the intentional function of producing those experiences. For such a function, an intention is necessary, and thus agency – the artist.
Monroe Beardsley is commonly associated with aesthetic definitions of art. In Beardsley's words, something is art just in case it is "either an arrangement of conditions intended to be capable of affording an experience with marked aesthetic character or (incidentally) an arrangement belonging to a class or type of arrangements that is typically intended to have this capacity" (The aesthetic point of view: selected essays, 1982, 299). Painters arrange "conditions" in the paint/canvas medium, and dancers arrange the "conditions" of their bodily medium, for example. According to Beardsley's first disjunct, art has an intended aesthetic function, but not all artworks succeed in producing aesthetic experiences. The second disjunct allows for artworks that were intended to have this capacity, but failed at it (bad art).
Marcel Duchamp's Fountain is the paradigmatic counterexample to aesthetic definitions of art. Such works are said to be counterexamples because they are artworks that do not possess an intended aesthetic function. Beardsley replies that either such works are not art or they are "comments on art" (1983): "To classify them [Fountain and the like] as artworks just because they make comments on art would be to classify a lot of dull and sometimes unintelligible magazine articles and newspaper reviews as artworks" (p. 25). This response has been widely considered inadequate: it is either question-begging or it relies on an arbitrary distinction between artworks and commentaries on artworks. A great many art theorists today consider aesthetic definitions of art to be extensionally inadequate, primarily because of artworks in the style of Duchamp.
== Formalist ==
The formalist theory of art asserts that we should focus only on the formal properties of art—the "form", not the "content". Those formal properties might include, for the visual arts, color, shape, and line, and, for the musical arts, rhythm and harmony. Formalists do not deny that works of art might have content, representation, or narrative; rather, they deny that those things are relevant to our appreciation or understanding of art.
== Institutional ==
The institutional theory of art is a theory about the nature of art that holds that an object can only become art in the context of the institution known as "the art world".
Addressing the issue of what makes, for example, Marcel Duchamp's "readymades" art, or why a pile of Brillo cartons in a supermarket is not art, whereas Andy Warhol's famous Brillo Boxes (a pile of Brillo carton replicas) is, the art critic and philosopher Arthur Danto wrote in his 1964 essay "The Artworld":
To see something as art requires something the eye cannot descry—an atmosphere of artistic theory, a knowledge of the history of art: an artworld.
According to Robert J. Yanal, Danto's essay, in which he coined the term artworld, outlined the first institutional theory of art.
Versions of the institutional theory were formulated more explicitly by George Dickie in his article "Defining Art" (American Philosophical Quarterly, 1969) and his books Aesthetics: An Introduction (1971) and Art and the Aesthetic: An Institutional Analysis (1974). An early version of Dickie's institutional theory can be summed up in the following definition of work of art from Aesthetics: An Introduction:
A work of art in the classificatory sense is 1) an artifact 2) on which some person or persons acting on behalf of a certain social institution (the artworld) has conferred the status of candidate for appreciation.
Dickie has reformulated his theory in several books and articles. Other philosophers of art have criticized his definitions as being circular.
== Historical ==
Historical theories of art hold that for something to be art, it must bear some relation to existing works of art. For new works to be art, they must be similar or relate to previously established artworks. Such a definition raises the question of where this inherited status originated. That is why historical definitions of art must also include a disjunct for first art: Something is art if it possesses a historical relation to previous artworks, or is first art.
The philosopher primarily associated with the historical definition of art is Jerrold Levinson (1979). For Levinson, "a work of art is a thing intended for regard-as-a-work-of-art: regard in any of the ways works of art existing prior to it have been correctly regarded" (1979, p. 234). Levinson further clarifies that by "intends for" he means: "[M]akes, appropriates or conceives for the purpose of" (1979, p. 236). Some of these established ways of regarding (at around the present time) are: to be regarded with full attention, to be regarded contemplatively, to be regarded with special notice to appearance, to be regarded with "emotional openness" (1979, p. 237). If an object is not intended for regard in any of the established ways, then it is not art.
== Anti-essentialist ==
Some art theorists have proposed that the attempt to define art must be abandoned and have instead urged an anti-essentialist theory of art. In 'The Role of Theory in Aesthetics' (1956), Morris Weitz famously argues that individually necessary and jointly sufficient conditions will never be forthcoming for the concept 'art' because it is an "open concept". Weitz describes open concepts as those whose "conditions of application are emendable and corrigible" (1956, p. 31). In the case of borderline cases of art and prima facie counterexamples, open concepts "call for some sort of decision on our part to extend the use of the concept to cover this, or to close the concept and invent a new one to deal with the new case and its new property" (p. 31 ital. in original). The question of whether a new artifact is art "is not factual, but rather a decision problem, where the verdict turns on whether or not we enlarge our set of conditions for applying the concept" (p. 32). For Weitz, it is "the very expansive, adventurous character of art, its ever-present changes and novel creations", that makes the concept impossible to capture in a classical definition (as some static univocal essence).
While anti-essentialism was never formally defeated, it was challenged, and the debate over anti-essentialist theories was subsequently swept away by seemingly better essentialist definitions. Writing decades after Weitz, Berys Gaut revived anti-essentialism in the philosophy of art with his paper '"Art" as a Cluster Concept' (2000). Cluster concepts are composed of criteria that contribute to art status but are not individually necessary for art status. There is one exception: artworks are created by agents, and so being an artifact is a necessary property for being an artwork. Gaut (2005) offers a set of ten criteria that contribute to art status:
(i) possessing positive aesthetic qualities (I employ the notion of positive aesthetic qualities here in a narrow sense, comprising beauty and its subspecies);
(ii) being expressive of emotion;
(iii) being intellectually challenging;
(iv) being formally complex and coherent;
(v) having a capacity to convey complex meanings;
(vi) exhibiting an individual point of view;
(vii) being an exercise of creative imagination;
(viii) being an artifact or performance that is the product of a high degree of skill;
(ix) belonging to an established artistic form; and
(x) being the product of an intention to make a work of art. (274)
Satisfying all ten criteria would be sufficient for art, as might any subset formed by nine criteria (this is a consequence of the fact that none of the ten properties is necessary). For example, consider two of Gaut's criteria: "possessing aesthetic merit" and "being expressive of emotion" (2000, p. 28). Neither of these criteria is necessary for art status, but both are parts of subsets of these ten criteria that are sufficient for art status. Gaut's definition also allows for many subsets with fewer than nine criteria to be sufficient for art status, which leads to a highly pluralistic theory of art.
In 2021, the philosopher Jason Josephson Storm defended anti-essentialist definitions of art as part of a broader analysis of the role of macro-categories in the human sciences. Specifically, he argued that most essentialist attempts to answer Weitz's original argument fail because the criteria they propose to define art are not themselves present or identical across cultures (p. 64). Storm went further and argued that Weitz's appeal to family resemblance to define art without essentialism is ultimately circular, because it does not explain why similarities between "art" across cultures are relevant to defining it even anti-essentially (pp. 77–82). Instead, Storm applied a theory of social kinds to the category "art" that emphasized how different forms of art fulfill different "cultural niches" (p. 124).
The theory of art has also been shaped by a broader philosophical turn, exemplified not only by the aesthetics of Kant but tied more closely to ontology and metaphysics through the reflections of Heidegger on the essence of modern technology and its reduction of all beings to what he calls 'standing reserve'. It is from this perspective on the question of being that Heidegger explored art beyond the history, theory, and criticism of artistic production, most influentially in The Origin of the Work of Art. This has also had an impact on the philosophical roots of architectural thinking.
== Aesthetic creation ==
Zangwill describes the aesthetic-creation theory of art as a theory of "how art comes to be produced" (p. 167) and an "artist-based" theory. Zangwill distinguishes three phases in the production of a work of art:
[F]irst, there is the insight that by creating certain nonaesthetic properties, certain aesthetic properties will be realized; second, there is the intention to realize the aesthetic properties in the nonaesthetic properties, as envisaged in the insight; and, third, there is the more or less successful action of realizing the aesthetic properties in the nonaesthetic properties, as envisaged in the insight and intention. (45)
In the creation of an artwork, the insight plays a causal role in bringing about actions sufficient for realizing particular aesthetic properties. Zangwill does not describe this relation in detail, but only says it is "because of" this insight that the aesthetic properties are created.
Aesthetic properties are instantiated by nonaesthetic properties that "include physical properties, such as shape and size, and secondary qualities, such as colours or sounds." (37) Zangwill says that aesthetic properties supervene on the nonaesthetic properties: it is because of the particular nonaesthetic properties it has that the work possesses certain aesthetic properties (and not the other way around).
== What is "art"? ==
Since art often serves functional purposes but sometimes has no function other than to convey or communicate an idea, how best to define the term "art" is a subject of constant contention; many books and journal articles have been published arguing over even the basics of what we mean by the term "art". Theodor Adorno claimed in his Aesthetic Theory (1969), "It is self-evident that nothing concerning art is self-evident." Artists, philosophers, anthropologists, psychologists, and programmers all use the notion of art in their respective fields and give it operational definitions that vary considerably. Furthermore, it is clear that even the basic meaning of the term "art" has changed several times over the centuries, and has continued to evolve during the 20th century as well.
The main recent sense of the word "art" is roughly as an abbreviation for "fine art". Here we mean that skill is being used to express the artist's creativity, engage the audience's aesthetic sensibilities, or draw the audience toward consideration of the "finer" things. Often, if the skill is being used in a functional object, people will consider it a craft instead of art, a suggestion that is highly disputed by many contemporary craft thinkers. Likewise, if the skill is being used in a commercial or industrial way, it may be considered design instead of art, or contrariwise, these may be defended as art forms, perhaps called applied art. Some thinkers, for instance, have argued that the difference between fine art and applied art has more to do with the actual function of the object than any clear definitional difference.
Even as late as 1912, it was normal in the West to assume that all art aims at beauty, and thus that anything that was not trying to be beautiful could not count as art. The cubists, dadaists, Stravinsky, and many later art movements struggled against this conception that beauty was central to the definition of art, with such success that, according to Danto, "Beauty had disappeared not only from the advanced art of the 1960s but from the advanced philosophy of art of that decade as well." Perhaps some notion like "expression" (in Croce's theories) or "counter-environment" (in McLuhan's theory) can replace the previous role of beauty. Brian Massumi brought back "beauty" into consideration together with "expression". Another view, as important to the philosophy of art as "beauty", is that of the "sublime", elaborated upon in the twentieth century by the postmodern philosopher Jean-François Lyotard. A further approach, elaborated by André Malraux in works such as The Voices of Silence, is that art is fundamentally a response to a metaphysical question ("Art", he writes, "is an 'anti-destiny'"). Malraux argues that, while art has sometimes been oriented toward beauty and the sublime (principally in post-Renaissance European art), these qualities, as the wider history of art demonstrates, are by no means essential to it.
Perhaps (as in Kennick's theory) no definition of art is possible anymore. Perhaps art should be thought of as a cluster of related concepts in a Wittgensteinian fashion (as in Weitz or Beuys). Another approach is to say that "art" is basically a sociological category: whatever art schools, museums, and artists define as art is considered art regardless of formal definitions. This "institutional definition of art" (see also Institutional Critique) has been championed by George Dickie. Most people did not consider the depiction of a store-bought urinal or a Brillo box to be art until Marcel Duchamp and Andy Warhol (respectively) placed them in the context of art (i.e., the art gallery), which then linked these objects with the associations that define art.
Proceduralists often suggest that it is the process by which a work of art is created or viewed that makes it art, not any inherent feature of an object, or how well received it is by the institutions of the art world after its introduction to society at large. If a poet writes down several lines, intending them as a poem, the very procedure by which it is written makes it a poem. Whereas if a journalist writes exactly the same set of words, intending them as shorthand notes to help him write a longer article later, these would not be a poem. Leo Tolstoy, on the other hand, claims in his What is art? (1897) that what decides whether something is art is how it is experienced by its audience, not by the intention of its creator. Functionalists like Monroe Beardsley argue that whether a piece counts as art depends on what function it plays in a particular context; the same Greek vase may play a nonartistic function in one context (carrying wine) and an artistic function in another context (helping us appreciate the beauty of the human figure).
Marxist attempts to define art focus on its place in the mode of production, such as in Walter Benjamin's essay The Author as Producer, and/or its political role in class struggle. Revising some concepts of the Marxist philosopher Louis Althusser, Gary Tedman defines art in terms of social reproduction of the relations of production on the aesthetic level.
== What should art be like? ==
Many goals have been argued for art, and aestheticians often argue that some goal or another is superior in some way. Clement Greenberg, for instance, argued in 1960 that each artistic medium should seek that which makes it unique among the possible mediums and then purify itself of anything other than expression of its own uniqueness as a form. The Dadaist Tristan Tzara on the other hand saw the function of art in 1918 as the destruction of a mad social order. "We must sweep and clean. Affirm the cleanliness of the individual after the state of madness, aggressive complete madness of a world abandoned to the hands of bandits." Formal goals, creative goals, self-expression, political goals, spiritual goals, philosophical goals, and even more perceptual or aesthetic goals have all been popular pictures of what art should be like.
== The value of art ==
Tolstoy defined art as the following: "Art is a human activity consisting in this, that one man consciously, by means of certain external signs, hands on to others feelings he has lived through, and that other people are infected by these feelings and also experience them." However, this definition is merely a starting point for his theory of art's value. To some extent, the value of art, for Tolstoy, is one with the value of empathy. However, sometimes empathy is not of value. In chapter fifteen of What Is Art?, Tolstoy says that some feelings are good, but others are bad, and so art is only valuable when it generates empathy or shared feeling for good feelings. For example, Tolstoy asserts that empathy for decadent members of the ruling class makes society worse, rather than better. In chapter sixteen, he asserts that the best art is "universal art" that expresses simple and accessible positive feeling.
An argument for the value of art, used in the fictional work The Hitchhiker's Guide to the Galaxy, runs as follows: if some external force threatening the imminent destruction of Earth asked humanity what its value was, what should humanity's response be? The argument continues that the only justification humanity could give for its continued existence would be the past creation and continued creation of things like a Shakespeare play, a Rembrandt painting or a Bach concerto. The suggestion is that these are the things of value that define humanity. Whatever one might think of this claim — and it does seem to undervalue the many other achievements of which human beings have shown themselves capable, both individually and collectively — it is true that art appears to possess a special capacity to endure ("live on") beyond the moment of its birth, in many cases for centuries or millennia. This capacity of art to endure over time — what precisely it is and how it operates — has been widely neglected in modern aesthetics.
== Set theory of art ==
A set theory of art has been outlined according to the notion that everything is art. States 'higher' and 'lower' than this are then proposed for reference, suggesting that the theory is framed so as to guard against complacency.
Everything is art.
A set example of this would be an eternal set large enough to incorporate everything; an example work of art is Ben Vautier's 'Universe'.
Everything and then some more is art (Everything+)
A set example of this would be an eternal set with a small circle incorporated in it; an example work of art is Aronsson's 'Universe Orange' (which consists of a star map of the universe alongside a natural-sized physical orange).
Everything that can be created (without practical use) is art (Everything-)
A set example of this would be a shadow set (universe), akin to a negative universe.
Everything that can be experienced is art (Everything--)
A set example of this would be a finite set legally interacting with other sets without losing its position as premier set (the whole); an example work of art is a picture of the 'Orion Nebula' (unknown artist).
Everything that exists, has existed, or will ever exist is art (Everything++)
A set example of this would be an infinite set consisting of every parallel universe; an example work of art is Marvel's 'Omniverse'.
== See also ==
Aesthetics, the philosophy of art
Poetics, the theory of poetry
== References == | Wikipedia/Theory_of_art |
Behavioral neuroscience, also known as biological psychology, biopsychology, or psychobiology, is part of the broad, interdisciplinary field of neuroscience, with its primary focus being on the biological and neural substrates underlying human experiences and behaviors, as in our psychology. Derived from an earlier field known as physiological psychology, behavioral neuroscience applies the principles of biology to study the physiological, genetic, and developmental mechanisms of behavior in humans and other animals. Behavioral neuroscientists examine the biological bases of behavior through research that involves neuroanatomical substrates, environmental and genetic factors, effects of lesions and electrical stimulation, developmental processes, recording electrical activity, neurotransmitters, hormonal influences, chemical components, and the effects of drugs. Important topics of consideration for neuroscientific research in behavior include learning and memory, sensory processes, motivation and emotion, as well as genetic and molecular substrates concerning the biological bases of behavior. Subdivisions of behavioral neuroscience include the field of cognitive neuroscience, which emphasizes the biological processes underlying human cognition. Behavioral and cognitive neuroscience are both concerned with the neuronal and biological bases of psychology, with a particular emphasis on either cognition or behavior depending on the field.
== History ==
Behavioral neuroscience as a scientific discipline emerged from a variety of scientific and philosophical traditions in the 18th and 19th centuries. René Descartes proposed physical models to explain animal as well as human behavior. Descartes suggested that the pineal gland, a midline unpaired structure in the brain of many organisms, was the point of contact between mind and body. Descartes also elaborated on a theory in which the pneumatics of bodily fluids could explain reflexes and other motor behavior. This theory was inspired by moving statues in a garden in Paris.
Other philosophers also helped give birth to psychology. One of the earliest textbooks in the new field, The Principles of Psychology by William James, argues that the scientific study of psychology should be grounded in an understanding of biology.
The emergence of psychology and behavioral neuroscience as legitimate sciences can be traced from the emergence of physiology from anatomy, particularly neuroanatomy. Physiologists conducted experiments on living organisms, a practice that was distrusted by the dominant anatomists of the 18th and 19th centuries. The influential work of Claude Bernard, Charles Bell, and William Harvey helped to convince the scientific community that reliable data could be obtained from living subjects.
Even before the 18th and 19th centuries, behavioral neuroscience was beginning to take form, as far back as 1700 B.C. The question that seems to continually arise is: what is the connection between the mind and body? The debate is formally referred to as the mind–body problem. There are two major schools of thought that attempt to resolve the mind–body problem: monism and dualism. Plato and Aristotle are two of several philosophers who participated in this debate. Plato believed that the brain was where all mental thought and processes happened. In contrast, Aristotle believed the brain served the purpose of cooling down the emotions derived from the heart. The mind–body problem was a stepping stone toward attempting to understand the connection between the mind and body.
Another debate arose about localization of function, or functional specialization, versus equipotentiality, which played a significant role in the development of behavioral neuroscience. As a result of localization-of-function research, many prominent figures within psychology have come to various conclusions. Wilder Penfield, working with Rasmussen, was able to develop a map of the cerebral cortex through studying epileptic patients. Research on localization of function has led behavioral neuroscientists to a better understanding of which parts of the brain control behavior. This is best exemplified through the case study of Phineas Gage.
The term "psychobiology" has been used in a variety of contexts, emphasizing the importance of biology, which is the discipline that studies organic, neural and cellular modifications in behavior, plasticity in neuroscience, and biological diseases in all aspects, in addition, biology focuses and analyzes behavior and all the subjects it is concerned about, from a scientific point of view. In this context, psychology helps as a complementary, but important discipline in the neurobiological sciences. The role of psychology in this questions is that of a social tool that backs up the main or strongest biological science. The term "psychobiology" was first used in its modern sense by Knight Dunlap in his book An Outline of Psychobiology (1914). Dunlap also was the founder and editor-in-chief of the journal Psychobiology. In the announcement of that journal, Dunlap writes that the journal will publish research "...bearing on the interconnection of mental and physiological functions", which describes the field of behavioral neuroscience even in its modern sense.
Neuroscience is considered a relatively new discipline, with the first conference of the Society for Neuroscience occurring in 1971. The meeting was held to merge different fields focused on studying the nervous system (e.g. neuroanatomy, neurochemistry, physiological psychology, neuroendocrinology, clinical neurology, neurophysiology, neuropharmacology) by creating one interdisciplinary field. In 1983, the Journal of Comparative and Physiological Psychology, published by the American Psychological Association, was split into two separate journals: Behavioral Neuroscience and the Journal of Comparative Psychology. The editor of the journal at the time gave reasons for this separation, one being that behavioral neuroscience is the broader contemporary advancement of physiological psychology. Furthermore, in all animals, the nervous system is the organ of behavior; therefore, every biological and behavioral variable that influences behavior must act through the nervous system to do so. Present-day research in behavioral neuroscience studies all biological variables which act through the nervous system and relate to behavior.
== Relationship to other fields of psychology and biology ==
In many cases, humans may serve as experimental subjects in behavioral neuroscience experiments; however, a great deal of the experimental literature in behavioral neuroscience comes from the study of non-human species, most frequently rats, mice, and monkeys. As a result, a critical assumption in behavioral neuroscience is that organisms share biological and behavioral similarities, enough to permit extrapolations across species. This allies behavioral neuroscience closely with comparative psychology, ethology, evolutionary biology, and neurobiology. Behavioral neuroscience also has paradigmatic and methodological similarities to neuropsychology, which relies heavily on the study of the behavior of humans with nervous system dysfunction (i.e., a non-experimentally based biological manipulation). Synonyms for behavioral neuroscience include biopsychology, biological psychology, and psychobiology. Physiological psychology is a subfield of behavioral neuroscience, with an appropriately narrower definition.
== Research methods ==
The distinguishing characteristic of a behavioral neuroscience experiment is that either the independent variable of the experiment is biological, or some dependent variable is biological. In other words, the nervous system of the organism under study is permanently or temporarily altered, or some aspect of the nervous system is measured (usually to be related to a behavioral variable).
=== Disabling or decreasing neural function ===
Lesions – A classic method in which a brain region of interest is naturally or intentionally destroyed to observe any resulting changes, such as degraded or enhanced performance on some behavioral measure. Lesions can be placed with relatively high accuracy thanks to a variety of brain 'atlases' which provide maps of brain regions in 3-dimensional stereotactic coordinates.
Surgical lesions – Neural tissue is destroyed by removing it surgically.
Electrolytic lesions – Neural tissue is destroyed through the application of electrical shock trauma.
Chemical lesions – Neural tissue is destroyed by the infusion of a neurotoxin.
Temporary lesions – Neural tissue is temporarily disabled by cooling or by the use of anesthetics such as tetrodotoxin.
Transcranial magnetic stimulation – A new technique usually used with human subjects in which a magnetic coil applied to the scalp causes unsystematic electrical activity in nearby cortical neurons which can be experimentally analyzed as a functional lesion.
Synthetic ligand injection – A receptor activated solely by a synthetic ligand (RASSL), or Designer Receptor Exclusively Activated by Designer Drugs (DREADD), permits spatial and temporal control of G protein signaling in vivo. These systems utilize G protein-coupled receptors (GPCRs) engineered to respond exclusively to synthetic small-molecule ligands, such as clozapine N-oxide (CNO), and not to their natural ligand(s). RASSLs represent a GPCR-based chemogenetic tool. Upon activation, these synthetic ligands can decrease neural function through G-protein signaling, for example by increasing potassium conductance, which attenuates neural activity.
Optogenetic inhibition – A light activated inhibitory protein is expressed in cells of interest. Powerful millisecond timescale neuronal inhibition is instigated upon stimulation by the appropriate frequency of light delivered via fiber optics or implanted LEDs in the case of vertebrates, or via external illumination for small, sufficiently translucent invertebrates. Bacterial Halorhodopsins or Proton pumps are the two classes of proteins used for inhibitory optogenetics, achieving inhibition by increasing cytoplasmic levels of halides (Cl−) or decreasing the cytoplasmic concentration of protons, respectively.
=== Enhancing neural function ===
Electrical stimulation – A classic method in which neural activity is enhanced by application of a small electric current (too small to cause significant cell death).
Psychopharmacological manipulations – A chemical receptor antagonist induces neural activity by interfering with neurotransmission. Antagonists can be delivered systemically (such as by intravenous injection) or locally (intracerebrally) during a surgical procedure into the ventricles or into specific brain structures. For example, NMDA antagonist AP5 has been shown to inhibit the initiation of long term potentiation of excitatory synaptic transmission (in rodent fear conditioning) which is believed to be a vital mechanism in learning and memory.
Synthetic ligand injection – Likewise, Gq-DREADDs can be used to modulate cellular function by activation of brain regions such as the hippocampus. This activation results in the amplification of γ-rhythms, which increases motor activity.
Transcranial magnetic stimulation – In some cases (for example, studies of motor cortex), this technique can be analyzed as having a stimulatory effect (rather than as a functional lesion).
Optogenetic excitation – A light activated excitatory protein is expressed in select cells. Channelrhodopsin-2 (ChR2), a light activated cation channel, was the first bacterial opsin shown to excite neurons in response to light, though a number of new excitatory optogenetic tools have now been generated by improving and imparting novel properties to ChR2.
=== Measuring neural activity ===
Optical techniques – Optical methods for recording neuronal activity rely on methods that modify the optical properties of neurons in response to the cellular events associated with action potentials or neurotransmitter release.
Voltage-sensitive dyes (VSDs) were among the earliest methods for optically detecting neuronal activity. VSDs change their fluorescent properties in response to a voltage change across the neuron's membrane, rendering sub-threshold and supra-threshold (action potential) electrical activity detectable. Genetically encoded voltage-sensitive fluorescent proteins have also been developed.
Calcium imaging relies on dyes or genetically encoded proteins that fluoresce upon binding to the calcium that is transiently present during an action potential.
Synapto-pHluorin is a technique that relies on a fusion protein that combines a synaptic vesicle membrane protein and a pH sensitive fluorescent protein. Upon synaptic vesicle release, the chimeric protein is exposed to the higher pH of the synaptic cleft, causing a measurable change in fluorescence.
Single-unit recording – A method whereby an electrode is introduced into the brain of a living animal to detect electrical activity that is generated by the neurons adjacent to the electrode tip. Normally this is performed with sedated animals but sometimes it is performed on awake animals engaged in a behavioral event, such as a thirsty rat whisking a particular sandpaper grade previously paired with water in order to measure the corresponding patterns of neuronal firing at the decision point.
Multielectrode recording – The use of a bundle of fine electrodes to record the simultaneous activity of up to hundreds of neurons.
Functional magnetic resonance imaging – fMRI, a technique most frequently applied on human subjects, in which changes in cerebral blood flow can be detected in an MRI apparatus and are taken to indicate relative activity of larger scale brain regions (i.e., on the order of hundreds of thousands of neurons).
Positron emission tomography – PET detects particles called photons using a 3-D nuclear medicine examination. These photons are emitted by injected radioisotopes such as fluorine-18. PET imaging reveals pathological processes which predict anatomic changes, making it important for detecting, diagnosing, and characterising many pathologies.
Electroencephalography – EEG, and the derivative technique of event-related potentials, in which scalp electrodes monitor the average activity of neurons in the cortex (again, used most frequently with human subjects). This technique uses different types of electrodes for recording systems such as needle electrodes and saline-based electrodes. EEG allows for the investigation of mental disorders, sleep disorders and physiology. It can monitor brain development and cognitive engagement.
Functional neuroanatomy – A more complex counterpart of phrenology. The expression of some anatomical marker is taken to reflect neural activity. For example, the expression of immediate early genes is thought to be caused by vigorous neural activity. Likewise, the injection of 2-deoxyglucose prior to some behavioral task can be followed by anatomical localization of that chemical; it is taken up by neurons that are electrically active.
Magnetoencephalography – MEG shows the functioning of the human brain through the measurement of electromagnetic activity. Measuring the magnetic fields created by the electric current flowing within the neurons identifies brain activity associated with various human functions in real time, with millimeter spatial accuracy. Clinicians can noninvasively obtain data to help them assess neurological disorders and plan surgical treatments.
=== Genetic techniques ===
QTL mapping – The influence of a gene in some behavior can be statistically inferred by studying inbred strains of some species, most commonly mice. The recent sequencing of the genome of many species, most notably mice, has facilitated this technique.
Selective breeding – Organisms, often mice, may be bred selectively among inbred strains to create a recombinant congenic strain. This might be done to isolate an experimentally interesting stretch of DNA derived from one strain on the background genome of another strain to allow stronger inferences about the role of that stretch of DNA.
Genetic engineering – The genome may also be experimentally manipulated; for example, knockout mice can be engineered to lack a particular gene, or a gene may be expressed in a strain that does not normally do so (a 'transgenic'). Advanced techniques may also permit the expression or suppression of a gene to be controlled by injection of a regulating chemical.
=== Quantifying behavior ===
Markerless pose estimation – The advancement of computer vision techniques in recent years has allowed precise quantification of animal movements without the need to fit physical markers onto the subject. From high-speed video captured in a behavioral assay, keypoints on the subject can be extracted frame by frame, which is often useful to analyze in tandem with neural recordings or manipulations. Analyses can examine how keypoints (i.e., parts of the animal) move within different phases of a particular behavior (on a short timescale) or throughout an animal's behavioral repertoire (on a longer timescale). These keypoint changes can be compared with corresponding changes in neural activity. A machine learning approach can also be used to identify specific behaviors (e.g., forward walking, turning, grooming, courtship) and quantify the dynamics of transitions between behaviors.
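As a rough illustration of this kind of analysis, the sketch below (Python, NumPy only) computes per-frame keypoint speeds from a hypothetical array of tracked coordinates and labels frames as moving or still with a simple threshold. The array contents, frame rate, and threshold are illustrative assumptions and are not tied to any particular tracking package.

```python
import numpy as np

# Hypothetical tracked keypoints: (n_frames, n_keypoints, 2) pixel coordinates
# from a high-speed behavioral video; random-walk values stand in for real data.
rng = np.random.default_rng(0)
keypoints = np.cumsum(rng.normal(0, 1.0, size=(3000, 8, 2)), axis=0)

fps = 200.0  # assumed camera frame rate (frames per second)

# Per-frame displacement of each keypoint, converted to speed (pixels/s).
disp = np.diff(keypoints, axis=0)              # (n_frames-1, n_keypoints, 2)
speed = np.linalg.norm(disp, axis=2) * fps     # (n_frames-1, n_keypoints)

# Averaging across keypoints gives a crude whole-animal movement trace.
movement = speed.mean(axis=1)

# Threshold the trace to label each frame as "moving" or "still";
# the median threshold is arbitrary and would normally be tuned per assay.
moving = movement > np.median(movement)

# Bout transitions (still -> moving or moving -> still) can then be aligned
# with simultaneously recorded neural activity.
transitions = np.flatnonzero(np.diff(moving.astype(int)) != 0)
print(f"{moving.mean():.1%} of frames labeled moving, {transitions.size} transitions")
```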
== Other research methods ==
Computational models – Using computers to formulate and simulate real-world problems in order to develop solutions. Although this approach originated in computer science, it has spread to other areas of study, including psychology. Computational models allow researchers in psychology to enhance their understanding of the functions and development of nervous systems. Methods include the modelling of neurons, networks, and brain systems, as well as theoretical analysis. Computational methods serve a wide variety of roles, including clarifying experiments, testing hypotheses, and generating new insights. These techniques play an increasing role in the advancement of biological psychology.
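For instance, one of the simplest neuron models used in this kind of computational work is the leaky integrate-and-fire unit. The sketch below simulates a single such neuron under a constant input current; all parameter values are illustrative assumptions, not taken from any particular study.

```python
import numpy as np

# Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau
# All parameters below are illustrative, not tied to any specific study.
tau, R = 20e-3, 1e7                                   # time constant (s), resistance (ohm)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3    # volts
dt, t_max = 1e-4, 0.5                                 # time step and duration (s)
I = 2e-9                                              # constant input current (A)

t = np.arange(0.0, t_max, dt)
v = np.full(t.size, v_rest)
spikes = []

for i in range(1, t.size):
    # Euler integration of the membrane equation.
    dv = (-(v[i - 1] - v_rest) + R * I) * (dt / tau)
    v[i] = v[i - 1] + dv
    if v[i] >= v_thresh:       # threshold crossing emits a spike
        spikes.append(t[i])
        v[i] = v_reset         # membrane potential resets after the spike

print(f"{len(spikes)} spikes in {t_max} s "
      f"(rate ~ {len(spikes) / t_max:.1f} Hz)")
```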
=== Limitations and advantages ===
Different manipulations have advantages and limitations. Neural tissue destroyed as a primary consequence of a surgery, electric shock or neurotoxin can confound the results so that the physical trauma masks changes in the fundamental neurophysiological processes of interest.
For example, when an electrolytic probe is used to create a purposeful lesion in a distinct region of the rat brain, surrounding tissue can be affected; consequently, a change in behavior exhibited by the experimental group after surgery is to some degree a result of damage to surrounding neural tissue rather than of the lesion of the distinct brain region. Most genetic manipulation techniques are also considered permanent. Temporary lesions can be achieved with advances in genetic manipulation; for example, certain genes can now be switched on and off with diet. Pharmacological manipulations also allow certain neurotransmitters to be blocked temporarily, with function returning to its previous state after the drug has been metabolized.
== Topic areas ==
In general, behavioral neuroscientists study various neuronal and biological processes underlying behavior, though limited by the need to use nonhuman animals. As a result, the bulk of literature in behavioral neuroscience deals with experiences and mental processes that are shared across different animal models such as:
Sensation and perception
Motivated behavior (hunger, thirst, sex)
Control of movement
Learning and memory
Sleep and biological rhythms
Emotion
However, with increasing technical sophistication and with the development of more precise noninvasive methods that can be applied to human subjects, behavioral neuroscientists are beginning to contribute to other classical topic areas of psychology, philosophy, and linguistics, such as:
Language
Reasoning and decision making
Consciousness
Behavioral neuroscience has also had a strong history of contributing to the understanding of medical disorders, including those that fall under the purview of clinical psychology and biological psychopathology (also known as abnormal psychology). Although animal models do not exist for all mental illnesses, the field has contributed important therapeutic data on a variety of conditions, including:
Parkinson's disease, a degenerative disorder of the central nervous system that often impairs motor skills and speech.
Huntington's disease, a rare inherited neurological disorder whose most obvious symptoms are abnormal body movements and a lack of coordination. It also affects a number of mental abilities and some aspects of personality.
Alzheimer's disease, a neurodegenerative disease that, in its most common form, is found in people over the age of 65 and is characterized by progressive cognitive deterioration, together with declining activities of daily living and by neuropsychiatric symptoms or behavioral changes.
Clinical depression, a common psychiatric disorder, characterized by a persistent lowering of mood, loss of interest in usual activities and diminished ability to experience pleasure.
Schizophrenia, a psychiatric diagnosis that describes a mental illness characterized by impairments in the perception or expression of reality, most commonly manifesting as auditory hallucinations, paranoid or bizarre delusions or disorganized speech and thinking in the context of significant social or occupational dysfunction.
Autism, a brain development disorder that impairs social interaction and communication, and causes restricted and repetitive behavior, all starting before a child is three years old.
Anxiety, a physiological state characterized by cognitive, somatic, emotional, and behavioral components. These components combine to create the feelings that are typically recognized as fear, apprehension, or worry.
Drug abuse, including alcoholism.
=== Research on topic areas ===
==== Cognition ====
Behavioral neuroscientists conduct research on various cognitive processes through the use of different neuroimaging techniques. Examples of cognitive research might involve the examination of neural correlates during emotional information processing, such as one study that analyzed the relationship between subjective affect and neural reactivity during sustained processing of positive (savoring) and negative (rumination) emotion. The aim of the study was to analyze whether repetitive positive thinking (seen as beneficial) and repetitive negative thinking (significantly related to worse mental health) have similar underlying neural mechanisms. Researchers found that the individuals who had more intense positive affect during savoring were also the individuals who had more intense negative affect during rumination. fMRI data showed similar activations in brain regions during both rumination and savoring, suggesting shared neural mechanisms between the two types of repetitive thinking. Together, the results suggest that sustained emotional processing of positive and negative information is similar both subjectively and mechanistically, relying on shared neural mechanisms.
==== Stress ====
Research within the field of behavioral neuroscience involves looking at the complex neuroanatomy underlying different emotional processes, such as stress. Godoy et al. (2018) did so by providing an in-depth analysis of the neurobiological underpinnings of the stress response. The article features an overview of the historical development of stress research and its importance for research on both physical and psychological stressors today. The authors explored various indicators of stress and their corresponding neuroanatomical processing, along with the temporal dynamics of acute and chronic stress and their effects on the brain. Overall, the article provides a comprehensive scientific overview of stress through a neurobiological lens, highlighting the importance of current knowledge in stress-related research areas.
== Awards ==
Nobel Laureates
The following Nobel Prize winners could reasonably be considered behavioral neuroscientists or neurobiologists. (This list omits winners who were almost exclusively neuroanatomists or neurophysiologists, i.e., those who did not measure behavioral or psychological variables.)
Kavli Prize in Neuroscience
Ann Graybiel (1942)
Cornelia Bargmann (1961)
Winfried Denk (1957)
== See also ==
== References ==
== External links ==
Biological Psychology Links
Theory of Biological Psychology (Documents No. 9 and 10 in English)
IBRO (International Brain Research Organization)
The neuroscience of sleep is the study of the neuroscientific and physiological basis of the nature of sleep and its functions. Traditionally, sleep has been studied as part of psychology and medicine. The study of sleep from a neuroscience perspective grew to prominence with advances in technology and the proliferation of neuroscience research from the second half of the twentieth century.
The importance of sleep is demonstrated by the fact that organisms daily spend hours of their time in sleep, and that sleep deprivation can have disastrous effects ultimately leading to death in animals. For a phenomenon so important, the purposes and mechanisms of sleep are only partially understood, so much so that as recently as the late 1990s it was quipped: "The only known function of sleep is to cure sleepiness". However, the development of improved imaging techniques like EEG, PET and fMRI, along with faster computers have led to an increasingly greater understanding of the mechanisms underlying sleep.
The fundamental questions in the neuroscientific study of sleep are:
What are the correlates of sleep, i.e., what is the minimal set of events that could confirm that the organism is sleeping?
How is sleep triggered and regulated by the brain and the nervous system?
What happens in the brain during sleep?
How can we understand sleep function based on physiological changes in the brain?
What causes various sleep disorders and how can they be treated?
Other areas of modern neuroscience sleep research include the evolution of sleep, sleep during development and aging, animal sleep, mechanism of effects of drugs on sleep, dreams and nightmares, and stages of arousal between sleep and wakefulness.
== Introduction ==
Rapid eye movement sleep (REM), non-rapid eye movement sleep (NREM or non-REM), and waking represent the three major modes of consciousness, neural activity, and physiological regulation. NREM sleep itself is divided into multiple stages: N1, N2 and N3. Sleep proceeds in 90-minute cycles of REM and NREM, the order normally being N1 → N2 → N3 → N2 → REM. As humans fall asleep, body activity slows down. Body temperature, heart rate, breathing rate, and energy use all decrease. Brain waves slow down. The excitatory neurotransmitter acetylcholine becomes less available in the brain. Humans often maneuver to create a thermally friendly environment—for example, by curling up into a ball if cold. Reflexes remain fairly active.
REM sleep is considered closer to wakefulness and is characterized by rapid eye movement and muscle atonia. NREM is considered to be deep sleep (the deepest part of NREM is called slow wave sleep), and is characterized by lack of prominent eye movement, or muscle paralysis. Especially during non-REM sleep, the brain uses significantly less energy during sleep than it does in waking. In areas with reduced activity, the brain restores its supply of adenosine triphosphate (ATP), the molecule used for short-term storage and transport of energy. (Since in quiet waking the brain is responsible for 20% of the body's energy use, this reduction has an independently noticeable impact on overall energy consumption.) During slow-wave sleep, humans secrete bursts of growth hormone. All sleep, even during the day, is associated with the secretion of prolactin.
According to the Hobson & McCarley activation-synthesis hypothesis, proposed in 1975–1977, the alternation between REM and non-REM can be explained in terms of cycling, reciprocally influential neurotransmitter systems. Sleep timing is controlled by the circadian clock, and in humans, to some extent by willed behavior. The term circadian comes from the Latin circa, meaning "around" (or "approximately"), and diem or dies, meaning "day". The circadian clock refers to a biological mechanism that governs multiple biological processes causing them to display an endogenous, entrainable oscillation of about 24 hours. These rhythms have been widely observed in plants, animals, fungi and cyanobacteria.
== Correlates of sleep ==
One of the important questions in sleep research is clearly defining the sleep state. This problem arises because sleep was traditionally defined as a state of consciousness and not as a physiological state, thus there was no clear definition of what minimum set of events constitute sleep and distinguish it from other states of partial or no consciousness. The problem of making such a definition is complicated because it needs to include a variety of modes of sleep found across different species.
At a symptomatic level, sleep is characterized by lack of reactivity to sensory inputs, low motor output, diminished conscious awareness, and rapid reversibility to wakefulness. However, translating these into a biological definition is difficult because no single pathway in the brain is responsible for the generation and regulation of sleep. One of the earliest proposals was to define sleep as the deactivation of the cerebral cortex and the thalamus, because of the near lack of response to sensory inputs during sleep. However, this was invalidated because both regions are active in some phases of sleep. In fact, the thalamus appears to be deactivated only in the sense that it stops relaying sensory information to the cortex.
Some of the other observations about sleep included decrease of sympathetic activity and increase of parasympathetic activity in non-REM sleep, and increase of heart rate and blood pressure accompanied by decrease in homeostatic response and muscle tone during REM sleep. However, these symptoms are not limited to sleep situations and do not map to specific physiological definitions.
More recently, the problem of definition has been addressed by observing overall brain activity in the form of characteristic EEG patterns. Each stage of sleep and wakefulness has a characteristic EEG pattern which can be used to identify the stage of sleep. Waking is usually characterized by beta (12–30 Hz) and gamma (25–100 Hz) activity, depending on whether the activity is calm or stressful. The onset of sleep involves a slowing of this frequency to the drowsiness of alpha (8–12 Hz) and finally to the theta (4–10 Hz) of Stage 1 NREM sleep. This frequency decreases progressively further through the higher stages of NREM and REM sleep. On the other hand, the amplitude of sleep waves is lowest during wakefulness (10–30 μV) and shows a progressive increase through the various stages of sleep. Stage 2 is characterized by sleep spindles (intermittent clusters of waves at the sigma frequency, i.e., 12–14 Hz) and K complexes (a sharp upward deflection followed by a slower downward deflection). Stage 3 sleep has more sleep spindles. Stage 3 also has very high amplitude delta waves (0–4 Hz) and is known as slow wave sleep. REM sleep is characterized by low amplitude, mixed frequency waves. A sawtooth wave pattern is often present.
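As a hedged sketch of how such EEG frequency content is quantified in practice, the code below estimates the relative power in each canonical band from one 30-second epoch using Welch's method (SciPy). The sampling rate, the exact band edges, and the synthetic alpha-dominated signal are assumptions for illustration; real sleep scoring also relies on waveform morphology (spindles, K complexes), not band power alone.

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                       # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)     # one 30-second epoch
rng = np.random.default_rng(1)

# Synthetic stand-in for an EEG epoch: a dominant alpha rhythm plus noise.
eeg = 30e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * rng.normal(size=t.size)

# Power spectral density via Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

# Approximate bands as used in the text (Hz); boundaries vary between sources.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 14), "beta": (12, 30), "gamma": (30, 100)}

total = np.trapz(psd, freqs)
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    rel = np.trapz(psd[mask], freqs[mask]) / total
    print(f"{name:>5}: {rel:.1%} of total power")
```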
== Ontogeny and phylogeny of sleep ==
The questions of how sleep evolved in the animal kingdom and how it developed in humans are especially important because they might provide a clue to the functions and mechanisms of sleep respectively.
=== Sleep evolution ===
The evolution of different types of sleep patterns is influenced by a number of selective pressures, including body size, relative metabolic rate, predation, type and location of food sources, and immune function. Sleep (especially deep SWS and REM) is risky behavior because it steeply increases the risk of predation. This means that, for sleep to have evolved, its functions should have provided a substantial advantage over the risk it entails. In fact, studying sleep in different organisms shows how they have balanced this risk by evolving partial sleep mechanisms or by having protective habitats. Thus, studying the evolution of sleep might give a clue not only to developmental aspects and mechanisms, but also to an adaptive justification for sleep.
One challenge in studying sleep evolution is that adequate sleep information is known only for two phyla of animals – Chordata and Arthropoda. With the available data, comparative studies have been used to determine how sleep might have evolved. One question that scientists try to answer through these studies is whether sleep evolved only once or multiple times. To understand this, they look at sleep patterns in different classes of animals whose evolutionary histories are fairly well known and study their similarities and differences.
Humans possess both slow wave and REM sleep; in both phases both eyes are closed and both hemispheres of the brain are involved. Sleep has also been recorded in mammals other than humans. One study showed that echidnas possess only slow wave sleep (non-REM). This seemed to indicate that REM sleep appeared in evolution only with the therians. However, this has since been contested by studies claiming that sleep in the echidna combines both modes into a single sleeping state. Other studies have shown a peculiar form of sleep in odontocetes (such as dolphins and porpoises), called unihemispheric slow wave sleep (USWS). At any time during this sleep mode, the EEG of one brain hemisphere indicates sleep while that of the other is equivalent to wakefulness. In some cases, the corresponding eye is open. This might allow the animal to reduce predation risk and to sleep while swimming in water, though the animal may also be capable of sleeping at rest.
The correlates of sleep found for mammals are valid for birds as well, i.e., bird sleep is very similar to that of mammals and involves both SWS and REM sleep with similar features, including closure of both eyes, lowered muscle tone, etc. However, the proportion of REM sleep in birds is much lower. Also, some birds can sleep with one eye open if there is high predation risk in the environment. This raises the possibility of sleep in flight; considering that sleep is very important and that some bird species can fly for weeks continuously, this seems plausible. However, sleep in flight has not been recorded and is so far unsupported by EEG data. Further research may explain whether birds sleep during flight or whether there are other mechanisms that ensure they remain healthy during long flights in the absence of sleep.
Unlike in birds, very few consistent features of sleep have been found among reptile species. The only common observation is that reptiles do not have REM sleep.
Sleep in some invertebrates has also been extensively studied, e.g., sleep in fruit flies (Drosophila) and honeybees. Some of the mechanisms of sleep in these animals have been discovered, while others remain quite obscure. The features defining sleep have been identified for the most part and, as in mammals, these include reduced reaction to sensory input, lack of motor response in the form of antennal immobility, etc.
The fact that both forms of sleep are found in mammals and birds, but not in reptiles (which are considered an intermediate stage), suggests that sleep might have evolved separately in both. If substantiated, this could be followed by further research on whether the EEG correlates of sleep are involved in its functions or are merely a feature. This might further help in understanding the role of sleep in long term plasticity.
According to Tsoukalas (2012), REM sleep is an evolutionary transformation of a well-known defensive mechanism, the tonic immobility reflex. This reflex, also known as animal hypnosis or death feigning, functions as the last line of defense against an attacking predator and consists of the total immobilization of the animal: the animal appears dead (cf. "playing possum"). The neurophysiology and phenomenology of this reaction show striking similarities to REM sleep, a fact which betrays a deep evolutionary kinship. For example, both reactions exhibit brainstem control, paralysis, sympathetic activation, and thermoregulatory changes. This theory integrates many earlier findings into a unified, and evolutionary well informed, framework.
=== Sleep development and aging ===
The ontogeny of sleep is the study of sleep across different age groups of a species, particularly during development and aging. Among mammals, infants sleep the longest. Human babies have 8 hours of REM sleep and 8 hours of NREM sleep on average. The percentage of time spent in each mode of sleep varies greatly in the first few weeks of development, and some studies have correlated this with the degree of precociality of the child. Within a few months of postnatal development, there is a marked reduction in the percentage of hours spent in REM sleep. By the time the child becomes an adult, he or she spends about 6–7 hours in NREM sleep and only about an hour in REM sleep. This is true not only of humans, but of many animals dependent on their parents for food. The observation that the percentage of REM sleep is very high in the first stages of development has led to the hypothesis that REM sleep might facilitate early brain development. However, this theory has been contested by other studies.
Sleep behavior undergoes substantial changes during adolescence. Some of these changes may be societal in humans, but other changes are hormonal. Another important change is the decrease in the number of hours of sleep compared to childhood, which gradually becomes identical to that of an adult. It has also been speculated that homeostatic regulation mechanisms may be altered during adolescence. Apart from this, the effect of changing routines of adolescents on other behavior, such as cognition and attention, is yet to be studied. Ohayon et al., for example, have stated that the decline in total sleep time from childhood to adolescence seems to be more associated with environmental factors than with biological features.
In adulthood, studies of sleep architecture have shown that sleep latency and the time spent in NREM stages 1 and 2 may increase with aging, while the time spent in REM and SWS sleep seems to decrease. These changes have frequently been associated with brain atrophy, cognitive impairment, and neurodegenerative disorders in old age. For instance, Backhaus et al. have pointed out that a decline in declarative memory consolidation in midlife (in their experiment: 48 to 55 years old) is due to a lower amount of SWS, which might already start to decrease around the age of 30. According to Mander et al., atrophy of the gray matter in the medial prefrontal cortex (mPFC) is a predictor of disruption in slow activity during NREM sleep that may impair memory consolidation in older adults. Sleep disturbances, such as excessive daytime sleepiness and nighttime insomnia, have also often been cited as risk factors for progressive functional impairment in Alzheimer's disease (AD) and Parkinson's disease (PD).
Therefore, sleep in aging is another equally important area of research. A common observation is that many older adults spend time awake in bed after sleep onset, unable to fall back asleep, and experience a marked decrease in sleep efficiency. There may also be changes in circadian rhythms. Studies are ongoing into what causes these changes and how they may be reduced to ensure comfortable sleep in older adults.
== Brain activity during sleep ==
Understanding the activity of different parts of the brain during sleep can give a clue to the functions of sleep. It has been observed that mental activity is present during all stages of sleep, though from different regions in the brain. So, contrary to popular understanding, the brain never completely shuts down during sleep. Also, sleep intensity of a particular region is homeostatically related to the corresponding amount of activity before sleeping. The use of imaging modalities like PET, fMRI and MEG, combined with EEG recordings, gives a clue to which brain regions participate in creating the characteristic wave signals and what their functions might be.
=== Historical development of the stages model ===
The stages of sleep were first described in 1937 by Alfred Lee Loomis and his coworkers, who separated the different electroencephalography (EEG) features of sleep into five levels (A to E), representing the spectrum from wakefulness to deep sleep. In 1953, REM sleep was discovered as distinct, and thus William C. Dement and Nathaniel Kleitman reclassified sleep into four NREM stages and REM. The staging criteria were standardized in 1968 by Allan Rechtschaffen and Anthony Kales in the "R&K sleep scoring manual."
In the R&K standard, NREM sleep was divided into four stages, with slow-wave sleep comprising stages 3 and 4. In stage 3, delta waves made up less than 50% of the total wave patterns, while they made up more than 50% in stage 4. Furthermore, REM sleep was sometimes referred to as stage 5. In 2004, the AASM commissioned the AASM Visual Scoring Task Force to review the R&K scoring system. The review resulted in several changes, the most significant being the combination of stages 3 and 4 into Stage N3. The revised scoring was published in 2007 as The AASM Manual for the Scoring of Sleep and Associated Events. Arousals, respiratory, cardiac, and movement events were also added.
=== NREM sleep activity ===
NREM sleep is characterized by decreased global and regional cerebral blood flow. It constitutes ~80% of all sleep in adult humans. Initially, it was expected that the brainstem, which is implicated in arousal, would be inactive, but its apparent inactivity was later found to be due to the low resolution of PET studies, and it was shown that there is some slow wave activity in the brainstem as well. However, other parts of the brain, including the precuneus, basal forebrain, and basal ganglia, are deactivated during sleep. Many areas of the cortex are also inactive, but to different degrees; for example, the ventromedial prefrontal cortex is considered the least active area, while the primary cortex is the least deactivated.
NREM sleep is characterized by slow oscillations, spindles and delta waves. The slow oscillations have been shown to be from the cortex, as lesions in other parts of the brain do not affect them, but lesions in the cortex do. The delta waves have been shown to be generated by recurrent connections within the cerebral cortex. During slow wave sleep, the cortex generates brief periods of activity and inactivity at 0.5–4 Hz, resulting in the generation of the delta waves of slow wave sleep. During this period, the thalamus stops relaying sensory information to the brain, however it continues to produce signals, such as spindle waves, that are sent to its cortical projections. Sleep spindles of slow wave sleep are generated as an interaction of the thalamic reticular nucleus with thalamic relay neurons. The sleep spindles have been predicted to play a role in disconnecting the cortex from sensory input and allowing entry of calcium ions into cells, thus potentially playing a role in plasticity.
==== NREM 1 ====
NREM Stage 1 (N1 – light sleep, somnolence, drowsy sleep – 5–10% of total sleep in adults): This is a stage of sleep that usually occurs between sleep and wakefulness, and sometimes occurs between periods of deeper sleep and periods of REM. The muscles are active, and the eyes roll slowly, opening and closing moderately. The brain transitions from alpha waves having a frequency of 8–13 Hz (common in the awake state) to theta waves having a frequency of 4–7 Hz. Sudden twitches and hypnic jerks, also known as positive myoclonus, may be associated with the onset of sleep during N1. Some people may also experience hypnagogic hallucinations during this stage. During Non-REM1, humans lose some muscle tone and most conscious awareness of the external environment.
==== NREM 2 ====
NREM Stage 2 (N2 – 45–55% of total sleep in adults): In this stage, theta activity is observed and sleepers become gradually harder to awaken; the alpha waves of the previous stage are interrupted by abrupt activity called sleep spindles (or thalamocortical spindles) and K-complexes. Sleep spindles range from 11 to 16 Hz (most commonly 12–14 Hz). During this stage, muscular activity as measured by EMG decreases, and conscious awareness of the external environment disappears.
==== NREM 3 ====
NREM Stage 3 (N3 – 15–25% of total sleep in adults): Formerly divided into stages 3 and 4, this stage is called slow-wave sleep (SWS) or deep sleep. SWS is initiated in the preoptic area and consists of delta activity, high amplitude waves at less than 3.5 Hz. The sleeper is less responsive to the environment; many environmental stimuli no longer produce any reactions. Slow-wave sleep is thought to be the most restful form of sleep, the phase which most relieves subjective feelings of sleepiness and restores the body.
This stage is characterized by the presence of a minimum of 20% delta waves ranging from 0.5–2 Hz and having a peak-to-peak amplitude greater than 75 μV. (EEG standards define delta waves as 0 to 4 Hz, but sleep-scoring standards, in both the original R&K model and the 2007 AASM guidelines, use a range of 0.5–2 Hz.) This is the stage in which parasomnias such as night terrors, nocturnal enuresis, sleepwalking, and somniloquy occur. Many illustrations and descriptions still show a stage N3 with 20–50% delta waves and a stage N4 with greater than 50% delta waves; these have been combined as stage N3.
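A minimal sketch (assuming Python with NumPy/SciPy) of how the delta-wave criterion above could be approximated on a single 30-second epoch: the signal is band-pass filtered into the 0.5–2 Hz range and short windows whose peak-to-peak amplitude exceeds 75 μV are counted. This is an illustrative approximation, not an implementation of the AASM visual scoring rules.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def delta_fraction(eeg, fs, lo=0.5, hi=2.0, p2p_volts=75e-6, win_s=1.0):
    """Approximate fraction of an epoch occupied by high-amplitude delta waves."""
    # Band-pass into the delta range used for sleep scoring (0.5-2 Hz).
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    delta = sosfiltfilt(sos, eeg)
    # Split the epoch into short windows and measure peak-to-peak amplitude.
    win = int(win_s * fs)
    n_win = len(delta) // win
    windows = delta[: n_win * win].reshape(n_win, win)
    p2p = windows.max(axis=1) - windows.min(axis=1)
    return np.mean(p2p > p2p_volts)

# Synthetic 30-second epoch: a 1 Hz wave with ~120 uV peak-to-peak plus noise.
fs = 256.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
epoch = 60e-6 * np.sin(2 * np.pi * 1.0 * t) + 10e-6 * rng.normal(size=t.size)

frac = delta_fraction(epoch, fs)
print(f"delta fraction ~ {frac:.0%} -> "
      f"{'meets' if frac >= 0.2 else 'below'} the 20% criterion")
```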
=== REM sleep activity ===
REM Stage (REM Sleep – 20–25% of total sleep in adults): REM sleep is where most muscles are paralyzed and heart rate, breathing, and body temperature become unregulated. REM sleep is turned on by acetylcholine secretion and is inhibited by neurons that secrete monoamines including serotonin. REM is also referred to as paradoxical sleep because the sleeper, although exhibiting high-frequency EEG waves similar to a waking state, is harder to arouse than at any other sleep stage. Vital signs indicate arousal, and oxygen consumption by the brain is higher than when the sleeper is awake. REM sleep is characterized by high global cerebral blood flow, comparable to wakefulness. In fact, many areas of the cortex have been recorded to have more blood flow during REM sleep than even during wakefulness; these include the hippocampus, temporal-occipital areas, some parts of the cortex, and the basal forebrain. The limbic and paralimbic systems, including the amygdala, are other active regions during REM sleep. Though brain activity during REM sleep appears very similar to wakefulness, the main difference between REM and wakefulness is that arousal in REM is more effectively inhibited. This, along with the virtual silence of monoaminergic neurons in the brain, may be said to characterize REM.
A newborn baby spends 8 to 9 hours a day just in REM sleep. By the age of five or so, only slightly over two hours is spent in REM. The function of REM sleep is uncertain but a lack of it impairs the ability to learn complex tasks. Functional paralysis from muscular atonia in REM may be necessary to protect organisms from self-damage through physically acting out scenes from the often-vivid dreams that occur during this stage.
In EEG recordings, REM sleep is characterized by high frequency, low amplitude activity and spontaneous occurrence of beta and gamma waves. The best candidates for generation of these fast frequency waves are fast rhythmic bursting neurons in corticothalamic circuits. Unlike in slow wave sleep, the fast frequency rhythms are synchronized over restricted areas in specific local circuits between thalamocortical and neocortical areas. These are said to be generated by cholinergic processes from brainstem structures.
Apart from this, the amygdala plays a role in REM sleep modulation, supporting the hypothesis that REM sleep allows internal information processing. The high amygdalar activity may also cause the emotional responses during dreams. Similarly, the bizarreness of dreams may be due to the decreased activity of prefrontal regions, which are involved in integrating information as well as episodic memory.
=== Ponto-geniculo-occipital waves ===
REM sleep is also related to the firing of ponto-geniculo-occipital waves (also called phasic activity or PGO waves) and activity in the cholinergic ascending arousal system. PGO waves have been recorded in the lateral geniculate nucleus and occipital cortex during the pre-REM period and are thought to represent dream content. The greater signal-to-noise ratio in the LG cortical channel suggests that visual imagery in dreams may appear before full development of REM sleep, but this has not yet been confirmed. PGO waves may also play a role in development and structural maturation of brain, as well as long term potentiation in immature animals, based on the fact that there is high PGO activity during sleep in the developmental brain.
=== Network reactivation ===
The other form of activity during sleep is reactivation. Some electrophysiological studies have shown that neuronal activity patterns found during a learning task before sleep are reactivated in the brain during sleep. This, along with the coincidence of active areas with areas responsible for memory have led to the theory that sleep might have some memory consolidation functions. In this relation, some studies have shown that after a sequential motor task, the pre-motor and visual cortex areas involved are most active during REM sleep, but not during NREM. Similarly, the hippocampal areas involved in spatial learning tasks are reactivated in NREM sleep, but not in REM. Such studies suggest a role of sleep in consolidation of specific memory types. It is, however, still unclear whether other types of memory are also consolidated by these mechanisms.
=== Hippocampal neocortical dialog ===
The hippocampal neocortical dialog refers to the very structured interactions during SWS between groups of neurons called ensembles in the hippocampus and neocortex. Sharp wave patterns (SPW) dominate the hippocampus during SWS and neuron populations in the hippocampus participate in organized bursts during this phase. This is done in synchrony with state changes in the cortex (DOWN/UP state) and coordinated by the slow oscillations in cortex. These observations, coupled with the knowledge that the hippocampus plays a role in short to medium term memory whereas the cortex plays a role in long-term memory, have led to the hypothesis that the hippocampal neocortical dialog might be a mechanism through which the hippocampus transfers information to the cortex. Thus, the hippocampal neocortical dialog is said to play a role in memory consolidation.
== Sleep regulation ==
Sleep regulation refers to the control of when an organism transitions between sleep and wakefulness. The key questions here are to identify which parts of the brain are involved in sleep onset and what their mechanisms of action are. In humans and most animals, sleep and wakefulness seem to follow an electronic flip-flop model, i.e., both states are stable but the intermediate states are not. Of course, unlike a flip-flop, in the case of sleep there seems to be a timer ticking away from the moment of waking, so that after a certain period one must sleep; in that case even waking becomes an unstable state. The reverse may also be true to a lesser extent.
=== Sleep onset ===
Sleep onset is affected by lesions: lesions in the preoptic area and anterior hypothalamus lead to insomnia, while lesions in the posterior hypothalamus lead to sleepiness. This was further narrowed down to show that the central midbrain tegmentum is the region that plays a role in cortical activation. Thus, sleep onset seems to arise from activation of the anterior hypothalamus along with inhibition of the posterior regions and the central midbrain tegmentum. Further research has shown that the hypothalamic region called the ventrolateral preoptic nucleus produces the inhibitory neurotransmitter GABA, which inhibits the arousal system during sleep onset.
=== Models of sleep regulation ===
Sleep is regulated by two parallel mechanisms, homeostatic regulation and circadian regulation, controlled by the hypothalamus and the suprachiasmatic nucleus (SCN), respectively. Although the exact nature of sleep drive is unknown, homeostatic pressure builds up during wakefulness and this continues until the person goes to sleep. Adenosine is thought to play a critical role in this and many people have proposed that the pressure build-up is partially due to adenosine accumulation. However, some researchers have shown that accumulation alone does not explain this phenomenon completely. The circadian rhythm is a 24-hour cycle in the body, which has been shown to continue even in the absence of environmental cues. This is caused by projections from the SCN to the brain stem.
This two-process model was first proposed in 1982 by Borbély, who called the two components Process S (homeostatic) and Process C (circadian), respectively. He showed how slow wave density increases through the night and then drops off at the beginning of the day, while the circadian rhythm is like a sinusoid. He proposed that the pressure to sleep is at its maximum when the difference between the two is highest.
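The qualitative shape of the two processes can be sketched numerically. In the illustration below, Process S rises exponentially toward an upper bound during wakefulness and decays during sleep, Process C is a 24-hour sinusoid, and the drive to sleep is taken as the gap between them. The time constants, schedule, and amplitudes are assumed values chosen only to reproduce the general shape of the model, not Borbély's fitted parameters.

```python
import numpy as np

# Illustrative parameters (assumed, not Borbely's fitted values).
tau_rise, tau_decay = 18.0, 4.0      # hours; buildup during wake, decay in sleep
period, amplitude = 24.0, 0.3        # circadian period (h) and amplitude
dt = 0.1                             # time step (h)
hours = np.arange(0, 48, dt)         # simulate two days

# Assume a simple schedule: 16 h awake, then 8 h asleep, each day.
awake = (hours % 24) < 16

S = np.zeros(hours.size)             # Process S: homeostatic sleep pressure
S[0] = 0.5

for i in range(1, hours.size):
    if awake[i]:
        # exponential rise toward 1 during wakefulness
        S[i] = 1 + (S[i - 1] - 1) * np.exp(-dt / tau_rise)
    else:
        # exponential decay toward 0 during sleep
        S[i] = S[i - 1] * np.exp(-dt / tau_decay)

# Process C: a simple 24-hour sinusoid.
C = amplitude * np.sin(2 * np.pi * hours / period)

# In Borbely's formulation the drive to sleep tracks the gap between S and C.
sleep_pressure = S - C
peak = hours[np.argmax(sleep_pressure[: int(24 / dt)])]
print(f"peak sleep pressure on day 1 at ~{peak:.1f} h after waking")
```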
In 1993, a different model called the opponent process model was proposed. This model holds that the two processes oppose each other to produce sleep, in contrast to Borbély's model. According to this model, the SCN, which is involved in the circadian rhythm, enhances wakefulness and opposes the homeostatic rhythm. In opposition is the homeostatic rhythm, regulated via a complex multisynaptic pathway in the hypothalamus that acts like a switch and shuts off the arousal system. Together, the two effects produce a see-saw like alternation of sleep and wakefulness. More recently, it has been proposed that both models have some validity, while newer theories hold that inhibition of NREM sleep by REM could also play a role. In any case, the two-process mechanism adds flexibility to the simple circadian rhythm and could have evolved as an adaptive measure.
=== Thalamic regulation ===
Much of the brain activity in sleep has been attributed to the thalamus and it appears that the thalamus may play a critical role in SWS. The two primary oscillations in slow wave sleep, delta and the slow oscillation, can be generated by both the thalamus and the cortex. However, sleep spindles can only be generated by the thalamus, making its role very important. The thalamic pacemaker hypothesis holds that these oscillations are generated by the thalamus but the synchronization of several groups of thalamic neurons firing simultaneously depends on the thalamic interaction with the cortex. The thalamus also plays a critical role in sleep onset when it changes from tonic to phasic mode, thus acting like a mirror for both central and decentral elements and linking distant parts of the cortex to co-ordinate their activity.
=== Ascending reticular activating system ===
The ascending reticular activating system consists of a set of neural subsystems that project from various thalamic nuclei and a number of dopaminergic, noradrenergic, serotonergic, histaminergic, cholinergic, and glutamatergic brain nuclei. When awake, it receives all kinds of non-specific sensory information and relays them to the cortex. It also modulates fight or flight responses and is hence linked to the motor system. During sleep onset, it acts via two pathways: a cholinergic pathway that projects to the cortex via the thalamus and a set of monoaminergic pathways that projects to the cortex via the hypothalamus. During NREM sleep this system is inhibited by GABAergic neurons in the ventrolateral preoptic area and parafacial zone, as well as other sleep-promoting neurons in distinct brain regions.
== Sleep function ==
Sleep deprivation studies show that sleep is particularly important to normal brain function. Sleep is needed to remove reactive oxygen species produced by oxidative stress (and, more generally, for autophagy) and to repair DNA. REM sleep also decreases the concentration of noradrenaline, which in excess causes cells to undergo apoptosis.
It is likely that sleep evolved to fulfill some primeval function and took on multiple functions over time (analogous to the larynx, which controls the passage of food and air, but descended over time to develop speech capabilities).
The multiple hypotheses proposed to explain the function of sleep reflect the incomplete understanding of the subject. While some functions of sleep are known, others have been proposed but not completely substantiated or understood. Some of the early ideas about sleep function were based on the fact that most (if not all) external activity is stopped during sleep. Initially, it was thought that sleep was simply a mechanism for the body to "take a break" and reduce wear. Later observations of the low metabolic rates in the brain during sleep seemed to indicate some metabolic functions of sleep. This theory is not fully adequate, as sleep only decreases metabolism by about 5–10%. With the development of EEG, it was found that the brain has almost continuous internal activity during sleep, leading to the idea that its function could be the reorganization or specification of neuronal circuits or the strengthening of connections. These hypotheses are still being explored. Other proposed functions of sleep include maintaining hormonal balance, temperature regulation, and maintaining heart rate.
According to a recent sleep disruption and insomnia review study, there are short-term and long-term negative consequences on healthy individuals. The short term consequences include increased stress responsivity and psychosocial issues such as impaired cognitive or academic performance and depression. Experiments indicated that, in healthy children and adults, episodes of fragmented sleep or insomnia increased sympathetic activation, which can disrupt mood and cognition. The long term consequences include metabolic issues such as glucose homeostasis disruption and even tumor formation and increased risks of cancer.
=== Preservation ===
The "Preservation and Protection" theory holds that sleep serves an adaptive function. It protects the animal during that portion of the 24-hour day in which being awake, and hence roaming around, would place the individual at greatest risk. Organisms do not require 24 hours to feed themselves and meet other necessities. From this perspective of adaptation, organisms are safer by staying out of harm's way, where potentially they could be prey to other, stronger organisms. They sleep at times that maximize their safety, given their physical capacities and their habitats.
This theory fails to explain why the brain disengages from the external environment during normal sleep. However, the brain consumes a large proportion of the body's energy at any one time and preservation of energy could only occur by limiting its sensory inputs. Another argument against the theory is that sleep is not simply a passive consequence of removing the animal from the environment, but is a "drive"; animals alter their behaviors in order to obtain sleep.
Therefore, circadian regulation is more than sufficient to explain periods of activity and quiescence that are adaptive to an organism, but the more peculiar specializations of sleep probably serve different and unknown functions. Moreover, the preservation theory needs to explain why carnivores like lions, which are on top of the food chain and thus have little to fear, sleep the most. It has been suggested that they need to minimize energy expenditure when not hunting.
=== Waste clearance from the brain ===
During sleep, metabolic waste products, such as immunoglobulins, protein fragments or intact proteins like beta-amyloid, may be cleared from the interstitium via a glymphatic system of lymph-like channels coursing along perivascular spaces and the astrocyte network of the brain. According to this model, hollow tubes between the blood vessels and astrocytes act like a spillway allowing drainage of cerebrospinal fluid carrying wastes out of the brain into systemic blood. Such mechanisms, which remain under preliminary research as of 2017, indicate potential ways in which sleep is a regulated maintenance period for brain immune functions and clearance of beta-amyloid, a risk factor for Alzheimer's disease.
=== Restoration ===
Wound healing has been shown to be affected by sleep.
It has been shown that sleep deprivation affects the immune system. It is now possible to state that "sleep loss impairs immune function and immune challenge alters sleep," and it has been suggested that sleep increases white blood cell counts. A 2014 study found that depriving mice of sleep increased cancer growth and dampened the immune system's ability to control cancers.
The effect of sleep duration on somatic growth is not completely known. One study recorded growth, height, and weight, as correlated to parent-reported time in bed in 305 children over a period of nine years (age 1–10). It was found that "the variation of sleep duration among children does not seem to have an effect on growth." It is well established that slow-wave sleep affects growth hormone levels in adult men. During eight hours' sleep, Van Cauter, Leproult, and Plat found that the men with a high percentage of SWS (average 24%) also had high growth hormone secretion, while subjects with a low percentage of SWS (average 9%) had low growth hormone secretion.
There is some supporting evidence of the restorative function of sleep. The sleeping brain has been shown to remove metabolic waste products at a faster rate than during an awake state. While awake, metabolism generates reactive oxygen species, which are damaging to cells. In sleep, metabolic rates decrease and reactive oxygen species generation is reduced allowing restorative processes to take over. It is theorized that sleep helps facilitate the synthesis of molecules that help repair and protect the brain from these harmful elements generated during waking. The metabolic phase during sleep is anabolic; anabolic hormones such as growth hormones (as mentioned above) are secreted preferentially during sleep.
Energy conservation could as well have been accomplished by resting quietly without shutting the organism off from the environment, a potentially dangerous situation. A sedentary non-sleeping animal is more likely to survive predators, while still preserving energy. Sleep, therefore, seems to serve another purpose, or other purposes, than simply conserving energy. Another potential purpose of sleep could be to restore the signal strength of synapses that are activated while awake to a "baseline" level, weakening unnecessary connections in order to better facilitate learning and memory functions the next day; this means the brain is forgetting some of the things we learn each day.
=== Entropy reduction ===
This theory is related to the restorative role of sleep but is distinct enough in that it deals with a very specific quantity: entropy. In a very simplified way, wakefulness can be associated with increased disorder in the nervous system, and this disorder can threaten the high degree of order that is needed for proper function of the nervous system. Entropy is related to order and disorder, but it is not necessarily the same thing. Cortical activity gets progressively disrupted during wakefulness, and sleep restores the levels of cortical activity to close to criticality. Signal noise affects many aspects of the central nervous system. Understanding the relationship between wakefulness and entropy can be approached from the field of statistical mechanics. At a substratum level, interactions with the environment increase the number of possible microstates of the nervous system, and this leads to an increase in entropy.
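The statistical-mechanics framing mentioned above can be summarized with Boltzmann's relation between entropy and the number of accessible microstates; applying it to neural states, as indicated below, is the theory's own interpretive assumption rather than an established result.

```latex
% Boltzmann's entropy formula: S grows with the number of microstates \Omega.
S = k_B \ln \Omega
% In the entropy-reduction view, waking interactions with the environment are
% taken to increase \Omega (and hence S), while sleep restores a lower-entropy,
% near-critical level of cortical activity.
```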
The reduction in entropy can also be approached from the perspective of classical and non-equilibrium thermodynamics. The central nervous system uses a disproportionate amount of the available energy supply. Most of the energy usage of the nervous system is devoted to electric neuronal activity and synaptic processes. Energy is used in large amounts by the Na+/K+-ATPase pump to move sodium and potassium in the generation of action potentials; this process is highly efficient, but entropy is still generated.
=== Endocrine function ===
The secretion of many hormones is affected by sleep-wake cycles. For example, melatonin, a hormonal timekeeper, is considered a strongly circadian hormone, whose secretion increases in dim light and peaks during nocturnal sleep, diminishing with bright light to the eyes. In some organisms melatonin secretion depends on sleep, but in humans it is independent of sleep and depends only on light level. Of course, in humans as well as other animals, such a hormone may facilitate coordination of sleep onset. Similarly, cortisol and thyroid stimulating hormone (TSH) are strongly circadian and diurnal hormones, mostly independent of sleep. In contrast, other hormones like growth hormone (GH) and prolactin are critically sleep-dependent and are suppressed in the absence of sleep. GH shows its greatest increase during SWS, while prolactin is secreted early after sleep onset and rises through the night. For some hormones whose secretion is controlled by light level, sleep seems to increase secretion. In almost all cases, sleep deprivation has detrimental effects. For example, cortisol, which is essential for metabolism (it is so important that animals can die within a week of its deficiency) and affects the ability to withstand noxious stimuli, is increased by waking and during REM sleep. Similarly, TSH increases during nocturnal sleep and decreases with prolonged periods of reduced sleep, but increases during total acute sleep deprivation.
Because hormones play a major role in energy balance and metabolism, and sleep plays a critical role in the timing and amplitude of their secretion, sleep has a sizable effect on metabolism. This could explain some of the early theories of sleep function that predicted that sleep has a metabolic regulation role.
=== Memory processing ===
According to Plihal & Born, sleep generally increases recall of previous learning and experiences, and its benefit depends on the phase of sleep and the type of memory. For example, studies based on declarative and procedural memory tasks applied over early and late nocturnal sleep, as well as wakefulness-controlled conditions, have shown that declarative memory improves more during early sleep (dominated by SWS), while procedural memory improves more during late sleep (dominated by REM sleep).
Regarding declarative memory, the functional role of SWS has been associated with hippocampal replay of previously encoded neural patterns that seems to facilitate long-term memory consolidation. This assumption is based on the active system consolidation hypothesis, which states that repeated reactivations of newly encoded information in the hippocampus during the slow oscillations of NREM sleep mediate the stabilization and gradual integration of declarative memory with pre-existing knowledge networks at the cortical level. It assumes the hippocampus might hold information only temporarily and at a fast learning rate, whereas the neocortex is related to long-term storage and a slow learning rate. This dialogue between hippocampus and neocortex occurs in parallel with hippocampal sharp-wave ripples and thalamo-cortical spindles, a synchrony that drives the formation of spindle-ripple events, which seem to be a prerequisite for the formation of long-term memories.
Reactivation of memory also occurs during wakefulness, and its function is associated with updating the reactivated memory with newly encoded information, whereas reactivations during SWS are presented as crucial for memory stabilization. Based on targeted memory reactivation (TMR) experiments that use associated memory cues to trigger memory traces during sleep, several studies have reaffirmed the importance of nocturnal reactivations for the formation of persistent memories in neocortical networks, as well as highlighting the possibility of increasing people's memory performance in declarative recall.
Furthermore, nocturnal reactivation seems to share the same neural oscillatory patterns as reactivation during wakefulness, processes which might be coordinated by theta activity. During wakefulness, theta oscillations have often been related to successful performance in memory tasks, and cued memory reactivations during sleep have shown that theta activity is significantly stronger in subsequent recognition of cued stimuli compared to uncued ones, possibly indicating a strengthening of memory traces and lexical integration through cueing during sleep. However, the beneficial effect of TMR for memory consolidation seems to occur only if the cued memories can be related to prior knowledge.
Other studies have also looked at the specific effects of different stages of sleep on different types of memory. For example, it has been found that sleep deprivation does not significantly affect recognition of faces, but can produce a significant impairment of temporal memory (discriminating which face belonged to which set shown). Sleep deprivation was also found to increase beliefs of being correct, especially if they were wrong. Another study reported that performance on free recall of a list of nouns is significantly worse when sleep deprived (an average of 2.8 ± 2 words) compared to having a normal night of sleep (4.7 ± 4 words). These results reinforce the role of sleep in declarative memory formation. This has been further confirmed by observations of low metabolic activity in the prefrontal cortex and in the temporal and parietal lobes for temporal learning and verbal learning tasks, respectively. Data analysis has also shown that neural assemblies during SWS correlated significantly more with templates than during waking hours or REM sleep. Also, post-learning, post-SWS reverberations lasted 48 hours, much longer than the duration of novel object learning (1 hour), indicating long term potentiation.
Moreover, other observations underscore the importance of napping: performance in some kinds of tasks improves after a 1-hour afternoon nap, and studies of the performance of shift workers show that an equal number of hours of sleep during the day is not equivalent to the same amount at night. Current research studies look at the molecular and physiological basis of memory consolidation during sleep. These, along with studies of genes that may play a role in this phenomenon, together promise to give a more complete picture of the role of sleep in memory.
=== Renormalizing the synaptic strength ===
Sleep can also serve to weaken synaptic connections that were acquired over the course of the day but which are not essential to optimal functioning. In doing so, resource demands can be lessened, since the upkeep and strengthening of synaptic connections constitute a large portion of the brain's energy consumption and tax other cellular mechanisms such as protein synthesis for new channels. Without such a mechanism taking place during sleep, the metabolic needs of the brain would increase with repeated exposure to daily synaptic strengthening, up to a point where the strains become excessive or untenable.
=== Behavior change with sleep deprivation ===
One approach to understanding the role of sleep is to study the deprivation of it. Sleep deprivation is common and sometimes even necessary in modern societies because of occupational and domestic reasons like round-the-clock service, security or media coverage, cross-time-zone projects etc. This makes understanding the effects of sleep deprivation very important.
Many studies have been done from the early 1900s to document the effect of sleep deprivation. The study of REM deprivation began with William C. Dement around 1960. He conducted a sleep and dream research project on eight subjects, all male. For a span of up to 7 days, he deprived the participants of REM sleep by waking them each time they started to enter the stage. He monitored this with small electrodes attached to their scalp and temples. As the study went on, he noticed that the more he deprived the men of REM sleep, the more often he had to wake them. Afterwards, they showed more REM sleep than usual, later named REM rebound.
The neurobehavioral basis of these effects has been studied only recently. Sleep deprivation has been strongly correlated with an increased probability of accidents and industrial errors. Many studies have shown a slowing of metabolic activity in the brain with many hours of sleep debt. Some studies have also shown that the attention network in the brain is particularly affected by lack of sleep, and though some of the effects on attention may be masked by alternate activities (like standing or walking) or caffeine consumption, attention deficits cannot be completely avoided.
Sleep deprivation has been shown to have a detrimental effect on cognitive tasks, especially those involving divergent functions or multitasking. It also affects mood and emotion, and there have been multiple reports of an increased tendency for rage, fear or depression with sleep debt. However, some higher cognitive functions seem to remain unaffected, albeit slowed. Many of these effects vary from person to person: some individuals show marked cognitive impairment with lack of sleep, while others are minimally affected. The exact mechanisms are still unknown, and the neural pathways and cellular mechanisms of sleep debt are still being researched.
== Sleep disorders ==
A sleep disorder, or somnipathy, is a medical disorder of the sleep patterns of a person or animal. Polysomnography is a test commonly used for diagnosing some sleep disorders. Sleep disorders are broadly classified into dyssomnias, parasomnias, circadian rhythm sleep disorders (CRSD), and other disorders including ones caused by medical or psychological conditions and sleeping sickness. Some common sleep disorders include insomnia (chronic inability to sleep), sleep apnea (abnormally low breathing during sleep), narcolepsy (excessive sleepiness at inappropriate times), cataplexy (sudden and transient loss of muscle tone), and sleeping sickness (disruption of sleep cycle due to infection). Other disorders that are being studied include sleepwalking, sleep terror and bed wetting.
Studying sleep disorders is particularly useful as it gives some clues as to which parts of the brain may be involved in the modified function. This is done by comparing the imaging and histological patterns in normal and affected subjects. Treatment of sleep disorders typically involves behavioral and psychotherapeutic methods though other techniques may also be used. The choice of treatment methodology for a specific patient depends on the patient's diagnosis, medical and psychiatric history, and preferences, as well as the expertise of the treating clinician. Often, behavioral or psychotherapeutic and pharmacological approaches are compatible and can effectively be combined to maximize therapeutic benefits.
Sleep disorders have also frequently been associated with neurodegenerative diseases, mainly those characterized by abnormal accumulation of alpha-synuclein, such as multiple system atrophy (MSA), Parkinson's disease (PD) and Lewy body disease (LBD). For instance, people diagnosed with PD often present several kinds of sleep problems, most commonly insomnia (around 70% of the PD population), hypersomnia (more than 50% of the PD population), and REM sleep behavior disorder (RBD), which may affect around 40% of the PD population and is associated with increased motor symptoms. Furthermore, RBD has also been highlighted as a strong precursor of the future development of these neurodegenerative diseases, often appearing several years in advance, which may offer an opportunity for earlier intervention and improved treatment.
Sleep disturbances have also been observed in Alzheimer's disease (AD), affecting about 45% of its population; when based on caregiver reports, this percentage is even higher, about 70%. As in the PD population, insomnia and hypersomnia are frequently recognized in AD patients and are associated with accumulation of beta-amyloid, circadian rhythm sleep disorders (CRSD) and melatonin alteration. Additionally, changes in sleep architecture are observed in AD. Although sleep architecture changes naturally with ageing, in AD patients these changes are aggravated: SWS is decreased (sometimes totally absent), spindles and time spent in REM sleep are reduced, and REM latency is increased. Poor sleep onset in AD has also been associated with dream-related hallucinations, increased restlessness, wandering and agitation, which seem related to sundowning, a chronobiological phenomenon typical of the disease.
Neurodegenerative conditions are commonly related to impairment of brain structures, which may disrupt the states of sleep and wakefulness, circadian rhythm, and motor and non-motor functioning. Conversely, sleep disturbances frequently worsen a patient's cognitive functioning, emotional state and quality of life. Furthermore, these abnormal behavioural symptoms add to the burden on relatives and caregivers. A deeper understanding of the relationship between sleep disorders and neurodegenerative diseases therefore seems extremely important, especially considering the limited research in this area and increasing life expectancy.
A related field is sleep medicine, which involves the diagnosis and treatment of sleep disorders and of sleep deprivation, a major cause of accidents. Diagnosis draws on a variety of methods including polysomnography, sleep diaries and the multiple sleep latency test. Similarly, treatment may be behavioral, such as cognitive behavioral therapy, or may include pharmacological medication or bright light therapy.
== Dreaming ==
Dreams are successions of images, ideas, emotions, and sensations that occur involuntarily in the mind during certain stages of sleep (mainly the REM stage). The content and purpose of dreams are not yet clearly understood though various theories have been proposed. The scientific study of dreams is called oneirology.
There are many theories about the neurological basis of dreaming. This includes the activation synthesis theory—the theory that dreams result from brain stem activation during REM sleep; the continual activation theory—the theory that dreaming is a result of activation and synthesis but dreams and REM sleep are controlled by different structures in the brain; and dreams as excitations of long-term memory—a theory which claims that long-term memory excitations are prevalent during waking hours as well but are usually controlled and become apparent only during sleep.
There are multiple theories about dream function as well. Some studies claim that dreams strengthen semantic memories. This is based on the role of hippocampal neocortical dialog and general connections between sleep and memory. One study surmises that dreams erase junk data in the brain. Emotional adaptation and mood regulation are other proposed functions of dreaming.
From an evolutionary standpoint, dreams might simulate and rehearse threatening events that were common in the organism's ancestral environment, thereby increasing a person's ability to tackle everyday problems and challenges in the present. On this view, such threatening events may have been passed on in the form of genetic memories. This theory accords well with the claim that REM sleep is an evolutionary transformation of a well-known defensive mechanism, the tonic immobility reflex.
Most theories of dream function appear to conflict, but it is possible that many short-term dream functions act together to achieve a larger long-term function. Evidence for none of these theories is entirely conclusive.
The incorporation of waking memory events into dreams is another area of active research and some researchers have tried to link it to the declarative memory consolidation functions of dreaming.
A related area of research is the neuroscientific basis of nightmares. Many studies have confirmed a high prevalence of nightmares, and some have correlated them with high stress levels. Multiple models of nightmare production have been proposed, including neo-Freudian models as well as the image contextualization model, the boundary thickness model and the threat simulation model. Neurotransmitter imbalance has been proposed as a cause of nightmares, as has affective network dysfunction, a model which holds that nightmares are a product of dysfunction in circuitry normally involved in dreaming. As with dreaming, none of the models has yielded conclusive results, and research into these questions continues.
== See also ==
NPSR mutations
== References == | Wikipedia/Neuroscience_of_sleep |
In modal logic and the philosophy of language, a term is said to be a rigid designator or absolute substantial term when it designates (picks out, denotes, refers to) the same thing in all possible worlds in which that thing exists. A designator is persistently rigid if it also designates nothing in all other possible worlds. A designator is obstinately rigid if it designates the same thing in every possible world, period, whether or not that thing exists in that world. Rigid designators are contrasted with connotative terms, non-rigid or flaccid designators, which may designate different things in different possible worlds.
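These distinctions can be stated more formally. The sketch below is an illustrative formalization only: the denotation function den(d, w) and the existence predicate E! are notational conventions introduced here for clarity, not drawn from the article's sources.

```latex
% d is a rigid designator: one object x is picked out at
% every world in which x exists
\exists x\, \forall w\, \bigl( E!_w x \rightarrow \mathrm{den}(d,w) = x \bigr)

% d is persistently rigid: in addition, d denotes nothing
% at worlds where x does not exist
\exists x\, \forall w\, \bigl( (E!_w x \rightarrow \mathrm{den}(d,w) = x)
      \land (\lnot E!_w x \rightarrow \mathrm{den}(d,w)\ \text{is undefined}) \bigr)

% d is obstinately rigid: d denotes x at every world,
% whether or not x exists there
\exists x\, \forall w\, \bigl( \mathrm{den}(d,w) = x \bigr)
```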
== History ==
The Scholastic philosophers in the Middle Ages developed a theory of properties of terms in which different classifications of concepts feature prominently.
Concepts, and the terms that signify them, can be divided into absolute or connotative, according to the mode in which they signify. If they signify something absolutely, that is, after the manner of substance, they are absolute, for example rock, lion, man, whiteness, wisdom, tallness. If they signify something connotatively, that is, with reference to a subject of inherence, i.e., after the manner of accidents, they are connotative, for example, white, wise, tall.
Both connotative and absolute concepts can be used to signify accidents, but since connotative concepts signify with reference to a subject of inherence, they can refer to objects with different definitions and properties (i.e. with different essences). For example, large, as a connotative concept, can signify objects with many distinct essences: a man, a lion, a triangle can be large.
On the other hand, absolute concepts signify objects that have the same definitions and properties. For example, the concept of gold, as an absolute concept, can signify only objects with the same definitions and properties (i.e. with the same essence).
== Proper names and definite descriptions ==
The notion of absolute concepts was then revived by Saul Kripke, with the name “rigid designation”, in the lectures that became Naming and Necessity, in the course of his argument against descriptivist theories of reference, building on the work of Ruth Barcan Marcus. At the time of Kripke's lectures, the dominant theory of reference in analytic philosophy (associated with the theories of Gottlob Frege and Bertrand Russell) was that the meaning of sentences involving proper names could be given by substituting a contextually appropriate description for the name. Russell, for example, famously held that someone who had never met Otto von Bismarck might know of him as the first Chancellor of the German Empire, and if so, his statement that (say) "Bismarck was a ruthless politician" should be understood to mean "The first Chancellor of the German Empire was a ruthless politician" (which could in turn be analysed into a series of more basic statements according to the method Russell introduced in his theory of definite descriptions). Kripke argued—against both the Russellian analysis and several attempted refinements of it—that such descriptions could not possibly mean the same thing as the name "Bismarck," on the grounds that proper names such as "Bismarck" always designate rigidly, whereas descriptions such as "the first Chancellor of the German Empire" do not. Thus, for example, it might have been the case that Bismarck died in infancy. If so, he would not have ever satisfied the description "the first Chancellor of the German Empire," and (indeed) someone else probably would have. It does not follow that the first Chancellor of the German Empire may not have been the first Chancellor of the German Empire—that is (at least according to its surface-structure) a contradiction. Kripke argues that the way that proper names work is that when we make statements about what might or might not have been true of Bismarck, we are talking about what might or might not have been true of that particular person in various situations, whereas when we make statements about what might or might not have been true of, say, the first Chancellor of the German Empire we could be talking about what might or might not have been true of whoever would have happened to fill that office in those situations.
The "could" here is important to note: rigid designation is a property of the way terms are used, not a property of the terms themselves, and some philosophers, following Keith Donnellan, have argued that a phrase such as "the first Chancellor of the German Empire" could be used rigidly, in sentences such as "the first Chancellor of the German Empire could have decided never to go into politics." Kripke himself doubted that there was any need to recognize rigid uses of definite descriptions, and argued that Russell's notion of scope offered all that was needed to account for such sentences. But in either case, Kripke argued, nothing important in his account depends on the question. Whether definite descriptions can be used rigidly or not, they can at least sometimes be used non-rigidly, but a proper name can only be used rigidly; the asymmetry, Kripke argues, demonstrates that no definite description could give the meaning of a proper name—although it might be used to explain who a name refers to (that is, to "fix the referent" of the name).
== Essentialism ==
In Naming and Necessity, Kripke argues that proper names and certain natural kind terms, including biological taxa and types of natural substances (most famously, "water" and "H2O"), designate rigidly. He argues for a form of scientific essentialism not unlike Aristotelian essentialism. Essential properties are common to an object in all possible worlds, and so they pick out the same objects in all possible worlds; that is, they rigidly designate.
== Causal-historical theory of reference ==
Proper names rigidly designate for reasons that differ from those of natural kind terms. The reason 'Johnny Depp' refers to one particular person in all possible worlds is that some person initially gave him the name by saying something like "Let's call our baby 'Johnny Depp'". This is called the initial baptism. This use of 'Johnny Depp' to refer to one particular baby was then passed from person to person in a long causal and historical chain of events, which is why everybody calls Johnny Depp 'Johnny Depp': Johnny's mother passed the name on to her friends, who passed it on to their friends, and so on.
== Necessary identities ==
One puzzling consequence of Kripke semantics is that identities involving rigid designators are necessary. If water is H2O, then water is necessarily H2O. Since the terms 'water' and 'H2O' pick out the same object in every possible world, there is no possible world in which 'water' picks out something different from 'H2O'. Therefore, water is necessarily H2O. It is possible, of course, that we are mistaken about the chemical composition of water, but that does not affect the necessity of identities. The claim is not that we know unconditionally that water is necessarily H2O, but the conditional claim that if water is H2O (whether or not we know this, the fact is unchanged if it is true), then water is necessarily H2O.
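The reasoning behind such necessary identities follows the standard argument for the necessity of identity associated with Ruth Barcan Marcus and Kripke. The lines below are a minimal reconstruction of that argument, not a quotation of the article's sources.

```latex
% 1. Necessity of self-identity
\forall x\, \Box (x = x)

% 2. Leibniz's law (substitutivity of identicals), applied to the
%    property "being necessarily identical with x"
\forall x\, \forall y\, \bigl( x = y \rightarrow (\Box(x = x) \rightarrow \Box(x = y)) \bigr)

% 3. From 1 and 2: whatever are identical are necessarily identical
\forall x\, \forall y\, \bigl( x = y \rightarrow \Box(x = y) \bigr)

% Instantiating with the rigid designators "water" and "H2O":
\text{water} = \mathrm{H_2O} \;\rightarrow\; \Box\,(\text{water} = \mathrm{H_2O})
```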
== See also ==
Causal theory of reference
Class versus instance
Counterpart theory
Direct reference theory
Non-rigid designator
Vivid designator
Scientific essentialism
== References == | Wikipedia/Rigid_designator |
Automatic and controlled processes (ACP) are the two categories of cognitive processing. All cognitive processes fall into one or both of those two categories. The amount of "processing power", attention, and effort a process requires is the primary factor used to determine whether it is a controlled or an automatic process. An automatic process is capable of occurring without the need for attention or awareness of the initiation or operation of the process, and without drawing upon general processing resources or interfering with other concurrent thought processes. Put simply, an automatic process is unintentional, involuntary, effortless (not consumptive of limited processing capacity), and occurs outside awareness. A controlled process, by contrast, is one that is under the flexible, intentional control of the individual, of which the individual is consciously aware, and that is effortful and constrained by the amount of attentional resources available at the moment.
== Characteristics ==
=== Automatic processes ===
When examining the label "automatic" in social psychology, we find that some processes are intended, and others require recent conscious and intentional processing of related information. Automatic processes are more complicated than people may think. Some examples of automatic processes include motor skills, implicit biases, procedural tasks, and priming. The tasks that are listed can be done without the need for conscious attention.
That being said, automatic effects fall into three classes: those that occur prior to conscious awareness (preconscious); those that require some form of conscious processing but produce an unintended outcome (postconscious); and those that require a specific type of intentional, goal-directed processing (goal-dependent).
Preconscious automaticity requires only the triggering proximal stimulus event and occurs prior to, or in the absence of, any conscious awareness of that event. Because such processes occur without conscious awareness, they are unnoticeable, uncontrollable, and nearly effortless.
Postconscious automaticity depends on recent conscious experience for its occurrence. This postconscious influence on processing can be defined as the non-conscious consequences of conscious thought. The conscious experience may be intentional or unintentional; what is important is that the material be in awareness. Most things we are aware of are driven by the environment, and one does not intend or control the flood of these perceptual experiences, yet they still result in postconscious effects. In other words, we must consciously engage with something, and depending on the experience we will then unconsciously think or behave in a certain way. In the classic Bobo doll experiment, a child watches a video of an adult acting aggressively towards a Bobo doll; when later placed in a room with the same doll, the child is more likely to act aggressively towards it than children who did not watch the video. In one study, participants were primed with the stereotype of professors by being asked to imagine a typical professor for five minutes and to list (a conscious act) the behaviors, lifestyle, and appearance attributes of this typical professor. After being primed, they performed a general knowledge task; participants in the professor condition outperformed those in the control condition (those not primed at all).
Goal-dependent automaticity concerns skills and thought processes that require a goal to engage them. This process is similar to postconscious automaticity in that it requires conscious awareness to be initiated, but afterwards it can be guided outside of awareness by the unconscious mind. A good example is driving a car: in order to drive, one needs to consciously have the goal of driving somewhere. Once engaged in driving (with enough practice), one can operate the car almost entirely without conscious awareness. However, more attentional control and decision making are needed in novel situations, such as driving through an unfamiliar town. The process must be learned well enough that it can become automatic, requiring little conscious thought about how to do it.
=== Controlled processes ===
One definition of a controlled process is an intentionally initiated sequence of cognitive activities. In other words, when attention is required for a task, we are consciously aware and in control. Controlled processes require us to think about situations, evaluate them and make decisions. An example would be reading this article: we are required to read and understand the concepts of these processes, and it takes effort to think conceptually. Controlled processes are thought to be slower, since by definition they require effortful control; therefore, they generally cannot be conducted simultaneously with other controlled processes without task-switching or impaired performance. The drawback of controlled processes is thus that humans are thought to have a limited capacity for overtly controlling behavior. Being tightly capacity-limited, controlled processing imposes considerable limitations on speed and on the ability to divide attention. Divided attention is the ability to switch between tasks. Some tasks are easier to perform alongside others, like talking and driving; holding a conversation, however, becomes more difficult when traffic increases because of the need to focus more on driving than on talking.
Forster and Lavie found that the ability to focus on a task is influenced by processing capacity and perceptual load. Processing capacity is the amount of incoming information a person can process or handle at one time. Perceptual load is how demanding the task is: a low-load task requires relatively little of the performer's attention, whereas a high-load task requires all of their focus, so that if they become distracted they will not be able to accomplish it.
In one study, participants were randomly assigned to two conditions, one requiring one task (small cognitive load) and one requiring two tasks (heavy cognitive load). In the one-task condition, participants were told that they would hear an anti- or pro-abortion speech and would have to diagnose the speaker's attitude toward abortion. The two-task condition had the same first assignment, but participants were also required to switch places with the speaker and take their role afterwards. Even after being specifically told that they would be given further instructions at the next step, their cognitive load was affected. Participants in the two-task condition performed more poorly than those in the one-task condition simply because they had the next task on their mind (they carried extra cognitive load). In short, the more tasks someone tries to manage at the same time, the more their performance suffers.
=== Processes with ambiguous categorization ===
Some actions use a combination of automatic and controlled processes. One example is brushing your teeth: at any point, you could think about each tooth as you individually scrub it, but for the most part the action is automatic. Another example is playing a musical instrument: after learning where your fingers should be placed and how to play certain notes, you no longer have to think about what your fingers are doing, and your controlled processes are then engaged in thinking about dynamics and intonation. Some processes can even start as controlled and become more automatic. Some cognitive processes are difficult to categorize as distinctly automatic or controlled, either because they contain components of both types of process or because the phenomena are difficult to define or observe. An example of the former is driving a car; an example of the latter is flow.
The process of breathing is both automatic and controlled, and is easily observed.
=== Flow ===
Flow has been described as involving highly focused attention on the task at hand, loss of self-consciousness, and distorted time perception, among other cognitive characteristics. Some people report that during flow states they are less aware of autonomic responses such as hunger, fatigue, and discomfort. Some researchers hypothesize that because of this, some challenging tasks can counterintuitively require less effort to perform.
Flow has been difficult to study, however, because it is difficult to produce in a controlled laboratory setting. Most experiments have relied heavily on correlating the presence of flow with various attributes of the task and the subjects' reported experiences. Among those correlations, subjects experiencing flow generally report that they perceive a good match between the task requirements and their skills (e.g. a professional basketball player in a professional basketball game). Task structure and the clarity of the goal of the task are also thought to be related to when flow occurs. All of these aspects of flow imply that there must be an opportunity to suppress other controlled processes, as well as to inhibit certain types of automatic processes.
A study involving video game performance showed that flow in participants (determined based on a self-report survey of flow characteristics) strongly correlated with performance in the game. A related study attempted to inhibit and induce flow by biasing the moods of participants. The experimenters found that flow could be inhibited by a negative mood, but could not be induced by a positive mood.
"A person does not need to be told to pay attention to a stimulus that captures attention quickly and effortlessly." In many cases, explicitly directing one's own or another's attention is necessary due to the presence of another stimulus that more easily captures attention. In the case of flow, however, an action that would normally grab one's attention is ignored, and many automatic processes are either suppressed (such as stimulus-driven attention changes) or ignored (such as discomfort.)
On the other hand, situations in which autonomy is encroached upon (for example, if the individual must always control his/her actions to abide by rules imposed by the task) are thought to inhibit flow. This implies that another requirement of flow is to be free from constraints that force controlled processes to be used. Additionally, several areas of research indicate that during a state of flow an otherwise controlled process becomes automatic, allowing it to dominate other automatic processes.
== See also ==
Conscious mind
Dual process theory
Modularity of mind
== References ==
=== Further reading ===
Kahneman, Daniel (2013) [2011]. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. ISBN 978-0374533557. | Wikipedia/Automatic_and_controlled_processes |
In psychology and philosophy, theory of mind (often abbreviated to ToM) refers to the capacity to understand other individuals by ascribing mental states to them. A theory of mind includes the understanding that others' beliefs, desires, intentions, emotions, and thoughts may be different from one's own. Possessing a functional theory of mind is crucial for success in everyday human social interactions. People utilize a theory of mind when analyzing, judging, and inferring other people's behaviors.
Theory of mind was first conceptualized by researchers evaluating the presence of theory of mind in animals. Today, theory of mind research also investigates factors affecting theory of mind in humans, such as whether drug and alcohol consumption, language development, cognitive delays, age, and culture can affect a person's capacity to display theory of mind.
It has been proposed that deficits in theory of mind may occur in people with autism, anorexia nervosa, schizophrenia, dysphoria, addiction, and brain damage caused by alcohol's neurotoxicity. Neuroimaging shows that the medial prefrontal cortex (mPFC), the posterior superior temporal sulcus (pSTS), the precuneus, and the amygdala are associated with theory of mind tasks. Patients with frontal lobe or temporoparietal junction lesions find some theory of mind tasks difficult. One's theory of mind develops in childhood as the prefrontal cortex develops.
== Definition ==
The "theory of mind" is described as a theory because the behavior of the other person, such as their statements and expressions, is the only thing being directly observed; no one has direct access to the mind of another, and the existence and nature of the mind must be inferred. It is typically assumed others have minds analogous to one's own; this assumption is based on three reciprocal social interactions, as observed in joint attention, the functional use of language, and the understanding of others' emotions and actions. Theory of mind allows one to attribute thoughts, desires, and intentions to others, to predict or explain their actions, and to posit their intentions. It enables one to understand that mental states can be the cause of—and can be used to explain and predict—the behavior of others. Being able to attribute mental states to others and understanding them as causes of behavior implies, in part, one must be able to conceive of the mind as a "generator of representations". If a person does not have a mature theory of mind, it may be a sign of cognitive or developmental impairment.
Theory of mind appears to be an innate potential ability in humans that requires social and other experience over many years for its full development. Different people may develop more or less effective theories of mind. Neo-Piagetian theories of cognitive development maintain that theory of mind is a byproduct of a broader hypercognitive ability of the human mind to register, monitor, and represent its own functioning.
Empathy—the recognition and understanding of the states of mind of others, including their beliefs, desires, and particularly emotions—is a related concept. Empathy is often characterized as the ability to "put oneself into another's shoes". Recent neuro-ethological studies of animal behavior suggest that rodents may exhibit empathetic abilities. While empathy is known as emotional perspective-taking, theory of mind is defined as cognitive perspective-taking.
Research on theory of mind, in humans and animals, adults and children, normally and atypically developing, has grown rapidly in the years since Premack and Guy Woodruff's 1978 paper, "Does the chimpanzee have a theory of mind?". The field of social neuroscience has also begun to address this debate by imaging the brains of humans while they perform tasks that require the understanding of an intention, belief, or other mental state in others.
An alternative account of theory of mind is given in operant psychology and provides empirical evidence for a functional account of both perspective-taking and empathy. The most developed operant approach is founded on research on derived relational responding and is subsumed within relational frame theory. Derived relational responding relies on the ability to identify derived relations, or relationships between stimuli that are not directly learned or reinforced; for example, if "snake" is related to "danger" and "danger" is related to "fear", people may know to fear snakes even without learning an explicit connection between snakes and fear. According to this view, empathy and perspective-taking comprise a complex set of derived relational abilities based on learning to discriminate and respond verbally to ever more complex relations between self, others, place, and time, and through established relations.
== Philosophical and psychological roots ==
Discussions of theory of mind have their roots in philosophical debate from the time of René Descartes' Second Meditation, which set the foundations for considering the science of the mind.
Two differing approaches in philosophy for explaining theory of mind are theory-theory and simulation theory. Theory-theory claims that individuals use "theories" grounded in folk psychology to reason about others' minds. According to theory-theory, these folk psychology theories are developed automatically and innately by concepts and rules we have for ourselves, and then instantiated through social interactions. In contrast, simulation-theory argues that individuals simulate the internal states of others to build mental models for their cognitive processes. A basic example of this is someone imagining themselves in the position of another person to infer the other person's thoughts and feelings. Theory of mind is also closely related to person perception and attribution theory from social psychology.
It is common and intuitive to assume that others have minds. People anthropomorphize non-human animals, inanimate objects, and even natural phenomena. Daniel Dennett referred to this tendency as taking an "intentional stance" toward things: we assume they have intentions, to help predict their future behavior. However, there is an important distinction between taking an "intentional stance" toward something and entering a "shared world" with it. The intentional stance is a functional relationship, describing the use of a theory due to its practical utility, rather than the accuracy of its representation of the world. As such, it is something people resort to during interpersonal interactions. A shared world is directly perceived and its existence structures reality itself for the perceiver. It is not just a lens, through which the perceiver views the world; it in many ways constitutes the cognition, as both its object and the blueprint used to structure perception into understanding.
The philosophical roots of another perspective, the relational frame theory (RFT) account of theory of mind, arise from contextual psychology, which refers to the study of organisms (both human and non-human) interacting in and with a historical and current situational context. It is an approach based on contextualism, a philosophy in which any event is interpreted as an ongoing act inseparable from its current and historical context and in which a radically functional approach to truth and meaning is adopted. As a variant of contextualism, RFT focuses on the construction of practical, scientific knowledge. This scientific form of contextual psychology is virtually synonymous with the philosophy of operant psychology.
== Development ==
The study of which animals are capable of attributing knowledge and mental states to others, as well as the development of this ability in human ontogeny and phylogeny, identifies several behavioral precursors to theory of mind. Understanding attention, understanding of others' intentions, and imitative experience with others are hallmarks of a theory of mind that may be observed early in the development of what later becomes a full-fledged theory.
Simon Baron-Cohen proposed that infants' understanding of attention in others acts as a critical precursor to the development of theory of mind. Understanding attention involves understanding that seeing can be directed selectively as attention, that the looker assesses the seen object as "of interest", and that seeing can induce beliefs. A possible illustration of theory of mind in infants is joint attention. Joint attention refers to when two people look at and attend to the same thing. Parents often use the act of pointing to prompt infants to engage in joint attention; understanding this prompt requires that infants take into account another person's mental state and understand that the person notices an object or finds it of interest. Baron-Cohen speculates that the inclination to spontaneously reference an object in the world as of interest via pointing ("protodeclarative pointing"), and to likewise appreciate the directed attention of another, may be the underlying motive behind all human communication.
Understanding others' intentions is another critical precursor to understanding other minds because intentionality is a fundamental feature of mental states and events. The "intentional stance" was defined by Daniel Dennett as an understanding that others' actions are goal-directed and arise from particular beliefs or desires. Both two- and three-year-old children could discriminate whether an experimenter intentionally or accidentally marked a box with stickers. Even earlier in development, Andrew N. Meltzoff found that 18-month-old infants could perform target tasks involving the manipulation of objects that adult experimenters attempted but failed to complete, suggesting the infants could represent the object-manipulating behavior of adults as involving goals and intentions. While attribution of intention and knowledge is investigated in young humans and nonhuman animals to detect precursors to a theory of mind, Gagliardi et al. have pointed out that even adult humans do not always act in a way consistent with an attributional perspective (i.e., based on attribution of knowledge to others). In their experiment, adult human subjects attempted to choose the container baited with a small object from a selection of four containers when guided by confederates who could not see which container was baited.
Research in developmental psychology suggests that an infant's ability to imitate others lies at the origins of both theory of mind and other social-cognitive achievements like perspective-taking and empathy. According to Meltzoff, the infant's innate understanding that others are "like me" allows them to recognize the equivalence between the physical and mental states apparent in others and those felt by the self. For example, the infant uses their own experiences, orienting their head and eyes toward an object of interest to understand the movements of others who turn toward an object; that is, they will generally attend to objects of interest or significance. Some researchers in comparative disciplines have hesitated to put too much weight on imitation as a critical precursor to advanced human social-cognitive skills like mentalizing and empathizing, especially if true imitation is no longer employed by adults. A test of imitation by Alexandra Horowitz found that adult subjects imitated an experimenter demonstrating a novel task far less closely than children did. Horowitz points out that the precise psychological state underlying imitation is unclear and cannot, by itself, be used to draw conclusions about the mental states of humans.
While much research has been done on infants, theory of mind develops continuously throughout childhood and into late adolescence as the synapses in the prefrontal cortex develop. The prefrontal cortex is thought to be involved in planning and decision-making. Children seem to develop theory of mind skills sequentially. The first skill to develop is the ability to recognize that others have diverse desires. Children are able to recognize that others have diverse beliefs soon after. The next skill to develop is recognizing that others have access to different knowledge bases. Finally, children are able to understand that others may have false beliefs and that others are capable of hiding emotions. While this sequence represents the general trend in skill acquisition, more emphasis seems to be placed on some skills in certain cultures, leading the more valued skills to develop before those considered less important. For example, in individualistic cultures such as the United States, a greater emphasis is placed on the ability to recognize that others have different opinions and beliefs. In a collectivistic culture, such as China, this skill may not be as important and therefore may not develop until later.
=== Language ===
There is evidence that the development of theory of mind is closely intertwined with language development in humans. One meta-analysis showed a moderate to strong correlation (r = 0.43) between performance on theory of mind and language tasks. Both language and theory of mind begin to develop around the same time in children (between ages two and five), but many other abilities develop during this same time period as well, and they do not produce such high correlations with one another nor with theory of mind.
Pragmatic theories of communication assume that infants must possess an understanding of beliefs and mental states of others to infer the communicative content that proficient language users intend to convey. Since spoken phrases can have different meanings depending on context, theory of mind can play a crucial role in understanding the intentions of others and inferring the meaning of words. Some empirical results suggest that even 13-month-old infants have an early capacity for communicative mind-reading that enables them to infer what relevant information is transferred between communicative partners, which implies that human language relies at least partially on theory of mind skills.
Carol A. Miller posed further possible explanations for this relationship. Perhaps the extent of verbal communication and conversation involving children in a family could explain theory of mind development. Such language exposure could help introduce a child to the different mental states and perspectives of others. Empirical findings indicate that participation in family discussion predicts scores on theory of mind tasks, and that deaf children who have hearing parents and may not be able to communicate with their parents much during early years of development tend to score lower on theory of mind tasks.
Another explanation of the relationship between language and theory of mind development has to do with a child's understanding of mental-state words such as "think" and "believe". Since a mental state is not something that one can observe from behavior, children must learn the meanings of words denoting mental states from verbal explanations alone, requiring knowledge of the syntactic rules, semantic systems, and pragmatics of a language. Studies have shown that understanding of these mental state words predicts theory of mind in four-year-olds.
A third hypothesis is that the ability to distinguish a whole sentence ("Jimmy thinks the world is flat") from its embedded complement ("the world is flat") and understand that one can be true while the other can be false is related to theory of mind development. Recognizing these complements as being independent of one another is a relatively complex syntactic skill and correlates with increased scores on theory of mind tasks in children.
There is also evidence that the areas of the brain responsible for language and theory of mind are closely connected. The temporoparietal junction (TPJ) is involved in the ability to acquire new vocabulary, as well as to perceive and reproduce words. The TPJ also contains areas that specialize in recognizing faces, voices, and biological motion, and in theory of mind. Since all of these areas are located so closely together, it is reasonable to suspect that they work together. Studies have reported an increase in activity in the TPJ when patients are absorbing information through reading or images regarding other peoples' beliefs but not while observing information about physical control stimuli.
=== Theory of mind in adults ===
Adults have theory of mind concepts that they developed as children (concepts such as belief, desire, knowledge, and intention). They use these concepts to meet the diverse demands of social life, ranging from snap decisions about how to trick an opponent in a competitive game, to keeping up with who knows what in a fast-moving conversation, to judging the guilt or innocence of the accused in a court of law.
Boaz Keysar, Dale Barr, and colleagues found that adults often failed to use their theory of mind abilities to interpret a speaker's message, and acted as if unaware that the speaker lacked critical knowledge about a task. In one study, a confederate instructed adult participants to rearrange objects, some of which were not visible to the confederate, as part of a communication game. Only objects that were visible to both the confederate and the participant were part of the game. Despite knowing that the confederate could not see some of the objects, a third of the participants still tried to move those objects. Other studies show that adults are prone to egocentric biases, with which they are influenced by their own beliefs, knowledge, or preferences when judging those of other people, or that they neglect other people's perspectives entirely. There is also evidence that adults with greater memory, inhibitory capacity, and motivation are more likely to use their theory of mind abilities.
In contrast, evidence about indirect effects of thinking about other people's mental states suggests that adults may sometimes use their theory of mind automatically. Agnes Kovacs and colleagues measured the time it took adults to detect the presence of a ball as it was revealed from behind an occluder. They found that adults' speed of response was influenced by whether another person (the "agent") in the scene thought there was a ball behind the occluder, even though adults were not asked to pay attention to what the agent thought.
Dana Samson and colleagues measured the time it took adults to judge the number of dots on the wall of a room. They found that adults responded more slowly when another person standing in the room happened to see fewer dots than they did, even when they had never been asked to pay attention to what the person could see. It has been questioned whether these "altercentric biases" truly reflect automatic processing of what another person is thinking or seeing or, instead, reflect attention and memory effects cued by the other person, but not involving any representation of what they think or see.
Different theories seek to explain such results. If theory of mind is automatic, this would help explain how people keep up with the theory of mind demands of competitive games and fast-moving conversations. It might also explain evidence that human infants and some non-human species sometimes appear capable of theory of mind, despite their limited resources for memory and cognitive control. If theory of mind is effortful and not automatic, on the other hand, this explains why it feels effortful to decide whether a defendant is guilty or whether a negotiator is bluffing. Economy of effort would help explain why people sometimes neglect to use their theory of mind.
Ian Apperly and Stephen Butterfill suggested that people have "two systems" for theory of mind, in common with "two systems" accounts in many other areas of psychology. In this account, "system 1" is cognitively efficient and enables theory of mind for a limited but useful set of circumstances. "System 2" is cognitively effortful, but enables much more flexible theory of mind abilities. Philosopher Peter Carruthers disagrees, arguing that the same core theory of mind abilities can be used in both simple and complex ways. The account has been criticized by Celia Heyes who suggests that "system 1" theory of mind abilities do not require representation of mental states of other people, and so are better thought of as "sub-mentalizing".
=== Aging ===
In older age, theory of mind capacities decline, irrespective of how exactly they are tested. However, the decline in other cognitive functions is even stronger, suggesting that social cognition is better preserved. In contrast to theory of mind, empathy shows no impairments in aging.
There are two kinds of theory of mind representations: cognitive (concerning mental states, beliefs, thoughts, and intentions) and affective (concerning the emotions of others). Cognitive theory of mind is further separated into first order (e.g., I think she thinks that) and second order (e.g. he thinks that she thinks that). There is evidence that cognitive and affective theory of mind processes are functionally independent from one another. In studies of Alzheimer's disease, which typically occurs in older adults, patients display impairment with second order cognitive theory of mind, but usually not with first order cognitive or affective theory of mind. However, it is difficult to discern a clear pattern of theory of mind variation due to age. There have been many discrepancies in the data collected thus far, likely due to small sample sizes and the use of different tasks that only explore one aspect of theory of mind. Many researchers suggest that theory of mind impairment is simply due to the normal decline in cognitive function.
=== Cultural variations ===
Researchers propose that five key aspects of theory of mind develop sequentially for all children between the ages of three and five: diverse desires, diverse beliefs, knowledge access, false beliefs, and hidden emotions. Australian, American, and European children acquire theory of mind in this exact order, and studies with children in Canada, India, Peru, Samoa, and Thailand indicate that they all pass the false belief task at around the same time, suggesting that children develop theory of mind consistently around the world.
However, children from Iran and China develop theory of mind in a slightly different order. Although they begin the development of theory of mind around the same time, toddlers from these countries understand knowledge access before Western children but take longer to understand diverse beliefs. Researchers believe this swap in the developmental order is related to the culture of collectivism in Iran and China, which emphasizes interdependence and shared knowledge as opposed to the culture of individualism in Western countries, which promotes individuality and accepts differing opinions. Because of these different cultural values, Iranian and Chinese children might take longer to understand that other people have different beliefs and opinions. This suggests that the development of theory of mind is not universal and solely determined by innate brain processes but also influenced by social and cultural factors.
=== Historiography ===
Theory of mind can help historians to more properly understand historical figures' characters, for example Thomas Jefferson. Emancipationists like Douglas L. Wilson and scholars at the Thomas Jefferson Foundation view Jefferson as an opponent of slavery all his life, noting Jefferson's attempts within the limited range of options available to him to undermine slavery, his many attempts at abolition legislation, the manner in which he provided for slaves, and his advocacy of their more humane treatment. This view contrasts with that of revisionists like Paul Finkelman, who criticizes Jefferson for racism, slavery, and hypocrisy. Emancipationist views on this hypocrisy recognize that if he tried to be true to his word, it would have alienated his fellow Virginians. In another example, Franklin D. Roosevelt did not join NAACP leaders in pushing for federal anti-lynching legislation, as he believed that such legislation was unlikely to pass and that his support for it would alienate Southern congressmen, including many of Roosevelt's fellow Democrats.
== Empirical investigation ==
Whether children younger than three or four years old have a theory of mind is a topic of debate among researchers. It is a challenging question, due to the difficulty of assessing what pre-linguistic children understand about others and the world. Tasks used in research into the development of theory of mind must take into account the umwelt of the pre-verbal child.
=== False-belief task ===
One of the most important milestones in theory of mind development is the ability to attribute false belief: in other words, to understand that other people can believe things which are not true. To do this, it is suggested, one must understand how knowledge is formed, that people's beliefs are based on their knowledge, that mental states can differ from reality, and that people's behavior can be predicted by their mental states. Numerous versions of false-belief task have been developed, based on the initial task created by Wimmer and Perner (1983).
In the most common version of the false-belief task (often called the Sally-Anne test), children are told a story about Sally and Anne. Sally has a marble, which she places into her basket, and then leaves the room. While she is out of the room, Anne takes the marble from the basket and puts it into the box. The child being tested is then asked where Sally will look for the marble once she returns. The child passes the task if she answers that Sally will look in the basket, where Sally put the marble; the child fails the task if she answers that Sally will look in the box. To pass the task, the child must be able to understand that another's mental representation of the situation is different from their own, and the child must be able to predict behavior based on that understanding. Another example depicts a boy who leaves chocolate on a shelf and then leaves the room. His mother puts it in the fridge. To pass the task, the child must understand that the boy, upon returning, holds the false belief that his chocolate is still on the shelf.
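The logical structure the child must master can be illustrated with a small sketch that tracks the world state and Sally's belief state separately; the representation and names below are purely illustrative and are not part of any published model.

```python
# Minimal sketch of the false-belief (Sally-Anne) structure:
# reality and Sally's belief are stored separately, and Sally's
# behavior is predicted from her belief, not from reality.

world = {"marble": "basket"}          # where the marble actually is
sally_belief = {"marble": "basket"}   # where Sally last saw it

# Sally leaves the room; Anne moves the marble.
world["marble"] = "box"               # reality changes...
# ...but Sally's belief is not updated, because she did not see the move.

def predict_search(belief):
    """Predict where an agent will look, based on that agent's belief."""
    return belief["marble"]

# A child who passes the task answers from Sally's (false) belief:
assert predict_search(sally_belief) == "basket"
# A child who fails answers from the actual state of the world instead:
assert world["marble"] == "box"
```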
The results of research using false-belief tasks have been called into question. Most typically developing children are able to pass the tasks from around age four. Early studies asserted that 80% of children diagnosed with autism were unable to pass the test, while children with other disabilities, such as Down syndrome, were able to; however, this assertion could not be replicated by later studies, which instead concluded that children fail these tests because of a lack of understanding of extraneous task processes and a basic lack of mental processing capacity.
Adults may also struggle with false beliefs, for instance when they show hindsight bias. In one experiment, adult subjects who were asked for an independent assessment were unable to disregard information on actual outcome. Also in experiments with complicated situations, when assessing others' thinking, adults can fail to correctly disregard certain information that they have been given.
=== Unexpected contents ===
Other tasks have been developed to try to extend the false-belief task. In the "unexpected contents" or "smarties" task, experimenters ask children what they believe to be the contents of a box that looks as though it holds Smarties chocolates. After the child guesses "Smarties", it is shown that the box in fact contains pencils. The experimenter then re-closes the box and asks the child what another person, who has not been shown the true contents of the box, will think is inside. The child passes the task if he/she responds that the other person will think that the box contains Smarties, but fails the task if he/she responds that the other person will think that the box contains pencils. Gopnik & Astington found that children pass this test at age four or five years. However, there is not yet a consensus on the validity of such implicit tests or on the reproducibility of their results.
=== Other tasks ===
The "false-photograph" task also measures theory of mind development. In this task, children must reason about what is represented in a photograph that differs from the current state of affairs. Within the false-photograph task, either a location or identity change exists. In the location-change task, the examiner puts an object in one location (e.g. chocolate in an open green cupboard), whereupon the child takes a Polaroid photograph of the scene. While the photograph is developing, the examiner moves the object to a different location (e.g. a blue cupboard), allowing the child to view the examiner's action. The examiner asks the child two control questions: "When we first took the picture, where was the object?" and "Where is the object now?" The subject is also asked a "false-photograph" question: "Where is the object in the picture?" The child passes the task if he/she correctly identifies the location of the object in the picture and the actual location of the object at the time of the question. However, the last question might be misinterpreted as "Where in this room is the object that the picture depicts?" and therefore some examiners use an alternative phrasing.
To make it easier for animals, young children, and individuals with classical autism to understand and perform theory of mind tasks, researchers have developed tests in which verbal communication is de-emphasized: some whose administration does not involve verbal communication on the part of the examiner, some whose successful completion does not require verbal communication on the part of the subject, and some that meet both of those standards. One category of tasks uses a preferential-looking paradigm, with looking time as the dependent variable. For instance, nine-month-old infants prefer looking at behaviors performed by a human hand over those made by an inanimate hand-like object. Other paradigms look at rates of imitative behavior, the ability to replicate and complete unfinished goal-directed acts, and rates of pretend play.
=== Early precursors ===
Research on the early precursors of theory of mind has developed methods for observing preverbal infants' understanding of other people's mental states, including perception and beliefs. Using a variety of experimental procedures, studies show that infants in their first year of life have an implicit understanding of what other people see and what they know. A popular paradigm for studying infants' theory of mind is the violation-of-expectation procedure, which exploits infants' tendency to look longer at unexpected and surprising events than at familiar and expected events. The amount of time infants look at an event therefore gives researchers an indication of what they might be inferring, or their implicit understanding of events. One study using this paradigm found that 16-month-olds tend to attribute beliefs to a person whose visual perception was previously witnessed as being "reliable", compared to someone whose visual perception was "unreliable". Specifically, 16-month-olds were trained to expect that a person's excited vocalization and gaze into a container would be associated with finding a toy in the reliable-looker condition or with the absence of a toy in the unreliable-looker condition. Following this training phase, infants witnessed, in an object-search task, the same persons searching for a toy in either the correct or the incorrect location after both had witnessed where the toy was hidden. Infants who had experienced the reliable looker were surprised, and therefore looked longer, when the person searched for the toy in the incorrect rather than the correct location. In contrast, looking times for infants who had experienced the unreliable looker did not differ between the two search locations. These findings suggest that 16-month-old infants can differentially attribute beliefs about a toy's location based on a person's prior record of visual perception.
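The looking-time logic of the violation-of-expectation paradigm reduces to a simple contrast between conditions. The sketch below is purely illustrative: the looking times, group sizes, and the use of a between-infant Welch's t-test are hypothetical assumptions, not details of the study described above.

```python
# Illustrative analysis of a violation-of-expectation experiment.
# All numbers are hypothetical; real studies use many more infants
# and often non-parametric or mixed-model analyses.
from scipy import stats

# Looking times (seconds) when the person searches the INCORRECT location.
reliable_looker_condition = [14.2, 11.8, 16.5, 13.1, 15.0, 12.7]   # trained with a reliable looker
unreliable_looker_condition = [9.4, 10.1, 8.8, 11.0, 9.9, 10.6]    # trained with an unreliable looker

# Longer looking in the reliable-looker condition is taken as "surprise",
# i.e. those infants expected the person to search where the toy really was.
t, p = stats.ttest_ind(reliable_looker_condition,
                       unreliable_looker_condition,
                       equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```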
=== Methodological problems ===
With the methods used to test theory of mind, it has been experimentally shown that very simple robots that only react by reflexes and are not built to have any complex cognition at all can pass the tests for having theory of mind abilities that psychology textbooks assume to be exclusive to humans older than four or five years. Whether such a robot passes the test is influenced by completely non-cognitive factors such as placement of objects and the structure of the robot body influencing how the reflexes are conducted. It has therefore been suggested that theory of mind tests may not actually test cognitive abilities.
Furthermore, early research into theory of mind in autistic children is argued to constitute epistemological violence due to implicit or explicit negative and universal conclusions about autistic individuals being drawn from empirical data that viably supports other (non-universal) conclusions.
== Deficits ==
Theory of mind impairment, or mind-blindness, describes a difficulty with perspective-taking. Individuals with theory of mind impairment struggle to see phenomena from any perspective other than their own. Individuals who experience a theory of mind deficit have difficulty determining the intentions of others, lack understanding of how their behavior affects others, and have a difficult time with social reciprocity. Theory of mind deficits have been observed in people with autism spectrum disorders, schizophrenia, and nonverbal learning disorder, as well as in people under the influence of alcohol and narcotics, sleep-deprived people, and people who are experiencing severe emotional or physical pain. Theory of mind deficits have also been observed in deaf children who are late signers (i.e. are born to hearing parents), but such a deficit is due to the delay in language learning, not to any cognitive deficit, and therefore disappears once the child learns sign language.
=== Autism ===
In 1985 Simon Baron-Cohen, Alan M. Leslie, and Uta Frith suggested that children with autism do not employ theory of mind and that autistic children have particular difficulties with tasks requiring the child to understand another person's beliefs. These difficulties persist when children are matched for verbal skills, and they have been taken as a key feature of autism. However, in a 2019 review, Gernsbacher and Yergeau argued that "the claim that autistic people lack a theory of mind is empirically questionable", as there have been numerous failed replications of classic ToM studies and the meta-analytical effect sizes of such replications were minimal to small.
Many individuals classified as autistic have severe difficulty assigning mental states to others, and some seem to lack theory of mind capabilities. Researchers who study the relationship between autism and theory of mind attempt to explain the connection in a variety of ways. One account assumes that theory of mind plays a role in the attribution of mental states to others and in childhood pretend play. According to Leslie, theory of mind is the capacity to mentally represent thoughts, beliefs, and desires, regardless of whether the circumstances involved are real. This might explain why some autistic individuals show extreme deficits in both theory of mind and pretend play. However, Hobson proposes a social-affective justification, in which deficits in theory of mind in autistic people result from a distortion in understanding and responding to emotions. He suggests that typically developing individuals, unlike autistic individuals, are born with a set of skills (such as social referencing ability) that later lets them comprehend and react to other people's feelings. Other scholars emphasize that autism involves a specific developmental delay, so that autistic children vary in their deficiencies, because they experience difficulty in different stages of growth. Very early setbacks can alter proper advancement of joint-attention behaviors, which may lead to a failure to form a full theory of mind.
It has been speculated that theory of mind exists on a continuum as opposed to the traditional view of a discrete presence or absence. While some research has suggested that some autistic populations are unable to attribute mental states to others, recent evidence points to the possibility of coping mechanisms that facilitate the attribution of mental states. A binary view regarding theory of mind contributes to the stigmatization of autistic adults who do possess perspective-taking capacity, as the assumption that autistic people do not have empathy can become a rationale for dehumanization.
Tine et al. report that autistic children score substantially lower on measures of social theory of mind (i.e., "reasoning about others' mental states", p. 1) in comparison to children diagnosed with Asperger syndrome.
Generally, children with more advanced theory of mind abilities display more advanced social skills, greater adaptability to new situations, and greater cooperation with others. As a result, these children are typically well-liked. However, "children may use their mind-reading abilities to manipulate, outwit, tease, or trick their peers." Individuals possessing inferior theory of mind skills, such as children with autism spectrum disorder, may be socially rejected by their peers since they are unable to communicate effectively. Social rejection has been proven to negatively impact a child's development and can put the child at greater risk of developing depressive symptoms.
Peer-mediated interventions (PMI) are a school-based treatment approach for children and adolescents with autism spectrum disorder in which peers are trained to be role models in order to promote social behavior. Laghi et al. studied whether analysis of prosocial (nice) and antisocial (nasty) theory-of-mind behaviors could be used, in addition to teacher recommendations, to select appropriate candidates for PMI programs. Selecting children with advanced theory-of-mind skills who use them in prosocial ways will theoretically make the program more effective. While the results indicated that analyzing the social uses of theory of mind of possible candidates for a PMI program may increase the program's efficacy, it may not be a good predictor of a candidate's performance as a role model.
A 2014 Cochrane review of interventions based on theory of mind found that theory of mind skills could be taught to individuals with autism, but found little evidence of skill maintenance, generalization to other settings, or developmental effects on related skills.
Some 21st-century studies suggest that the results of some theory of mind tests on autistic people may be misinterpreted in light of the double empathy problem, which proposes that, rather than autistic people specifically having trouble with theory of mind, autistic and non-autistic people have equal difficulty understanding one another because of their neurological differences. Studies have shown that autistic adults perform better in theory of mind tests when paired with other autistic adults, and possibly also with autistic close family members. Academics who acknowledge the double empathy problem also propose that autistic people likely understand non-autistic people to a greater degree than vice versa, owing to the necessity of functioning in a non-autistic society.
=== Psychopathy ===
Psychopathy is another condition of particular relevance to theory of mind. While psychopathic individuals show impaired emotional behavior, including a lack of emotional responsiveness to others and deficient empathy, as well as impaired social behavior, there is considerable controversy over whether theory of mind is impaired in psychopathy. Different studies provide contradictory findings on the correlation between theory of mind impairment and psychopathic traits.
There has been some speculation about similarities between autistic and psychopathic individuals in theory of mind performance. In a 2008 study, Happé's advanced test of theory of mind was administered to 25 incarcerated psychopaths and 25 incarcerated non-psychopaths. The psychopaths and non-psychopaths did not differ in task performance, and the psychopaths performed significantly better than the most highly able adult autistic population, suggesting that theory of mind performance in psychopathy and in autism is not similar.
There have been repeated suggestions that a deficient or biased grasp of others' mental states, or theory of mind, could contribute to antisocial behavior, aggression, and psychopathy. In the 'Reading the Mind in the Eyes' test, participants view photographs of the eye region of a face and must attribute a mental state, or emotion, to the individual. Magnetic resonance imaging studies have shown that this task produces increased activity in the dorsolateral prefrontal and left medial frontal cortices, the superior temporal gyrus, and the left amygdala. Although there is extensive literature suggesting amygdala dysfunction in psychopathy, psychopathic and non-psychopathic adults performed equally well on this test, which argues against a theory of mind impairment in psychopathic individuals.
A systematic review and meta-analysis that gathered data from 42 studies found that psychopathic traits are associated with impaired performance on theory of mind tasks. This relationship was not moderated by age, population, psychopathy measurement (self-report versus clinical checklist), or theory of mind task type (cognitive versus affective).
In 2009, a study was conducted to test whether impairment in the emotional aspects of theory of mind, rather than in general theory of mind abilities, might account for some of the impaired social behavior in psychopathy. The study involved criminal offenders diagnosed with antisocial personality disorder who had high psychopathy features, participants with localized lesions in the orbitofrontal cortex, participants with non-frontal lesions, and healthy control subjects. Subjects were tested with a task that distinguishes affective from cognitive theory of mind. Compared with the control group, both the individuals with psychopathy and those with orbitofrontal cortex lesions were impaired on affective theory of mind but not on cognitive theory of mind.
=== Schizophrenia ===
Individuals diagnosed with schizophrenia can show deficits in theory of mind. Mirjam Sprong and colleagues investigated the impairment by examining 29 different studies with a total of over 1,500 participants. This meta-analysis showed a significant and stable deficit of theory of mind in people with schizophrenia. They performed poorly on false-belief tasks, which test the ability to understand that others can hold false beliefs about events in the world, and also on intention-inference tasks, which assess the ability to infer a character's intention from reading a short story. Schizophrenia patients with negative symptoms, such as lack of emotion, motivation, or speech, have the most impairment in theory of mind and are unable to represent the mental states of themselves and of others. Paranoid schizophrenic patients also perform poorly because they have difficulty accurately interpreting others' intentions. The meta-analysis additionally showed that IQ, gender, and age of the participants do not significantly affect performance on theory of mind tasks.
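Meta-analyses such as Sprong and colleagues' pool standardized effect sizes across studies. The following is a minimal, generic sketch of random-effects pooling (DerSimonian–Laird); the per-study effect sizes and variances are invented for illustration and are not the values reported in that review.

```python
# Minimal random-effects (DerSimonian-Laird) pooling of study effect sizes.
# The effect sizes (Hedges' g) and variances below are hypothetical.
import numpy as np

g = np.array([1.10, 0.85, 1.40, 0.95, 1.20])   # per-study effect sizes
v = np.array([0.04, 0.06, 0.09, 0.05, 0.07])   # per-study sampling variances

w = 1.0 / v                                     # fixed-effect weights
g_fixed = np.sum(w * g) / np.sum(w)

# Between-study heterogeneity (tau^2) via the DerSimonian-Laird estimator.
q = np.sum(w * (g - g_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)

w_re = 1.0 / (v + tau2)                         # random-effects weights
g_re = np.sum(w_re * g) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled g = {g_re:.2f} "
      f"(95% CI {g_re - 1.96*se_re:.2f} to {g_re + 1.96*se_re:.2f})")
```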
Research suggests that impairment in theory of mind negatively affects clinical insight—the patient's awareness of their mental illness. Insight requires theory of mind; a patient must be able to adopt a third-person perspective and see the self as others do. A patient with good insight can accurately self-represent, by comparing himself with others and by viewing himself from the perspective of others. Insight allows a patient to recognize and react appropriately to his symptoms. A patient who lacks insight does not realize that he has a mental illness, because of his inability to accurately self-represent. Therapies that teach patients perspective-taking and self-reflection skills can improve abilities in reading social cues and taking the perspective of another person.
Research indicates that theory-of-mind deficit is a stable trait-characteristic rather than a state-characteristic of schizophrenia. The meta-analysis conducted by Sprong et al. showed that patients in remission still had impairment in theory of mind. This indicates that the deficit is not merely a consequence of the active phase of schizophrenia.
Schizophrenic patients' deficit in theory of mind impairs their interactions with others. Theory of mind is particularly important for parents, who must understand the thoughts and behaviors of their children and react accordingly. Dysfunctional parenting is associated with deficits in the first-order theory of mind, the ability to understand another person's thoughts, and in the second-order theory of mind, the ability to infer what one person thinks about another person's thoughts. Compared with healthy mothers, mothers with schizophrenia are found to be more remote, quiet, self-absorbed, insensitive, unresponsive, and to have fewer satisfying interactions with their children. They also tend to misinterpret their children's emotional cues, and often misunderstand neutral faces as negative. Activities such as role-playing and individual or group-based sessions are effective interventions that help the parents improve on perspective-taking and theory of mind. There is a strong association between theory of mind deficit and parental role dysfunction.
=== Alcohol use disorders ===
Impairments in theory of mind, as well as other social-cognitive deficits, are commonly found in people who have alcohol use disorders, due to the neurotoxic effects of alcohol on the brain, particularly the prefrontal cortex.
=== Depression and dysphoria ===
Individuals in a major depressive episode, a disorder characterized by social impairment, show deficits in theory of mind decoding. Theory of mind decoding is the ability to use information available in the immediate environment (e.g., facial expression, tone of voice, body posture) to accurately label the mental states of others. The opposite pattern, enhanced theory of mind, is observed in individuals vulnerable to depression, including those individuals with past major depressive disorder (MDD), dysphoric individuals, and individuals with a maternal history of MDD.
=== Developmental language disorder ===
Children diagnosed with developmental language disorder (DLD) exhibit much lower scores on reading and writing sections of standardized tests, yet have a normal nonverbal IQ. These language deficits can be any specific deficits in lexical semantics, syntax, or pragmatics, or a combination of multiple problems. Such children often exhibit poorer social skills than normally developing children, and seem to have problems decoding beliefs in others. A recent meta-analysis confirmed that children with DLD have substantially lower scores on theory of mind tasks compared to typically developing children. This strengthens the claim that language development is related to theory of mind.
== Brain mechanisms ==
=== In non-autistic people ===
Research on theory of mind in autism led to the view that mentalizing abilities are subserved by dedicated mechanisms that can—in some cases—be impaired while general cognitive function remains largely intact.
Neuroimaging research supports this view, demonstrating that specific brain regions are consistently engaged during theory of mind tasks. Positron emission tomography (PET) research on theory of mind, using verbal and pictorial story-comprehension tasks, identifies a set of brain regions including the medial prefrontal cortex (mPFC), the area around the posterior superior temporal sulcus (pSTS), and sometimes the precuneus and amygdala/temporopolar cortex. Research on the neural basis of theory of mind has since diversified, with separate lines of research focusing on the understanding of beliefs, intentions, and more complex properties of minds such as psychological traits.
Studies from Rebecca Saxe's lab at MIT, using a false-belief versus false-photograph task contrast aimed at isolating the mentalizing component of the false-belief task, have consistently found activation in the mPFC, precuneus, and temporoparietal junction (TPJ), right-lateralized. In particular, Saxe et al. proposed that the right TPJ (rTPJ) is selectively involved in representing the beliefs of others. Some debate exists, as the same rTPJ region is consistently activated during spatial reorienting of visual attention; Jean Decety from the University of Chicago and Jason Mitchell from Harvard thus propose that the rTPJ subserves a more general function involved in both false-belief understanding and attentional reorienting, rather than a mechanism specialized for social cognition. However, it is possible that the observation of overlapping regions for representing beliefs and attentional reorienting may simply be due to adjacent, but distinct, neuronal populations that code for each. The resolution of typical fMRI studies may not be good enough to show that distinct/adjacent neuronal populations code for each of these processes. In a study following Decety and Mitchell, Saxe and colleagues used higher-resolution fMRI and showed that the peak of activation for attentional reorienting is approximately 6–10 mm above the peak for representing beliefs. Further corroborating that differing populations of neurons may code for each process, they found no similarity in the patterning of fMRI response across space.
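The "patterning of fMRI response across space" comparison amounts to correlating voxel-wise response patterns for the two tasks within the overlapping region. A minimal sketch of such a pattern-similarity check follows; the voxel data are simulated, and real analyses add cross-validation and noise modelling that are omitted here.

```python
# Sketch of a spatial pattern-similarity comparison within a region of interest.
# Simulated data: each vector holds one response amplitude per voxel.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

belief_pattern = rng.normal(size=n_voxels)       # response to the false-belief task
attention_pattern = rng.normal(size=n_voxels)    # response to attentional reorienting

# Pearson correlation across voxels: values near zero are consistent with the
# two processes engaging distinct (if spatially adjacent) neuronal populations.
r = np.corrcoef(belief_pattern, attention_pattern)[0, 1]
print(f"spatial pattern correlation r = {r:.2f}")
```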
Using single-cell recordings in the human dorsomedial prefrontal cortex (dmPFC), researchers at MGH identified neurons that encode information about others' beliefs, which were distinct from self-beliefs, across different scenarios in a false-belief task. They further showed that these neurons could provide detailed information about others' beliefs, and could accurately predict these beliefs' verity. These findings suggest a prominent role of distinct neuronal populations in the dmPFC in theory of mind complemented by the TPJ and pSTS.
Functional imaging also illuminates the detection of mental state information in animations of moving geometric shapes similar to those used in Heider and Simmel (1944), which typical humans automatically perceive as social interactions laden with intention and emotion. Three studies found remarkably similar patterns of activation during the perception of such animations versus a random or deterministic motion control: mPFC, pSTS, fusiform face area (FFA), and amygdala were selectively engaged during the theory of mind condition. Another study presented subjects with an animation of two dots moving with a parameterized degree of intentionality (quantifying the extent to which the dots chased each other), and found that pSTS activation correlated with this parameter.
A separate body of research implicates the posterior superior temporal sulcus in the perception of intentionality in human action. This area is also involved in perceiving biological motion, including body, eye, mouth, and point-light display motion. One study found increased pSTS activation while watching a human lift his hand versus having his hand pushed up by a piston (intentional versus unintentional action). Several studies found increased pSTS activation when subjects perceive a human action that is incongruent with the action expected from the actor's context and inferred intention. Examples would be: a human performing a reach-to-grasp motion on empty space next to an object, versus grasping the object; a human shifting eye gaze toward empty space next to a checkerboard target versus shifting gaze toward the target; an unladen human turning on a light with his knee, versus turning on a light with his knee while carrying a pile of books; and a walking human pausing as he passes behind a bookshelf, versus walking at a constant speed. In these studies, actions in the "congruent" case have a straightforward goal, and are easy to explain in terms of the actor's intention. The incongruent actions, on the other hand, require further explanation (why would someone twist empty space next to a gear?), and apparently demand more processing in the STS. This region is distinct from the temporoparietal area activated during false belief tasks. pSTS activation in most of the above studies was largely right-lateralized, following the general trend in neuroimaging studies of social cognition and perception. Also right-lateralized are the TPJ activation during false belief tasks, the STS response to biological motion, and the FFA response to faces.
Neuropsychological evidence supports neuroimaging results regarding the neural basis of theory of mind. Studies with patients with a lesion of the frontal lobes and the temporoparietal junction of the brain (between the temporal lobe and parietal lobe) report that they have difficulty with some theory of mind tasks. This shows that theory of mind abilities are associated with specific parts of the human brain. However, the fact that the medial prefrontal cortex and temporoparietal junction are necessary for theory of mind tasks does not imply that these regions are specific to that function. TPJ and mPFC may subserve more general functions necessary for theory of mind.
Research by Vittorio Gallese, Luciano Fadiga, and Giacomo Rizzolatti shows that some sensorimotor neurons, referred to as mirror neurons and first discovered in the premotor cortex of rhesus monkeys, may be involved in action understanding. Single-electrode recording revealed that these neurons fired when a monkey performed an action, as well as when the monkey viewed another agent performing the same action. fMRI studies with human participants show brain regions (assumed to contain mirror neurons) that are active when one person sees another person's goal-directed action. These data led some authors to suggest that mirror neurons may provide the basis for theory of mind in the brain, and to support simulation theory of mind reading.
There is also evidence against a link between mirror neurons and theory of mind. First, macaque monkeys have mirror neurons but do not seem to have a 'human-like' capacity to understand theory of mind and belief. Second, fMRI studies of theory of mind typically report activation in the mPFC, temporal poles, and TPJ or STS, but those brain areas are not part of the mirror neuron system. Some investigators, like developmental psychologist Andrew Meltzoff and neuroscientist Jean Decety, believe that mirror neurons merely facilitate learning through imitation and may provide a precursor to the development of theory of mind. Others, like philosopher Shaun Gallagher, suggest that mirror-neuron activation, on a number of counts, fails to meet the definition of simulation as proposed by the simulation theory of mindreading.
=== In autism ===
Several neuroimaging studies have looked at the neural basis for theory of mind impairment in subjects with Asperger syndrome and high-functioning autism (HFA). The first PET study of theory of mind in autism (also the first neuroimaging study using a task-induced activation paradigm in autism) replicated a prior study in non-autistic individuals, which employed a story-comprehension task. This study found displaced and diminished mPFC activation in subjects with autism. However, because the study used only six subjects with autism, and because the spatial resolution of PET imaging is relatively poor, these results should be considered preliminary.
A subsequent fMRI study scanned normally developing adults and adults with HFA while performing a "reading the mind in the eyes" task: viewing a photo of a human's eyes and choosing which of two adjectives better describes the person's mental state, versus a gender discrimination control. The authors found activity in orbitofrontal cortex, STS, and amygdala in normal subjects, and found less amygdala activation and abnormal STS activation in subjects with autism.
A more recent PET study looked at brain activity in individuals with HFA and Asperger syndrome while viewing Heider-Simmel animations (see above) versus a random motion control. In contrast to normally developing subjects, those with autism showed little STS or FFA activation, and less mPFC and amygdala activation. Activity in extrastriate regions V3 and LO was identical across the two groups, suggesting intact lower-level visual processing in the subjects with autism. The study also reported less functional connectivity between STS and V3 in the autism group. However, decreased temporal correlation between activity in STS and V3 would be expected simply from the lack of an evoked response in STS to intent-laden animations in subjects with autism. A more informative analysis would be to compute functional connectivity after regressing out evoked responses from all time series.
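The analysis suggested in the last sentence, computing connectivity only after removing stimulus-evoked responses, can be sketched as follows. The time series and task regressor are simulated, and actual fMRI connectivity analyses additionally handle hemodynamic modelling, motion confounds, and filtering.

```python
# Sketch: functional connectivity between two regions after regressing out
# the task-evoked response from each time series. All data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 300
task = (np.arange(n_timepoints) % 30 < 15).astype(float)   # hypothetical block-design regressor

sts = 1.5 * task + rng.normal(size=n_timepoints)            # simulated STS time series
v3 = 1.2 * task + rng.normal(size=n_timepoints)             # simulated V3 time series

def residualize(y, regressor):
    """Remove the best-fitting (intercept + regressor) component from y."""
    X = np.column_stack([np.ones_like(regressor), regressor])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

sts_res, v3_res = residualize(sts, task), residualize(v3, task)

raw_r = np.corrcoef(sts, v3)[0, 1]            # inflated by the shared evoked response
resid_r = np.corrcoef(sts_res, v3_res)[0, 1]  # connectivity beyond the evoked response
print(f"raw r = {raw_r:.2f}, residual r = {resid_r:.2f}")
```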
A subsequent study, using the incongruent/congruent gaze-shift paradigm described above, found that in high-functioning adults with autism, posterior STS (pSTS) activation was undifferentiated while they watched a human shift gaze toward a target and then toward adjacent empty space. The lack of additional STS processing in the incongruent state may suggest that these subjects fail to form an expectation of what the actor should do given contextual information, or that feedback about the violation of this expectation does not reach STS. Both explanations involve an impairment or deficit in the ability to link eye gaze shifts with intentional explanations. This study also found a significant anticorrelation between STS activation in the incongruent-congruent contrast and social subscale score on the Autism Diagnostic Interview-Revised, but not scores on the other subscales.
An fMRI study demonstrated that the right temporoparietal junction (rTPJ) of higher-functioning adults with autism was not more selectively activated for mentalizing judgments when compared to physical judgments about self and other. rTPJ selectivity for mentalizing was also related to individual variation on clinical measures of social impairment: individuals whose rTPJ was increasingly more active for mentalizing compared to physical judgments were less socially impaired, while those who showed little to no difference in response to mentalizing or physical judgments were the most socially impaired. This evidence builds on work in typical development that suggests rTPJ is critical for representing mental state information, whether it is about oneself or others. It also points to an explanation at the neural level for the pervasive mind-blindness difficulties in autism that are evident throughout the lifespan.
=== In schizophrenia ===
The brain regions associated with theory of mind include the superior temporal sulcus (STS), the temporoparietal junction (TPJ), the medial prefrontal cortex (mPFC), the precuneus, and the amygdala. Reduced activity in the mPFC of individuals with schizophrenia is associated with theory of mind deficits and may explain impairments in social function among people with schizophrenia. Increased neural activity in the mPFC is related to better perspective-taking, emotion management, and increased social functioning. Disrupted brain activity in areas related to theory of mind may increase social stress or disinterest in social interaction, and contribute to the social dysfunction associated with schizophrenia.
== Practical validity ==
Group members' average theory of mind abilities, measured with the Reading the Mind in the Eyes test (RME), may drive successful group performance. High group average scores on the RME are correlated with the collective intelligence factor c, defined as a group's ability to perform a wide range of mental tasks, a group intelligence measure similar to the g factor for general individual intelligence. The RME is a theory of mind test for adults that shows sufficient test-retest reliability and consistently differentiates control groups from individuals with functional autism or Asperger syndrome. It is one of the most widely accepted and well-validated tests of theory of mind abilities in adults.
== Evolution ==
The evolutionary origin of theory of mind remains obscure. While many theories make claims about its role in the development of human language and social cognition, few of them specify in detail any evolutionary neurophysiological precursors. One theory claims that theory of mind has its roots in two defensive reactions—immobilization stress and tonic immobility—which are implicated in the handling of stressful encounters and also figure prominently in mammalian childrearing practice. Their combined effect seems capable of producing many of the hallmarks of theory of mind, such as eye-contact, gaze-following, inhibitory control, and intentional attributions.
== Non-human ==
An open question is whether non-human animals have a genetic endowment and social environment that allow them to acquire a theory of mind in the same way that human children do. This is a contentious issue because of the difficulty of inferring from animal behavior the existence of thinking, of particular thoughts, or of a concept of self or self-awareness, consciousness, and qualia. One difficulty with non-human studies of theory of mind is the lack of sufficient numbers of naturalistic observations that would give insight into the evolutionary pressures on a species' development of theory of mind.
Non-human research still has a major place in this field. It is especially useful in illuminating which nonverbal behaviors signify components of theory of mind, and in pointing to possible stepping points in the evolution of that aspect of social cognition. While it is difficult to study human-like theory of mind and mental states in species of whose potential mental states we have an incomplete understanding, researchers can focus on simpler components of more complex capabilities. For example, many researchers focus on animals' understanding of intention, gaze, perspective, or knowledge (of what another being has seen). A study that looked at understanding of intention in orangutans, chimpanzees, and children showed that all three species understood the difference between accidental and intentional acts.
Individuals exhibit theory of mind by extrapolating another's internal mental states from their observable behavior. So one challenge in this line of research is to distinguish this from more run-of-the-mill stimulus-response learning, with the other's observable behavior being the stimulus.
Recently, most non-human theory of mind research has focused on monkeys and great apes, which are of most interest in the study of the evolution of human social cognition. Other studies relevant to theory of mind attributions have been conducted using plovers and dogs, and show preliminary evidence of understanding attention (one precursor of theory of mind) in others.
There has been some controversy over the interpretation of evidence purporting to show theory of mind ability—or inability—in animals. For example, Povinelli et al. presented chimpanzees with the choice of two experimenters from whom to request food: one who had seen where food was hidden, and one who, by virtue of one of a variety of mechanisms (having a bucket or bag over his head, a blindfold over his eyes, or being turned away from the baiting) does not know, and can only guess. They found that the animals failed in most cases to differentially request food from the "knower". By contrast, Hare, Call, and Tomasello found that subordinate chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine which container of hidden food they approached. William Field and Sue Savage-Rumbaugh believe that bonobos have developed theory of mind, and cite their communications with a captive bonobo, Kanzi, as evidence.
In one experiment, ravens (Corvus corax) took into account visual access of unseen conspecifics. The researchers argued that "ravens can generalize from their own perceptual experience to infer the possibility of being seen".
Evolutionary anthropologist Christopher Krupenye studied the existence of theory of mind, and particularly false beliefs, in non-human primates.
Keren Haroush and Ziv Williams outlined the case for a group of neurons in primates' brains that uniquely predicted the choice selection of their interacting partner. These primates' neurons, located in the anterior cingulate cortex of rhesus monkeys, were observed using single-unit recording while the monkeys played a variant of the iterative prisoner's dilemma game. By identifying cells that represent the yet unknown intentions of a game partner, Haroush & Williams' study supports the idea that theory of mind may be a fundamental and generalized process, and suggests that anterior cingulate cortex neurons may act to complement the function of mirror neurons during social interchange.
== See also ==
== References ==
== Further reading ==
Excerpts taken from: Davis, E. (2007) "Mental Verbs in Nicaraguan Sign Language and the Role of Language in Theory of Mind". Undergraduate senior thesis, Barnard College, Columbia University.
== External links ==
Eye Test Simon Baron Cohen Archived 21 October 2020 at the Wayback Machine
The Computational Theory of Mind
The Identity Theory of Mind
Sally-Anne and Smarties tests
Functional Contextualism
Theory of Mind article in the Internet Encyclopedia of Philosophy
Research into Theory of mind | Wikipedia/Theory_of_mind |
The empathising–systemising (E–S) theory is a theory on the psychological basis of autism and male–female neurological differences originally put forward by clinical psychologist Simon Baron-Cohen. It classifies individuals based on abilities in empathic thinking (E) and systematic thinking (S). It attempts to explain the social and communication symptoms in autism spectrum disorders as deficits and delays in empathy combined with intact or superior systemising.
According to Baron-Cohen, the E–S theory has been tested using the Empathy Quotient (EQ) and Systemising Quotient (SQ), developed by him and colleagues, and generates five different 'brain types' depending on the presence or absence of discrepancies between their scores on E or S. E–S profiles show that the profile E>S is more common in females than in males, and the profile S>E is more common in males than in females. Baron-Cohen and associates assert that E–S theory is a better predictor than gender of who chooses STEM subjects.
The E–S theory has been extended into the extreme male brain (EMB) theory of autism and Asperger syndrome, which are associated in the E–S theory with below-average empathy and average or above-average systemising.
Baron-Cohen's studies and theory have been questioned on multiple grounds. For instance, a 1998 study on autism found that overrepresentation of engineers could depend on a socioeconomic status rather than E–S differences.
== History ==
E–S theory was developed by psychologist Simon Baron-Cohen in 2002, as a reconceptualization of cognitive sex differences in the general population. This was done in an effort to understand why the cognitive difficulties in autism appeared to lie in domains in which he says on average females outperformed males, along with why cognitive strengths in autism appeared to lie in domains in which on average males outperformed females. In the first chapter of his 2003 book The Essential Difference, he discusses the bestseller Men Are from Mars, Women Are from Venus, written by John Gray in 1992, and states: "the view that men are from Mars and women Venus paints the differences between the two sexes as too extreme. The two sexes are different, but are not so different that we cannot understand each other." The Essential Difference had a second edition published in 2009.
The 2003 edition of The Essential Difference discusses two different sources of inspiration for Baron-Cohen's E-S theory. The first inspiration is epistemological with a number of influences including historicism and the German separation between erklären and verstehen, which Wilhelm Windelband described as nomothetic and idiographic methods. This was part of the positivism dispute in Germany from 1961 to 1969 where the human sciences and natural sciences (Geisteswissenschaften and Naturwissenschaften) disagreed on how to conduct social science. The second source of inspiration was interpreting gender essentialism from Charles Darwin's seminal book The Descent of Man, and Selection in Relation to Sex. According to The Guardian regarding the publication's 2003 edition:
The book [The Essential Difference] has been five years in the writing, partly because he deemed its subject too politically sensitive for the 1990s, and partly because he first wanted to float his ideas about autism [E-S theory] at scientific conferences, where he says reaction has been largely supportive.
Prior to the development of E–S theory, Baron-Cohen had proposed and studied the mind-blindness theory in 1990, which offered a homogenous (single-cause) explanation of autism as due either to a lack of theory of mind or to a developmental delay in theory of mind during childhood. Theory of mind is the ability to attribute mental states to oneself or others. The mind-blindness theory could explain social and communication difficulties, but could not explain other key traits of autism, including unusually narrow interests and highly repetitive behaviors. Mind-blindness was later largely rejected by academia in response to strong evidence for the heterogeneity of autism, although it retained some academic proponents, including Baron-Cohen, as of March 2011.
== Research ==
According to Baron-Cohen, females on average score higher on measures of empathy and males on average score higher on measures of systemising. This has been found using the child and adolescent versions of the Empathy Quotient (EQ) and the Systemising Quotient (SQ), which are completed by parents about their child/adolescent, and on the self-report version of the EQ and SQ in adults.
Baron-Cohen and associates say that similar sex differences on average have been found using performance tests of empathy, such as facial emotion recognition tasks, and performance tests of systemising, such as measures of mechanical reasoning or 'intuitive physics'. He has also argued that these sex differences are not due only to socialization. In a 2018 article published in the Proceedings of the National Academy of Sciences (PNAS), Baron-Cohen's team reported that the theory was robust in a sample of half a million individuals.
== Fetal testosterone ==
While experience and socialization contribute to the observed sex differences in empathy and systemising, Baron-Cohen and colleagues suggest that biology also plays a role. A candidate biological factor influencing E and S is fetal testosterone (FT). FT levels are positively correlated with scores on the Systemising Quotient and negatively correlated with scores on the Empathy Quotient. A field of research has emerged to investigate the role of testosterone levels in autism. Correlational research has found that elevated testosterone levels are associated with higher rates of autistic traits, lower rates of eye contact, and higher rates of certain other medical conditions. Furthermore, experimental studies have shown that altering testosterone levels influences maze performance in rats, which has implications for human studies. Fetal testosterone theories posit that the level of testosterone in the womb influences the development of sexually dimorphic brain structures, resulting in sex differences and autistic traits in individuals.
Baron-Cohen and colleagues performed a study in 2014 using 19,677 samples of amniotic fluid to show that people who would later develop autism had elevated fetal steroidogenic levels, including testosterone.
== Evolutionary explanations for sex differences ==
Baron-Cohen presents several possible evolutionary psychology explanations for this sex difference. For example, he says that better empathising may improve care of children, and that better empathy may also improve women's social network which may help in various ways with the caring of children. On the other hand, he says that systemising may help males become good hunters and increase their social status by improving spatial navigation and the making and use of tools.
== Extreme male brain theory of autism ==
Baron-Cohen's work in systemising-empathising led him to investigate whether higher levels of fetal testosterone explain the increased prevalence of autism spectrum disorders among males in his theory known as the "extreme male brain" theory of autism. A review of his book The Essential Difference published in Nature in 2003 summarises his proposals as: "the male brain is programmed to systemize and the female brain to empathize ... Asperger's syndrome represents the extreme male brain".
Baron-Cohen and colleagues extended the E–S theory into the extreme male brain theory of autism, which hypothesises that autism shows an extreme of the typical male profile. This theory divides people into five groups:
Type E, whose empathy is at a significantly higher level than their systemising (E > S).
Type S, whose systemising is at a significantly higher level than their empathy (S > E).
Type B (for balanced), whose empathy is at the same level as their systemising (E = S).
Extreme Type E, whose empathy is above average but whose systemising is below average (E ≫ S).
Extreme Type S, whose systemising is above average but whose empathy is below average (S ≫ E).
Baron-Cohen says that tests of the E–S model show that twice as many females as males are Type E, and twice as many males as females are Type S. 65% of people with autism spectrum conditions are Extreme Type S. The concept of the Extreme Type E brain has also been proposed, but little research has been conducted on this brain profile.
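The assignment of individuals to these types is based on the discrepancy between standardized E and S scores. The sketch below only illustrates the classification logic; the cut-off of one standard unit is an arbitrary placeholder, whereas published work derives thresholds from percentiles of the standardized difference score.

```python
# Illustrative assignment of E-S "brain types" from standardized EQ and SQ scores.
# The cut-off of 1.0 standard unit is a hypothetical placeholder, not the
# percentile-based threshold used in published studies.

def brain_type(e_z: float, s_z: float, cutoff: float = 1.0) -> str:
    """Classify from standardized empathising (e_z) and systemising (s_z) scores."""
    d = e_z - s_z                                  # discrepancy score
    if d > cutoff and e_z > 0 > s_z:
        return "Extreme Type E"                    # E above average, S below average
    if d < -cutoff and s_z > 0 > e_z:
        return "Extreme Type S"                    # S above average, E below average
    if d > cutoff:
        return "Type E"
    if d < -cutoff:
        return "Type S"
    return "Type B"

print(brain_type(e_z=-0.8, s_z=1.5))   # -> "Extreme Type S" under these placeholder rules
```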
Apart from the research using the EQ and SQ, several other similar tests have also found differences between females and males, and have found that people with autism or Asperger syndrome on average score in the same direction as, but more extremely than, the average male. For example, the brain differences model provides a broad overview of sex differences represented in individuals with autism, including brain structures and hormone levels.
Some, but not all, studies have found that brain regions that differ in average size between males and females also differ similarly between people who have autism and those who do not have autism.
Baron-Cohen's research on relatives of people with Asperger syndrome and autism found that their fathers and grandfathers were twice as likely to be engineers as members of the general population. A follow-up study by David A. Routh and Christopher Jarrold found that disproportionate numbers of doctors, scientists, and accountants were fathers of autistic children, while "skilled and unskilled manual workers are less common as fathers than would be predicted". They hypothesised that this observed overrepresentation of science and accounting among fathers of autistic children could be due to a sampling bias. Another similar finding by Baron-Cohen in California has been referred to as the Silicon Valley phenomenon: a large portion of the population there works in technical fields, and he says autism prevalence rates are ten times higher than the US average. These data suggest that genetics and the environment both play a role in autism prevalence, and that children with technically minded parents are therefore more likely to be diagnosed with autism.
Another proposal inverts the perspective of the extreme male brain. Social theorists have investigated the idea that females have protective factors against autism in the form of a more developed language repertoire and stronger empathy skills. Female children speak earlier and use language more than their male counterparts, and deficits in these skills correspond to many symptoms of autism, offering another explanation for the discrepancy in prevalence.
=== Development of brain structures ===
The fetal testosterone theory hypothesises that higher levels of testosterone in the amniotic fluid of mothers push brain development towards improved ability to see patterns and analyse complex systems while diminishing communication and empathy, emphasising "male" traits over "female", or in E–S theory terminology, emphasising "systemising" over "empathising". This theory states that fetal testosterone influences the development of certain structures in the brain, and that these changes relate to behavioral traits seen in those with autism. Males generally have higher levels of fetal testosterone contributing to their brain developing in that particular way.
The extreme male brain theory (EMB), put forward by Baron-Cohen, suggests that autistic brains show an exaggeration of the features associated with male brains. These features are mainly size and connectivity: males generally have a larger brain with more white matter, leading to increased connectivity within each hemisphere, and this pattern is seen in an exaggerated form in the brains of those with ASD. The corpus callosum is reduced in size in people with ASD, and individuals with ASD have been found to have widespread interconnectivity abnormalities in specific brain regions. This could explain the different results on empathy tests between men and women, as well as the empathy deficits seen in ASD, since empathy requires the activation of several brain regions drawing on information from many different areas of the brain. A further example of how brain structure can influence ASD comes from cases where the corpus callosum does not fully develop (agenesis of the corpus callosum): autism is commonly diagnosed in such children (45% of children with agenesis of the corpus callosum). In addition, children with ASD tend to have a larger amygdala, another exaggeration of the male brain, which generally has a larger amygdala.
These brain differences have all been shown to influence social cognition and communication. High levels of fetal testosterone have also been shown to be related to behaviors associated with autism, such as reduced eye contact. Studies examining the relationship between prenatal testosterone levels and autistic traits found that high levels correlated with traits such as decreased eye contact, in both sexes. This suggests that fetal testosterone (fT) is a cause of sex differences in the brain and that there is a link between fT levels and ASD. In general, females with autism have a higher rate of medical conditions related to high androgen levels, and both males and females with autism have higher-than-average androgen levels. Males naturally have higher fT levels, meaning that a smaller change in hormone levels is required to reach a point high enough to cause the developmental changes seen in autism. This is a possible cause of the male prevalence seen in autism.
== Cognitive versus affective empathy ==
Empathy can be subdivided into two major components:
cognitive empathy (also termed 'mentalising'), the ability to understand another's mental state;
affective or emotional empathy, the ability to emotionally respond to another's mental states. Affective empathy can be subdivided into personal distress (self-centered feelings of discomfort and anxiety in response to another's suffering) and empathic concern (sympathy towards others that are suffering).
Studies found that individuals with autism spectrum disorder (ASD) self-report lower levels of empathic concern, show less or absent comforting responses toward someone who is suffering, and report equal or higher levels of personal distress compared to controls. The combination of reduced empathic concern and increased personal distress may lead to the overall reduction of empathy in ASD.
Studies also suggest that individuals with ASD may have impaired theory of mind, involving the ability to understand the perspectives of others. The terms cognitive empathy and theory of mind are often used synonymously, but due to a lack of studies comparing theory of mind with types of empathy, it is unclear whether these are equivalent. Notably, many reports on the empathic deficits of individuals with Asperger syndrome are actually based on impairments in theory of mind.
Baron-Cohen argued that psychopathy is associated with intact cognitive empathy but reduced affective empathy while ASD is associated with both reduced cognitive and affective empathy.
== Criticism ==
The empathising–systemising theory has also been criticised from various points of view.
A 2004 review of Baron-Cohen's book The Essential Difference by philosopher Neil Levy in Phenomenology and the Cognitive Sciences characterised it as "very disappointing" with a "superficial notion of intelligence", concluding that Baron-Cohen's major claims about mind-blindness and systemising–empathising are "at best, dubious".
In a 2011 article in Time magazine, journalist and author Judith Warner wrote that Baron-Cohen "most dramatically wandered into fraught territory in 2003, when he published the book The Essential Difference, which called autism a manifestation of an extreme 'male brain'—one that's 'predominantly hard-wired for understanding and building systems,' as opposed to a 'female brain,' one that's 'predominantly hard-wired for empathy'—and ended up on the wrong side of the debate on science and sex differences."
In a 2003 book review published in the journal Nature, human biologist Joyce Benenson, while showing vivid interest in Baron-Cohen's findings on systemising, cast doubt on the claim that males are relatively weaker at empathising: "The idea that males are more interested in systemizing than females merits serious consideration ... It is unquestionably a novel and fascinating idea that seems likely to generate a rich empirical body of literature as its properties are tested. The second part of the theory—that females are more empathic than males—is more problematic ... Other measures, however, show that males are highly socially skilled." Others have criticised the original EQ and SQ, which form most of the research basis behind the notions of empathising and systemising: both measure more than one factor, and sex differences exist on only some of the factors. In a 2003 Wall Street Journal article, Robert McGough wrote about responses to the theory by neurologist and pediatrician Isabelle Rapin and psychologist Helen Tager-Flusberg: Isabelle Rapin ... finds Dr. Baron-Cohen's theory "provocative" but adds that "it does not account for some of the many neurological features of the disorder, like the motor symptoms [such as repetitive movements and clumsiness], the sleep problems or the seizures." Others worry that the term "extreme male brain" could be misinterpreted. Males are commonly associated with "qualities such as aggression", says Helen Tager-Flusberg ... "What's dangerous is that's the inference people will make: Oh, these are extreme males."
Some research in systemising and empathising in early life indicates that boys and girls develop in similar ways, casting doubt on the theory of sex differences in these areas. A cognitive style that more naturally opposes empathising, which has been given the name Machiavellianism, emphasises self-interest and has been shown to be strongly correlated with competitiveness. Evolutionary theory predicts that typical males will be more competitive than typical females. In contrast, research has generally shown a weak negative correlation between empathising and systemising. (It is worth noting that weak correlation between empathising and systemising would support treating them as independent variables, i.e., as distinct dimensions of personality, each of which may or may not correlate with an individual's biological sex or preferred gender.)
The 'extreme male brain' theory has also been criticised, with critics saying that the tests behind this theory are based on gender stereotypes, and not on hard science. Psychologist and leading autism researcher Catherine Lord says the theory is based on "gross misinterpretations" of developmental data. Psychiatrist David Skuse has claimed that communication differences between genders are likely to be small. Psychiatrist Meng-Chuan Lai says the study results have not been replicated.
Lizzie Buchen, a science journalist for Nature's news feature section, wrote in 2011 that because Baron-Cohen's work has focused on higher-functioning individuals with autism spectrum disorders, it requires independent replication with broader samples. Mirroring Helen Tager-Flusberg's 2003 warnings, Buchen added that it could lead to hurtful, discriminatory views of autistic children: "Some critics are also rankled by Baron-Cohen's history of headline-grabbing theories—particularly one that autism is an 'extreme male' brain state. They worry that his theory about technically minded parents may be giving the public wrong ideas, including the impression that autism is linked to being a 'geek'." In a 2003 article in The Spectator, philosopher Hugh Lawson-Tancred wrote: "The emphasis on the ultra-maleness approach is no doubt attributable to the fact that Baron-Cohen works mainly with higher functioning autism and Asperger's syndrome."
As a basis for his theory, Baron-Cohen cited a study done on newborn infants in which baby boys looked longer at an object and baby girls looked longer at a person. However, Elizabeth Spelke's 2005 review of studies done with very young children found no consistent differences between boys and girls. Subsequent research showed that there could indeed be a sex difference between males and females, but that males actually looked more at human faces than females on average. A European Union Horizon 2020 backed research program in brain and autism research pointed at genetic factors, confirming individual differences in object or human proclivities in babies but did not confirm the sex difference.
In her 2010 book Delusions of Gender, Cordelia Fine pointed to Baron-Cohen's views as an example of "neurosexism". She also criticised some of the experimental work that Baron-Cohen cited in support of his views as being methodologically flawed.
In her 2017 book Inferior: How Science Got Women Wrong and the New Research That's Rewriting the Story, science journalist Angela Saini criticised Baron-Cohen's research, arguing that he had overstated the significance of his findings, that the study on babies on which he based much of his research has not been successfully replicated, and that his studies of fetal testosterone levels have not provided evidence for his theories.
Neuroscientist Gina Rippon criticised Baron-Cohen's theories in her 2019 book The Gendered Brain: The new neuroscience that shatters the myth of the female brain. Speaking in 2020, she called his book The Essential Difference "neurotrash" and characterised his research methods as "weak". Rippon has also argued against using "male" and "female" to describe different types of brains, since those types do not correspond to genders. Reviewing her work for Nature, neuroscientist Lise Eliot supported Rippon's point of view, writing "The hunt for male and female distinctions inside the skull is a lesson in bad research practice".
== See also ==
Neuroscience of sex differences
The NeuroGenderings Network
== References ==
== External links ==
20-Question Online EQ/SQ test (University of Cambridge)
Online version of the EQ and SQ tests
Baron-Cohen, Simon — The Male Condition The New York Times, 8 August 2005
Baron-Cohen, Simon — They just can't help it The Guardian, 17 April 2003
Kunzig, Robert — Autism: What's Sex Got to Do with It? Psychology Today, 1 January 2004
In evolutionary psychology and behavioral ecology, human mating strategies are a set of behaviors used by individuals to select, attract, and retain mates. Mating strategies overlap with reproductive strategies, which encompass a broader set of behaviors involving the timing of reproduction and the trade-off between quantity and quality of offspring.
Relative to those of other animals, human mating strategies are unique in their relationship with cultural variables such as the institution of marriage. Humans may seek out individuals with the intention of forming a long-term intimate relationship, marriage, casual relationship, or friendship. The human desire for companionship is one of the strongest human drives. It is an innate feature of human nature and may be related to the sex drive. The human mating process encompasses the social and cultural processes whereby one person may meet another to assess suitability, the courtship process and the process of forming an interpersonal relationship. Commonalities, however, can be found between humans and nonhuman animals in mating behavior, as in the case of animal sexual behavior in general and assortative mating in particular.
== Theoretical background ==
=== Parental investment ===
Research on human mating strategies is guided by the theory of sexual selection, and in particular, Robert Trivers' concept of parental investment. Trivers defined parental investment as "any investment by the parent in an individual offspring that increases the offspring's chance of surviving (and hence reproductive success) at the cost of the parent's ability to invest in other offspring." The support given to each offspring typically differs between the father and mother. Trivers posited that it is the differential parental investment between males and females that drives the process of sexual selection. In turn, sexual selection leads to the evolution of sexual dimorphism in mate choice, competitive ability, and courtship displays (see secondary sex characteristics).
Minimum parental investment is the least required care for successful reproduction. In humans, females have a higher minimum parental investment. They have to invest in internal fertilization, placentation, and gestation, followed by childbirth and lactation. While human males can invest heavily in their offspring as well, their minimum parental investment is still lower than that of females.
This same concept can be looked at from an economic perspective regarding the costs of engaging in sexual relations. Females incur the higher costs, as they carry the possibility of becoming pregnant among other costs. Conversely, males have comparatively minimal costs of having a sexual encounter. Therefore, evolutionary psychologists have predicted a number of sex differences in human mating psychologies.
Women tend to appreciate men who are chivalrous even if they might be patriarchal towards them. They are likely to be more dependent on such men, to limit their own ambitions, and to submit to them. Because such men are more likely to invest in these women and their children, it makes evolutionary sense for women to be drawn towards them.
=== Life history strategies ===
Life history theory helps to explain differences in the timing of sexual relationships, the number of sexual partners, and parental investment. According to this theory, organisms have a limited supply of energy, which they must allocate between developing their bodies and reproducing. How organisms prioritize this energy use can be placed on a theoretical spectrum. At one end of the spectrum, the organism prioritizes speeding up physical development and reaching sexual maturity quickly, which is deemed a "fast" strategy; organisms implementing a "fast" strategy seek to have sexual relationships earlier, to have multiple mates, and to invest less energy in each offspring. At the other end of the spectrum is the "slow" strategy, in which organisms prioritize prolonged physical development; "slow" strategy organisms seek to have sexual relationships later, to have fewer mates, and to invest more heavily in their offspring.
Generally, fast strategies are developed in populations that are r-selected (r being the maximal intrinsic rate of natural increase), and slow strategies are developed in populations that are K-selected (K being the carrying capacity, or the number of individuals in a population that the environment can support). Species that are r-selected tend to reproduce faster, to be specialists, and to be smaller. K-selected organisms reproduce less over the course of their lifetimes, but individuals live longer; they are more likely to be larger and to be generalists. Species exist along an r-K continuum, rather than being one or the other. Humans are considered a K-selected species, meaning that on the whole, they pursue "slow" strategies relative to other species.
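For illustration (this equation is a standard result from population ecology rather than a claim of the sources discussed here), the symbols r and K take their names from the logistic growth model:

$$\frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right)$$

where N is the population size, r is the maximal intrinsic rate of natural increase (the per-capita growth rate when the population is far below capacity), and K is the carrying capacity (the population size at which growth falls to zero). Selection that favors raising r corresponds to r-selection, while selection that favors competitive ability when the population is near K corresponds to K-selection.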
Life history characteristics include age at sexual maturity, gestation period, birth weight, litter size, postnatal growth rates, breastfeeding duration, birth spacing, length of juvenile dependence, level of parental investment, adult body size, and longevity. According to life-history theory, variation in these traits between individuals arises from trade-offs in how limited energy is allocated among maintenance (homeostasis), growth, and reproduction. For example, if more of a species' resources go towards reproduction than physical growth, then the age at which it reaches sexual maturity will be earlier than in a species that devotes more energy to physical growth.
These strategies are unconscious and help increase the organism's reproductive success in a given environment. Early childhood environments may play a part in which strategy a person unconsciously pursues. In a hostile environment, risk and unpredictability are increased, and survival therefore becomes a higher priority. A "fast" strategy is more likely to be pursued by populations living in hostile environments in order to reach maturity and reproduce quickly. In less risky environments, populations are more likely to pursue a "slow" strategy, developing physically first and reproducing later. This concept has been applied to humans as well, though there are differences in how life history strategies apply both between and within species.
=== Challenges with applying life history strategies to humans ===
The binary between "fast" and "slow" mating strategies as applied to humans can be misleading. Those who pursue "fast" strategies may face criticism in the form of cross-cultural contempt or ethical and/or religious critique. For example, in societies which portray women as more likely to pursue slow strategies, female sexual behavior may be taboo.
One theory, "psychosocial acceleration theory", refers to the predictions about human development of "fast" or "slow" strategies given individuals' experience of their environment while young. It predicts that people born into harsher environments (in which they have less control over the threats around them) are more likely to reach sexual maturity faster and to reproduce earlier, due to phenotypic plasticity (external cues prompting change in physiology and behavior). Evolutionary psychologists use three metrics to describe environments that predict which life history strategy people will choose: resource availability, harshness, and unpredictability. Harshness and unpredictability come into play when resource availability is satisfied, because without resources, individuals have few opportunities to mature and reproduce. For example, in humans, low resource availability could refer to food insecurity, and unpredictability could refer to frequently moving houses or switching schools. Smoking, poor health status, and low personal care are all traits that have been shown to be correlated with earlier sexual experiences, earlier births, and more short-term sex partners. Although psychologists describe these traits as a "cascade", in which a set of childhood experiences and traits affect later-in-life sexual behavior in specific, grouped ways, studies show that sexual consequences can vary across cultures and class and might not be as linearly related to childhood experiences as has been assumed.
Human life history theories in psychology focus on behavioral choices like mate choice and parenting effort (see Evolutionary Anthropology), while in evolutionary ecology, they focus on allocation of energy to maximize success and reproduction.
Several studies undermine the psychological application of life history theories in humans. For example, it has been found that extrinsic mortality (the harshness of an individual's environment) does not directly affect whether people adopt a fast or slow strategy. The reason that extrinsic mortality appears to do so is that it increases competition within populations: it is more accurate to say that harsh environments create situations of high competition, in which people are more likely to adopt fast strategies to maximize their chances of reproduction, than it is to say that individuals in harsh environments adopt fast strategies because otherwise they would die before reproducing.
Another study questioning life history theory in humans was a meta-analysis of pace-of-life studies. The pace-of-life syndrome hypothesis relates environmental factors (unpredictable environments, high predation, etc.) to behavior (earlier mating, more sexual partners, etc.), thus creating a link between behavior, phenotype, and the environment. The analysis, however, suggested that pace-of-life studies had few significant findings regarding differences between individuals due to environment. This means that the link between individuals experiencing difficult environments growing up and their later sexual behavior may be tenuous, or else too muddied with confounding variables to track.
The behavioral sciences might not, in general, be a good framework with which to consider life-history theory. Biological life-history theory is based on trade-offs between energy expenditure and the benefits of reproduction, and these trade-offs are difficult to measure in humans for several reasons: the inability to ascertain trade-offs among phenotypically different individuals, poor models of the trade-offs involved, and a reliance on allo-parental investment. It has been proposed that life history theory in humans could be made more useful by considering the principle of time preferences shared between evolutionary biology and psychology, recognizing that individuals will see their assets as more valuable in the present than in the future. Individuals who place a higher "discount rate" on their reproductive abilities, or see them as much more valuable now than later, are more likely to mate earlier and pursue fast strategies.
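As a purely illustrative sketch of this time-preference idea (the exponential form is an expository assumption, not a formula taken from the works cited), a reproductive payoff expected t periods in the future can be discounted to a present value:

$$V_{\text{present}} = \frac{V_{\text{future}}}{(1 + d)^{t}}$$

where d is the individual's discount rate. A higher d devalues future payoffs more steeply, which corresponds to treating reproductive opportunities as far more valuable now than later and hence to "faster" mating strategies.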
== Sex similarities ==
=== Assortative mating ===
Human mating is inherently non-random. Despite the common trope "opposites attract", humans generally prefer mates who share the same or similar traits, such as genetics, quantitative phenotypes like height or body-mass index, skin pigmentation, level of physical attractiveness, disease risk (including cancers and mental disorders), race or ethnicity, facial features, socioeconomic factors (such as (potential) income level and occupational prestige), cultural background, moral values, religious beliefs, political orientation, (perceived) personality traits (such as conscientiousness or extraversion), behavioral characteristics (such as the level of generosity or the propensity for alcoholism), educational attainment, and IQ or general intelligence.

In the past, marriage across status lines was more common. Women typically looked for a man of high status (hypergamy), a sign of access to resources, whereas men were usually willing to marry down the socioeconomic ladder (hypogamy) if the woman was young, good-looking, and possessed domestic skills (proxies of fertility). In the modern world, people tend to desire well-educated and intelligent children; this goal is better achieved by marrying bright people with high incomes, resulting in the intensification of economic assortative mating. Indeed, better-educated parents tend to have children who are not only well-educated but also healthy and successful. For this reason, when judging the value of a potential mate, people commonly consider the other person's grasp of grammar (a proxy of socioeconomic status or educational level), teeth quality (an indicator of health and age), and self-confidence (psychological stability). The age gap between partners has also declined. In other words, men and women have become more symmetrical in the socioeconomic traits they desire in a mate.

Among the aforementioned traits, the correlations between spouses in age, race or ethnicity, religion, educational attainment, and intelligence are the most pronounced, while height is one of the most heritable, with mating partners sharing 89% of the genetic variations affecting the preference for height.
It is not unusual for couples to look alike (as if they were related). Besides assortative mating, some people are unconsciously attracted by their own faces or prefer familiar-looking ones for ease of cognitive processing. People who are emotionally close to their opposite-sex parents may be prone to unknowingly selecting mates bearing resemblance to said parents, who served as role models for what a desirable mate should be like, a phenomenon called sexual imprinting.
Public secondary school is the last time people of various backgrounds are lumped together in the same setting. After that, they begin sorting themselves out by various measures of social screening. Among those marrying late (relative to the time when they left school), socioeconomic status is especially important. In societies where the numbers of highly educated and career-minded women are increasing, the role of socioeconomic status is likely to be even more important in the future. These women generally do not choose to mate with men who are less occupationally and educationally accomplished than they are. For this reason, in societies where they outnumber men, the competition for high-quality males has been intensifying. This trend first emerged in Europe and North America but has been spreading to other places as well.
Positive assortative mating raises the chances of a given trait being passed on to the couple's offspring, strengthens the bond between the parents, and increases genetic similarity between family members, whereby in-group altruism and inclusive fitness are enhanced. That the two partners are culturally compatible reduces uncertainty in lifestyle choices and ensures social support. In some cases, homogamy can also increase the couple's fertility and the number of offspring surviving to adulthood.

On the other hand, there is evolutionary pressure against mating with people too genetically similar to oneself, such as members of the same nuclear family. In addition, children born to parents who are cousins have an increased risk of autosomal recessive genetic disorders, and this risk is higher in populations that are already highly ethnically homogeneous. Children of more distantly related cousins have less risk of these disorders, though it is still higher than in the general population. Therefore, humans tend to maximize the genetic similarity of their mates while avoiding excessive inbreeding or incest. First-cousin marriages are nowadays rare and are in fact prohibited in a number of jurisdictions worldwide. In general, humans seem to prefer mates who are (the equivalent of) second cousins or more distant relatives; genetic analyses suggest that the genomic correlation between spouses is comparable to that between second cousins. In the past, there was indeed some awareness of the dangers of inbreeding, as can be seen in legal prohibitions in some societies, while in the current era, better transportation infrastructure makes it less likely to occur. Moreover, modern transportation has diminished residential propinquity as a factor in assortative mating. But cultural anthropologists have noted that avoidance of inbreeding cannot be the sole basis for the incest taboo, because the boundaries of the incest prohibition vary widely between cultures, and not necessarily in ways that maximize the avoidance of inbreeding. A study indicated that between 1800 and 1965 in Iceland, more children and grandchildren were produced from marriages between third or fourth cousins (people with common great-great- or great-great-great-grandparents) than from other degrees of consanguinity.
While human assortative mating is usually positive, in the case of the major histocompatibility complex (MHC) on chromosome 6, humans tend to be more attracted to those who are genetically different in this region, judging from their odors. This promotes MHC heterogeneity in their offspring, making them more resistant to pathogens. Another example of negative assortative mating involves people with traits linked to high testosterone (such as analytical thinking and spatial reasoning) and people with traits linked to high estrogen (such as empathy and social skills): they generally find each other appealing.
Assortative mating is partly due to social effects. For instance, religious people are more likely to meet their potential mates in their places of worship while highly educated people typically meet their future spouses in institutions of higher learning. Nevertheless, it can have a quantitatively discernible impact upon the human genome and as such has implications for human evolution even in the presence of population stratification. Pleiotropy, or the phenomenon in which a single gene can influence multiple traits, and assortative mating are responsible for the correlations between some sexually selected traits in humans, such as height and IQ, which are weakly positively correlated. In a knowledge-based economy, educational and socioeconomic assortative mating contributes to the growth in household income inequality, as parents with higher incomes and levels of education tend to invest more in their offspring, giving them an edge later in life.
=== Dating ===
People date to assess each other's suitability as a partner in an intimate relationship or as a spouse. Dating rules may vary across cultures, and some societies replace the dating process with courtship.
=== Double standards and infidelity ===
Both men and women apply one set of standards to themselves and another to their partners. In particular, what counts as sexual contact differs depending on whether it is oneself or one's partner engaging in the act: people are less likely to consider an act infidelity when they are the ones doing it than when their partner does it. Nevertheless, women are more likely than men to be judged harshly for their promiscuity, even in the most gender-egalitarian of modern societies, such as Norway. In fact, women are the most aggressive in shaming other women for being promiscuous.
=== Flirting ===
To bond or express sexual interest, people flirt. Social anthropologist Kate Fox posits two main types of flirting: flirting for fun and flirting with intent. Flirting for fun can take place between friends, co-workers, or total strangers who wish to get to know each other. This type of flirting does not seek sexual intercourse or romantic relationship, but increases the bonds between two people.
Flirting with intent plays a role in mate-selection. The person flirting sends out signals of sexual availability to another, and hopes to see the interest returned to encourage continued flirting. Flirting can involve non-verbal signs, such as an exchange of glances, hand-touching, hair-touching, or verbal signs, such as chatting up, flattering comments, and exchange of telephone numbers to enable further contact.
=== Kissing ===
While parental kissing was common throughout human history, romantic or sexual kissing was by no means universal. Historical evidence suggests that this practice arose independently in different complex or stratified societies, such as India, Mesopotamia, and Egypt during the Bronze Age, but did not necessarily spread to other places. Kissing is also more common in colder climates. As is the case with other primates, humans kiss to determine mate suitability and enhance reproduction.
=== Matchmaking ===
Historically, one of the roles of the family was to select spouses of the opposite sex but from the same race or ethnicity and religion for the children. In many cultural traditions, a date may be arranged by a third party, who may be a family member, acquaintance, or (professional) matchmaker. Such a matchmaker could be a religious leader in a community where religious attendance is common. In some cultures, a marriage may be arranged by the couple's parents or an outside party. In some cultures, such as India, arranged marriages are common while in others, such as the United States, these are deemed unacceptable. From the 2000s onward, internet dating—a new form of matchmaking—has become increasingly popular.
== Sex differences ==
=== Short-term and long-term mating ===
Due to differential parental investment, the less-investing sex should display more intrasexual competitiveness: because they invest less in each offspring, they can reproduce at a higher frequency, which allows them to compete for more mates. The higher-investing sex, by contrast, should be choosier in selecting a mate: since they have a higher minimum parental investment, they carry greater costs with each sexual encounter, and these costs lead them to have higher selection standards.
In humans, females have the higher obligatory biological parental investment. In short-term mating, females are choosier, as they bear the greater parental investment. In long-term mating, males and females are equally choosy, as they make comparable parental investments. Therefore, male and female intrasexual competition and choosiness are equally high in long-term mating but not in short-term mating.
Since males have the lower obligatory parental investment, they should pursue a short-term mating strategy more often than females. Short-term mating is characterized by casual, low-commitment sexual relationships with many partners that do not last a long time. Additionally, males benefit more from short-term mating than females do. Because males generally pursue short-term mating strategies, their potential reproductive success is higher than that of females, but it is also more variable: males are able to have more offspring, yet only relatively few males actually have a very large number of offspring. Due to this short-term mating strategy, males have a greater desire for sexual variety, need less time to consent to intercourse, and seek short-term mates more than females.
However, females also pursue short-term mates, though their motivations differ from those of males. Females can benefit from short-term mating in numerous ways. First, it allows for a quick extraction of resources. Women in stressful situations may benefit from the protection of a male, and short-term mating is one way to obtain it, as seen in contemporary anthropological studies of asylum seekers.
One prominent hypothesis is that ancestral women selectively engaged in short-term mating with men capable of transmitting genetic benefits to their offspring such as health, disease resistance, or attractiveness (see good genes theory and sexy son hypothesis). Since women cannot inspect men's genes directly, they may have evolved to infer genetic quality from certain observable characteristics (see indicator traits). One prominent candidate for a "good genes" indicator includes absence of fluctuating asymmetry, or the degree to which men have perfect bodily symmetry. Other candidates include masculine facial features, behavioral dominance, and low vocal pitch. Evolutionary psychologists have therefore indicated that women pursuing a short-term mating strategy have higher preferences for these good gene indicators, and men who possess good genes indicators are more successful in pursuing short-term mating strategies than men who do not. Indeed, research indicates that self-perceived physical attractiveness, absence of fluctuating asymmetry, and low vocal pitch are positively related to short-term mating success in men but not in women.
Conversely, long-term mating is marked by serious committed sexual relationships with relatively few partners. While males generally pursue a short-term mating strategy when possible, females typically pursue a long-term mating strategy. Long-term strategies are characterized by extended courtships, high investment, and few sexual partners. While pursuing a long-term strategy, females are able to get resources from males over the course of the relationship. Female mating psychology is generally more focused on finding high quality mates rather than increasing the quantity of their mates, which is reflected in their pursuit of a long-term strategy. Additionally, they also benefit from higher parental investment by males. Women are thought to seek long-term partners with resources (such as shelter and food) that provide aid and support survival of offspring. To achieve this, women are thought to have evolved extended sexuality. The key benefit for males pursuing a long-term strategy is higher parental certainty. However, both sexes pursue both strategies and get benefits from both strategies. Additionally, humans typically do not pursue the extremes of either short or long-term mating strategies.
It is possible that females are more prone to psychological depression than males if they are subject to K-selection. Because women's reproductive decisions carry greater risks than men's, postpartum depression could be an evolutionarily adaptive signal to a woman that she has faced a poor investment opportunity. By the same token, some researchers have hypothesized that postpartum depression is more likely to occur in mothers who are suffering a fitness cost, in order to inform them that they should reduce or withdraw investment in their infants. Moreover, there is some evidence that postpartum depression could function as a bargaining strategy, in which parents who were not receiving adequate support from their partners withdrew their investment in order to elicit additional support. In support of this, Hagen found that postpartum depression in one spouse was related to increased levels of child investment by the other spouse.
=== Mate value ===
Mate value corresponds to an individual's likely future reproductive success. It reflects the individual's ability to produce healthy offspring in the future, based on the individual's age and sex. The mate value of each sex is determined by what the opposite sex desires in a mate, so male mate value is determined by what females desire and vice versa. Over time, individuals with higher mate values had higher reproductive success, and the qualities that make up mate value came to be perceived as physically attractive. Thus, individuals with a high mate value are perceived as more attractive by the opposite sex than those with a low mate value. Additionally, individuals with a high mate value can be choosier about their mates and reproduce more often than those with a low mate value. Due to biological differences between the sexes, it is predicted that the sexes differ in what they desire in a mate, and therefore that male and female mate values differ.
Mate value is perceived through signals and cues. Signals are characteristics that have been selected for because they offer reliable changes in receiver behavior that lead to higher reproductive success for the receiver. Conversely, cues have not been selected for to carry meaning, but instead are byproducts. However, with sexual selection, cues can become signals over time. Costly signals are ones that require intense effort for the signaler to send. Because they require high investment, costly signals are typically honest signals of underlying genetic qualities. However, signals that are not costly enough can be faked and therefore are not associated with the underlying benefits.
Evolutionary psychologists have predicted that men generally place a greater value on youth and physical attractiveness in a mate than do women. Youth is associated with reproductive value in women, because their ability to have offspring decreases dramatically over time compared to men. Therefore, males typically prefer to mate with females who are younger than themselves, except when the males themselves are still maturing in their teens. The features that men find physically attractive in women are thought to signal health and fertility. Examples of the determinants of female attractiveness include the waist-to-hip ratio and curvaceousness. While this is found across cultures, there are differences with regard to what the ideal waist-to-hip ratio is, ranging from 0.6 in China, South America, and parts of Africa to 0.8 in Cameroon and among the Hadza tribe of Tanzania. In the United States, divergent preferences of African- and European-Americans have been noted. There is also evidence of variation across time, even within a single culture or civilization. On the other hand, there is evidence that a mother's waist-to-hip ratio before pregnancy is correlated with her child's cognitive ability, as hip fat contains long-chain polyunsaturated fatty acids that are critical for the development of the fetus's brain.
One factor that affects a woman's waist-to-hip ratio is her gynoid fat distribution, in which energy is stored for pregnancy and early infant care, including breastfeeding. A female human's waist–hip ratio is at its optimal minimum during times of peak fertility—late adolescence and early adulthood, before increasing later in life.
Additionally, physical attractiveness signals genetic quality for both males and females. Men who preferentially mated with healthy, fertile, and reproductively valuable women would have left more descendants than men who did not. Since men's reproductive value does not decline as steeply with age as women's does, women are not expected to exhibit as strong a preference for youth in a mate.
However, a male's mate value is partly based upon his ability to acquire resources. This is because one of the costs of pregnancy is a limited ability to acquire resources for oneself. Resource acquisition also signals the male's ability to commit to and invest in the female and her offspring, and male resource investment increases the likelihood that the offspring will survive and reproduce. Due to this, females are typically attracted to older males, since they are likely to have a greater ability to provide resources and a higher social status. Evolutionary psychologists have speculated that women are relatively more attracted to ambition and social status in a mate because they associate these characteristics with men's access to resources. Women who preferentially mated with men capable of investing resources in themselves and their offspring, thereby ensuring their offspring's survival, would have left more descendants than women who did not. Male mate value is also determined by physical and social dominance, which are signals of high-quality genes. In addition, women tend to be attracted to men who are taller than they themselves are and who display a high degree of facial symmetry, masculine facial dimorphism, upper body strength, broad shoulders, a relatively narrow waist, and a V-shaped torso.
Body odor, which contains pheromones, is another crucial criterion in assessing the suitability of a mate. In humans, some olfactory receptors are directly connected to the parts of the brain controlling reproductive behavior. Men are able to detect women's sexual arousal by the sense of smell, and a woman's smell may increase a man's level of arousal.
=== Sexual desire ===
Sexual selection theory states that because of their lower minimum parental investment, men can achieve greater reproductive success by mating with multiple women than women can achieve by mating with multiple men. Evolutionary psychologists therefore argue that ancestral men who possessed a desire for multiple short-term sex partners, to the extent that they were capable of attracting them, would have left more descendants than men without such a desire. Ancestral women, by contrast, would have maximized reproductive success not by mating with as many men as possible, but by selectively mating with those men who were most able and willing to invest resources in their offspring. Gradually, in a bid to compete for resources from potential mates, women are thought to have evolved extended sexuality.
One classic study of college students at Florida State University found that among 96 subjects chosen for attractiveness, approached on campus by opposite-sex confederates and asked if they wanted to "go to bed" with him/her, 75% of the men said yes while 0% of the women said yes. Evidence also indicates that, across cultures, men report a greater openness to casual sex, a larger desired number of sexual partners, and a greater desire to have sex sooner in a relationship. These sex differences have been shown to be reliable across various studies and methodologies. However, there is some controversy as to the scope and interpretation of these sex differences.
Evolutionary research often indicates that men have a strong desire for casual sex, unlike women. Men are often depicted as wanting numerous female sexual partners to maximize reproductive success. Evolutionary mechanisms for short-term mating are evident today. Mate-guarding behaviors and sexual jealousy point to an evolutionary history in which sexual relations with multiple partners became a recurrent adaptive problem, while the willingness of modern-day men to have sex with attractive strangers, and the prevalence of extramarital affairs in similar frequencies cross-culturally, are evidence of an ancestral past in which polygamous mating strategies were adopted.
Flanagan and Cardwell argue that men could not pursue such a strategy without willing female partners: every time a man has a new sexual partner, the woman also has a new sexual partner. It has been proposed, therefore, that casual sex and numerous sexual partners may also confer some benefit on females. That is, they would produce more genetically diverse offspring as a result, which would increase their chances of successfully rearing children to adolescence, or independence.
Error management theory states that psychological processes should be biased to minimize the costs of making incorrect judgments and decisions. Since males generally pursue a short-term mating strategy, the costs of not having sexual intercourse are higher than those of having sexual intercourse. Therefore, the cost to a male of thinking a female does not desire sexual intercourse when in fact she does is higher than that of perceiving a female wants to have sexual intercourse when she actually does not. Conversely, since females generally pursue a long-term strategy, the costs of having sexual intercourse are higher than those of not having sexual intercourse. Therefore, the cost to a female of perceiving that a male wants to invest when he does not is higher than that of perceiving a male does not want to invest when in fact he does. Due to these asymmetric costs, males and females have developed separate psychological mechanisms whereby males over-perceive female desire for sex and females under-perceive male commitment. However, males accurately perceive female commitment and females accurately perceive male sexual interest.
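The underlying logic can be sketched in simple decision-theoretic terms (this formulation is an illustration, not one taken from the sources above). Let p be the probability that the other party is actually interested, C_miss the cost of wrongly inferring no interest, and C_fa the cost of wrongly inferring interest. Inferring interest carries the lower expected cost whenever

$$(1 - p)\, C_{\text{fa}} < p\, C_{\text{miss}}.$$

When C_miss greatly exceeds C_fa, as argued for ancestral males perceiving female sexual interest, even a small p makes over-perception the less costly bias; the reverse asymmetry, as argued for ancestral females perceiving male commitment, favors under-perception.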
=== Mate retention ===
In addition to acquiring and attracting mates, humans need to retain their mates over a certain period of time. This is especially important in long-term, pair-bonded relationships. It has been hypothesized that feelings of love have evolved to keep humans in their mating relationships. It has been shown that feelings of love motivate individuals to pursue their current partner and stray away from alternatives. Additionally, proclaiming feelings of love increases attachment and commitment to the current partner. Further, when proclaiming or recalling love and commitment, oxytocin, a hormone associated with pair-bonding activities, increases in the bloodstream. This links physiological indicators with mate retention behaviors.
Despite this link, maintaining a pair-bonded relationship can be difficult, especially around alternative mates. When presented with alternative mates with a high mate value, humans tend to view their current relationship less favorably. This occurs when males are presented with physically attractive females, and it occurs for females when they are presented with socially dominant males. However, there are psychological counter-measures to these processes. First, individuals in a committed relationship tend to devalue alternative mate options, thus finding them less attractive. Second, these individuals do not always consider potential alternatives. Instead they pay less attention to alternative mates and therefore do not undergo the devaluation process. These mechanisms tend to happen unconsciously and help the individual maintain their current relationship.
There are several strategies that an individual can use to retain a mate. First, individuals should engage in more mate retention strategies when their mate is of high value; accordingly, males with more physically attractive mates and females with mates who have more resources engage in more mate retention behaviors. Additionally, to retain their mates, males undertake resource displays and females enhance their physical appearance. Finally, jealousy helps maintain relationships. Jealousy is associated with the threat of mate loss and helps individuals engage in behaviors to keep their current mate. However, males and females differ in what cues jealousy. Since males cannot be fully certain of paternity, they become more jealous than females over sexual infidelity. Historically, however, females needed male resources for offspring investment; therefore, females become more jealous over emotional infidelity, as it threatens the devotion of resources to them and their offspring.
=== Intrasexual competition ===
For both sexes, high social status and ample access to resources are important for evolutionary success. But each sex has its own strategies for competing against members of the same sex. To safeguard their genetic interests, girls and women tend to form alliances with kin, affines (in-laws), and a few select female friends. Instead of direct competition, females tend to disguise their efforts to outclass their competitors in order to avoid physical harm and violence unless they are already of high status, in which case they can rely on greater protection and greater access to resources. Other strategies include enforcing equality within a social clique in order to minimize competition and excluding other females—that is, potential competitors—from one's social circles.
== Individual differences ==
=== Sociosexual Orientation Inventory ===
Just as there are differences between the sexes in mating strategies, there are differences within the sexes, and such within-sex variation is substantial. Individual differences in mating strategies are commonly measured using the Sociosexual Orientation Inventory (SOI), a questionnaire that includes items assessing past sexual behavior, anticipated future sexual behavior, and openness to casual sex. Higher scores on the SOI indicate a sexually unrestricted mating strategy, reflecting openness to casual sex and more partners, while lower scores indicate a sexually restricted mating strategy, reflecting a focus on higher commitment and fewer partners.
Several studies have found that scores on the SOI are related to mate preferences, with more sexually restricted individuals preferring personal/parenting qualities in a mate (e.g. responsibility and loyalty), and less sexually restricted individuals preferring qualities related to physical attractiveness and social visibility. Other studies have shown that SOI scores are related to personality traits (i.e. extraversion, erotophilia, and low agreeableness), to conspicuous consumption by men as a means of attracting women, and to increased allocation of visual attention to attractive opposite-sex faces.
=== Short-term vs. long-term mating ===
Evolutionary psychologists have proposed that individuals adopt conditional mating strategies, in which they adjust their mating tactics to relevant environmental or internal conditions, an idea called strategic pluralism. Strategic pluralism holds that humans do not pursue the same mating strategy all of the time; different motivations and environmental influences determine the mating strategy that a person will adopt, and long-term or short-term mating behaviors are triggered in the individual by the strategy currently being pursued. Therefore, not only are there differences between the sexes in long-term and short-term mating, but there are also differences within the sexes. To the extent that ancestral men were capable of pursuing short-term mating strategies with multiple women, they tended to do so, but not every male is able to pursue this option. Additionally, even though most women pursue a long-term mating strategy, some pursue a short-term strategy.
==== Differences among males ====
When possible, males will typically pursue a short-term mating strategy. The ability to do this depends on their mate value, so males with a high mate value are more likely to pursue a short-term mating strategy. High-mate-value males have been shown to have sexual intercourse earlier and more often than low-mate-value males. Self-esteem and physical attractiveness have been shown to be related to males pursuing a short-term mating strategy. Additionally, males with more testosterone have been shown to pursue more short-term strategies.
However, not all males pursue a short-term mating strategy. There are several reasons for this. First, long-term mating has its own advantages that have already been discussed. Second, while males of higher mate value and status have opportunities to pursue short-term mates, low mate value males typically do not have the same opportunities. Since females generally prefer long-term mating strategies, the few who would mate in the short-term are already paired with the high mate value males. Additionally, the benefits of short-term mating for females are only obtained through high mate value males. Therefore, low status males are more likely to pursue long-term mating strategy.
==== Differences among females ====
While more attractive males tend to pursue a short-term mating strategy, more attractive females tend to pursue a more long-term mating strategy. Additionally, younger females are more likely to pursue a short-term mating strategy, as well as those who are not satisfied with their current partner.
The ovulatory cycle has been shown to influence a female's mating strategy. In the late follicular phase, women are the most fertile in the ovulatory cycle. During this time, there is evidence that females tend to pursue a short-term oriented mating strategy over a long-term one. Additionally, female sexual desires increase as well as their attraction towards more masculine males.
Additionally, female mating strategies can change across the lifetime. In their early thirties, females experience a peak in sexual desire; this increase influences females to pursue a more long-term or short-term oriented strategy depending on the mate value of their current partner.
=== Mating plasticity ===
Research on the conditional nature of mating strategies has revealed that long-term and short-term mating preferences can be fairly plastic. Following exposure to cues that would have affected mating in the ancestral past, both men and women appear to adjust their mating preferences in ways that would have historically enhanced their fitness. Such cues include the need to care for young, danger from animals and other humans, and resource availability. Additionally, there is evidence that the female sex drive is more plastic than the male sex drive, because females are the selecting sex. Since females typically choose when and with whom to engage in sex, this plasticity of sex drive could be an effect of female mate choice.
=== Asexuality ===
While the general lack of sexual attraction—asexuality—has traditionally been viewed as a problem to be rectified, research since the 2010s has cast doubt upon this view. Research on how asexual individuals forge social relationships, including romantic ones, is ongoing.
== Environmental predictors ==
=== Culture ===
Evolutionary psychologists have investigated different strategies and environmental influences across different cultures and confirmed that men tend to report a greater preference for youth and physical attractiveness in a mate than do women. Additionally, women tend to report a greater preference for ambition and social status in a mate than do men. The specific role that culture plays in modulating sex differences in mate preferences is subject to debate. Cultural variations in mate preference can be due to the evolved differences between males and females in a given culture.
Culture also has a link to mating strategies in the form of marriage systems in the society. Specifically, pathogens have been linked to whether a society is more likely to have polygynous or monogamous marriage systems. Cultures with high pathogen stress are more likely to have polygynous marriage systems, especially exogamous polygamy systems. This is helpful for both males and females, as males obtain greater genetic diversity for their offspring and females have access to healthy males, which are typically lacking in high pathogen societies. Conversely, monogamy is often absent from high pathogen environments, but common in low pathogen environments.
Further, since physical attractiveness is thought to signal health and disease resistance, evolutionary psychologists have predicted that, in societies high in pathogen prevalence, people value attractiveness more in a mate. Indeed, research has confirmed that pathogen prevalence is associated with preferences for attractiveness across nations. Women in nations with high pathogen prevalence also show greater preferences for facial masculinity. Researchers have also reasoned that sexual contact with multiple individuals increases the risk of disease transmission, thereby increasing the costs of pursuing a short-term mating strategy. Consistent with this reasoning, higher pathogen prevalence is associated with lower national SOI scores. Finally, several studies have found that experimentally manipulating disease salience has a causal influence on attractiveness preferences and SOI scores in predicted directions.
=== Sex ratio ===
The local operational sex ratio has been shown to have an impact on mating strategies. This is defined as the ratio of marriage-age males to marriage-age females, with a high ratio representing more males and a low ratio representing more females in the local area. When there is an imbalance of the sexes, the rarer sex typically has more choice, while the more plentiful sex has to compete more strategically for the rarer sex. This leads the plentiful sex to compete in the specific areas that the rare sex finds attractive, and to adopt more of the rare sex's mating strategy. In a population with a low sex ratio, females will adopt a more short-term mating strategy and will compete more intensely on things like physical attractiveness. On the other hand, in a society with a high sex ratio, males will adopt a more long-term strategy to attract females. (See going steady.) For example, in the major metropolitan areas of China, females are generally in short supply and as such are more likely to have their preferences met should they find a mate, while many men are simply left out of the dating market. On the other hand, on the island of Manhattan and on some Western university campuses, females are in excess, and as such they compete intensely for male attention, giving rise to hookup culture and short-term dating applications such as Tinder.
In 2005, the evolutionary psychologist David Schmitt conducted a multinational survey of sexual attitudes and behaviors involving 48 countries, called the International Sexuality Description Project (ISDP). Schmitt assessed relationships between several societal-level variables and average scores on the SOI. One variable predicted to influence a nation's average SOI score was the operational sex ratio (OSR); this prediction was confirmed, with OSR significantly positively correlated with national SOI scores. Another variable that Schmitt predicted would influence SOI scores was the need for biparental care. In societies where extensive care from both parents is needed to ensure offspring survival, the costs of having sex with an uncommitted partner are much higher. Schmitt found significant negative correlations between several indices of the need for biparental care (e.g. infant mortality, child malnutrition, and low-birth-weight infants) and national SOI scores.
=== Income, education, and individual empowerment ===
During times of economic distress, women would be highly reluctant to commit to low-status men in long-term relationships, and men would delay marriage, if they ever marry at all, in order to accumulate enough resources to attract attention. Consequently, both marriage and birth rates would drop. In addition, because the number of children a woman can have over her lifetime is much smaller than that of a man, under harsh economic realities women tend to sacrifice their careers in favor of domestic duties in order to safeguard their genetic interests. Traditional gender roles would be reinforced as a result.
Historically, marriage was the best option to gain independence from one's parents, and people generally married early in life and after short periods of courtship. This is no longer true in a modern society, where people are more independent from their parents and are willing to wait longer to find an ideal mate (a "soulmate"). Consequently, the average age at first marriage has increased, while many individuals are choosing to remain single. Furthermore, in a country where few children are born out of wedlock like Japan, those who are uninterested in having children tend to not get married.
Some sex differences in mate preferences may be attenuated by national levels of gender equity and gender empowerment. For example, as women gain more access to resources, their mate preferences change. While in the past women typically needed to marry in order to ensure their own financial security, modern women are more likely to be able to achieve this on their own and as such are in a position to set high standards for potential mates. Finding a mate with access to material resources becomes less of a priority compared to finding someone with domestic skills who can provide emotional support. For this reason, female physical attractiveness and male access to resources can be labeled as "necessities", while other qualities, such as humor, can be categorized as "luxuries" in finding a mate.
Indeed, by the 2020s, young women in industrialized nations are more likely than young men to have tertiary education. In some countries, such as the United Kingdom, they even have higher employment rates and higher incomes. Meanwhile, a growing number do not see a need for pairing up. As sociologist Philip Cohen explains, "It's an advantage to not need to be married, in terms of economics or social pressure. People can improve their career status and happiness on their own terms, and they can set the terms for potential mates in the future."
In the modern era, the availability of reliable contraception has severed the tie between sexual intercourse and reproduction. Furthermore, access to the combined oral contraceptive pill has been found to change a woman's taste in men. Women not on the pill tend to prefer men whose major histocompatibility complex (MHC) genes are different from their own, whereas those on the pill tend to find men with similar MHC genes more attractive.
Since the late twentieth century, marriages across the developed world have become less stable. Divorce has become much more common, while people increasingly choose to remain single. In addition, as a culture becomes more individualistic, public support for traditional gender roles declines, and marriage is increasingly viewed as an option rather than an obligation. Since the 1960s, marriage has shifted from being focused primarily on having and raising children to the fulfillment of the adults involved. Unmarried women are no longer considered "sick" or "immoral" the way they were in the past, and neither working motherhood nor single parenthood (what used to be called illegitimacy) is socially ostracized the way it used to be, at least in the Western world.

While marriage rates have declined, the prevalence of cohabitation has gone up. Cohabitation may help determine the suitability of a mate before marriage. At the same time, significant numbers of people deem marriage an outdated institution, and an overwhelming majority think it is unnecessary for a fulfilling or happy life, though they may remain open to that option. Meanwhile, married men have become noticeably less willing to disrupt the careers of their wives.

Whereas in the past women had typically looked for men of high social status while men had not, by the late twentieth century men also looked for women of high earning potential, resulting in even more pronounced educational and economic assortative mating. More generally, higher rates of university attendance and workforce participation by women affected the marital expectations of both sexes: men and women became more symmetrical in what they desired in a mate. The share of marriages in which both spouses were of the same educational level steadily rose. Moreover, a couple in which one spouse had no more than a high-school diploma could no longer expect to earn around the national average, whereas a couple in which both partners had at least a bachelor's degree could expect to earn significantly more than the national average. People thus have a clear economic incentive to seek out a mate with at least as high a level of education in order to maximize their potential income. A societal outcome of this is that as household gender equality improved, because women had more choices, income inequality widened. Part of the reason why people increasingly married their socioeconomic and educational peers was technological change: innovations that became commercially available in the twentieth century, such as the refrigerator and the washing machine, reduced the amount of time people needed to spend on housework, which diminished the importance of domestic skills.
== Impact of and on culture ==
=== Adolescent behavior ===
From the neurological perspective, the well-known tendencies of teenagers to be emotional, impulsive, and to take high risks are due to the fact that the limbic system (responsible for emotional thought) develops faster than the prefrontal cortex (responsible for logical reasoning). From the evolutionary viewpoint, this mismatch is adaptive in that it helps young people connect with other people (by being emotional) and learn to negotiate the complexities of life (by taking risks yet being more sensitive to rewards). As a result, teenagers are more prone to feelings of fear, anxiety, and depression than adults. In order to attract potential mates, males are especially prone to take risks and showcase their athleticism, whereas females tend to direct attention to their beauty. Young males (who have the highest reproductive variance) take more risks than any other group in both experiments and observations. By undertaking risky endeavors, males are thought to signal qualities which may be directly related to one's ability to provision and protect one's family, namely physical skill, good judgment, or bravery. Social dominance, confidence, and ambition could help in competition with other males, while social dominance, ambition, and wealth might alleviate the costs of failure. In addition, traits like bravery and physical prowess may also be valued by cooperative partners due to their benefits in group hunting and warfare, thereby increasing the potential audience for risk takers. The tendency of adolescent and young-adult males to engage in risky and aggressive behaviors is known as the 'young male syndrome'. A young man's self-worth is tied to being perceived as a 'real man', and his likelihood of committing or falling victim to a violent crime peaks between his late teens and late twenties. Young females, on the other hand, are under strong peer pressure to be physically attractive, potentially leading to problems with their body image. A teenage girl or young woman's bond with her first sexual partner is often deep. In both sexes, intense adolescent intrasexual competition, amorous infatuations, and sexual experimentation are common.
Psychological research indicates the existence of a "reminiscence bump" between the ages of 10 and 30, a period important in human development, when people receive a substantial amount of feedback on their social status and reproductive desirability. Due to sex differences in mating strategies, it is more difficult for a female to alter the course of her reproductive career than it is for a male. In fact, females not only mature more quickly but were also historically more likely than males to marry and bear their first children before the age of 20. As a consequence, by late adolescence it is, from the perspective of evolution, crucial that a girl find herself a high-quality mate.
Whereas ancestral humans lived in small bands of related people of all ages, modern secondary school students share the same social environment as people of the same age from diverse backgrounds, an evolutionary novelty. In the ancestral environment, social competition during adolescence proved crucial to future social and reproductive success, hence the strong desire to be popular. Today, it is possible for people to relocate to a different place or transfer to another school. Still, the curiosity about the lives of others for the sake of comparison remains. Teenagers are also quite conformist with regard to their peers, for under ancestral conditions social ostracism was generally deadly. In 21st-century society, youths who rebel against the dominant culture or figures of authority tend to become more homogeneous with respect to their own subculture, making their behavior run counter to any claims of counterculture. This synchronization occurs even if more than two choices are available, such as multiple styles of beard rather than simply whether or not to have a beard. Mathematician Jonathan Touboul, who studies how information propagation through society affects human behavior, calls this the hipster effect.
=== Consumer psychology ===
According to psychologist Gad Saad, consumer behavior can only be truly understood in light of evolutionary psychology because consumer behavior "is rooted in a shared biological heritage based around four key Darwinian factors: survival, reproduction, kin selection, and reciprocal altruism." Consequently, products that can manipulate or enhance a person's body odor (perfumes and deodorants) and looks (cosmetics and plastic surgeries) are profitable businesses. In Brazil at the end of the twentieth century, for instance, there were more people selling Avon cosmetics than there were members of the armed forces. Similarly, in the United States, more money was spent on cosmetics and plastic surgeries than on education or social services.
One way to signal one's socioeconomic status is conspicuous consumption, in which individuals purchase luxurious items that provide little to no utility over less costly versions, thereby prioritizing self-promotion over economic sense. It is a behavior common across social classes and often involves strategic planning to maximize the audience of the display and the strength of the signal. Most signaling explanations of conspicuous consumption predict that the targets of the signal will predominately be potential mates. Among males, the information signaled is thought to go beyond genetic quality and signal the potential for investment, which can be attractive to those seeking both long-term and short-term mating strategies. Among females, a suggested benefit of conspicuous consumption in mating contexts is its hypothesized ability to demonstrate the commitment of one's partner and signal one's mate quality to rivals, both of which may help in intrasexual competition and deter mate poaching. Conspicuous consumption may also be useful for problems outside of acquiring mates. This can involve attempts at attracting other cooperative partners, who stand to gain from the signaler's ability to confer benefits should they form an alliance. As in mating contexts, there may also be benefits to intimidating rivals, thereby decreasing the likelihood of direct competition for resources in the future. Its prevalence across cultures and social classes suggests that humans may be well suited to balancing the costs and benefits of the signal.
The notion that "sex sells" is now commonly accepted and utilized by advertisers. Nevertheless, some cultures (such as France) are more receptive to sex in advertising than others (such as South Korea).
=== Sensational journalism and gossip ===
Despite common objections, sensational news stories continue to attract a large audience. A 2003 analysis of 736 stories from 1700 to 2001 by Hank Davis and S. Lyndsay McLeod reveals that these stories could be categorized according to themes with reproductive value, such as cheater detection and treatment of offspring. Davis and McLeod propose that sensational journalism serves the same purpose as gossip. Gossip is the sharing of both positive and negative information about a third person who may or may not be absent from the group, and as such is useful for acquiring potentially useful information about the social structure, rivals, and allies. It may also be used for the purposes of intrasexual competition, or the denigration of rivals in order to elevate oneself, with men gossiping about access to resources (wealth and achievement) and women about looks and reputations. However, women appear to be more likely to gossip than men and to think of it more positively than men. Furthermore, much gossip concerns social affairs. According to Frank T. McAndrew, the same psychological reasons that underlie more traditional forms of gossip carry over to gossip about "celebrities" in the modern world because, on the evolutionary timescale, the birth of celebrity culture is a recent phenomenon.
=== Romantic novels, fan fiction, and pornography ===
As defined by the Romance Writers of America, a romantic novel features "a central love story and an emotionally satisfying and optimistic ending." Many also carry erotic undertones. According to the same organization, only a minority of those who read this genre are male. Indeed, evolutionary psychologists have gained valuable insights into women's mate choice by studying romance novels popular among women, such as those sold by Harlequin. Popular contemporary female romance novels conform to strategies common among women, for example by avoiding short-term relationships, and as such pertain to their genetic interests. Five of the most common words in such novels are, in order of most to least frequent, 'love', 'bride', 'baby', 'man', and 'marriage' and the most common themes are commitment, reproduction, high-value—i.e. masculine—males, and resources. Romance novels sell rather well, with around 10,000 new titles appearing each year in the U.S. alone.
Fan fiction is the online equivalent of romance novels. During the first two decades of the 21st century, writing and reading fan fiction became a prevalent activity worldwide. Demographic data from various repositories revealed that those who read and wrote fan fiction were overwhelmingly young, in their teens and twenties, and female. For example, an analysis of the site fanfiction.net published in 2019 by data scientists Cecilia Aragon and Katie Davis showed that some 60 billion words of content were added during the previous 20 years by 10 million English-speaking people whose median age was 15½ years. Much of fan fiction concerns the romantic pairing of fictional characters of interest, or 'shipping'. Fan fiction writers base their work on various internationally popular cultural phenomena such as K-pop, Star Trek, Harry Potter, Doctor Who, and My Little Pony, known as 'canon', as well as other things they consider important to their lives, like natural disasters. Socially dominant men—the so-called "alpha males"—are the most popular among women.
Males, by contrast, are generally more interested in pornography because it carries the same cues to female fertility they look for under mating conditions. Online pornography is now ubiquitous and popularly consumed. In their book A Billion Wicked Thoughts (2011), which analyzed search-engine results, cognitive scientists Ogi Ogas and Sai Gaddam wrote, "Men's brains are designed to objectify females. The shapely curves of female ornamentation indicate how many years of healthy childbearing remain across a woman's entire lifetime." By letting her test subjects watch erotic materials of various kinds—straight sex, gay sex, and bonobos—sexologist Meredith Chivers discovered an excellent agreement between the self-reported arousal of men and the amount of blood flow to their genitals; the men were aroused only by videos of straight sex. On the other hand, Chivers found a clear mismatch between the self-reports of women and what her devices measured: although the women's genital blood flow increased in response to videos of all three categories, this physiological response alone was not enough to produce self-reported arousal. This seems to correspond with the different mating behaviors of men and women.
=== Music, film, and television ===
A 2011 study by Dawn R. Hobbs and Gordon G. Gallup of songs dating back over four centuries shows that reproductive messaging has been a common theme among the most popular of songs. Hobbs and Gallup observe that their "content analysis of these messages revealed 18 reproductive themes that read like topics taken from an outline for a course on evolutionary psychology." An overwhelming majority (about 92%) of the songs that made it to the Billboard Top 10 in 2009 contained reproductive messages. In fact, "further analyses showed that the bestselling songs in all three charts featured significantly more reproductive messages than those that failed to make it into the Top Ten." Among contemporary English-language songs, country music tends to focus on commitment, parenting, and rejection; pop music on sex appeal, reputation, short-term strategies, and fidelity assurance; and rhythm and blues (R&B) and hip hop on sex appeal, resources, the sex act, and status.
Hobbs and Gallup classified the reproductive messaging of the songs into 18 categories, including genitalia (e.g. "Baby Got Back" (1992) by Sir Mix-A-Lot), courtship displays and long-term mating ("I Wanna Hold Your Hand" (1963) by The Beatles), short-term mating ("LoveGame" (2009) by Lady Gaga), foreplay and arousal ("Sugar, Sugar" (1969) by The Archies), sex act ("Honky Tonk Women" (1969) by the Rolling Stones), sexual prowess ("Sixty Minute Man" (1951) by Billy Ward and the Dominoes), promiscuity, reputation, and derogation ("Roxanne" (1978) by the Police), commitment and fidelity ("Love Story" (2008) by Taylor Swift), access to resources ("For the Love of Money" (1973) by the O'Jays), rejection ("Red Light" (2009) by David Nail), infidelity, cheater detection, and mate poaching ("I Heard It Through the Grapevine" (1966) by Marvin Gaye), and parenting ("It Won't Be Like This For Long" (2008) by Darius Rucker).
Nevertheless, the evolutionary purpose of music, if such exists, remains unclear. Some researchers like Charles Darwin and Geoffrey Miller propose that it is a form of courtship that has evolved by means of sexual selection, whereas others, such as Steven Pinker and Gary Marcus, reject it as "auditory cheesecake"—no more than a purely cultural invention that is a by-product of evolved traits such as cognition and language.
A similar pattern is found in popular movies, where themes of survival (fighting epic battles), reproduction (courtship), kin selection (treatment of family members), and altruism (saving a stranger's life) are ubiquitous. Indeed, as in the case with novels or mythology, the number of basic plots is rather small. However, even though the standard assumption in many movies and stories is that people are looking to get married or at least desire a long-term partner, this is not necessarily true in real life.
=== Online dating ===
Online dating services have made it much easier for those who would otherwise never meet because their social circles do not intersect (perfect strangers) to find one another and pursue a romantic relationship together. They are especially useful for middle-aged individuals, who have fewer options in real life compared to those in their 20s. Compared to heterosexual couples, same-sex couples are much more likely to have met online. Such platforms also offer goldmines of information for social scientists studying human mating behavior. Nevertheless, as of 2017, no new pattern has been identified; to the contrary, scientists have only found the strengthening of gender stereotypes, namely the attention to a prospective mate's socioeconomic status among women, the preference for youth and beauty among men, and deliberate self-misrepresentation among both sexes. No longer do people looking for a mate have to confine themselves to their own backgrounds, though in practice the data still indicate assortative mating.
Concerns that online dating makes people more "superficial" by giving them an incentive to judge one another based on looks are unfounded, since this is how humans normally behave. Moreover, while there are online dating sites geared towards short-term sexual relationships (hookups), others are designed to help those looking for a long-term arrangement, including marriage. Individuals who pursue the latter option are no less successful in finding the right mates.
=== Politics and religions ===
In general, the emotion of disgust can be divided into three categories: pathogen disgust, sexual disgust, and moral disgust. Sexual disgust leads to the avoidance of individuals and behaviors that jeopardize one's long-term mating success. Moral disgust is revulsion at socially abnormal behaviors.
Some evolutionary psychologists have argued that mating strategies can influence political attitudes. According to this perspective, different mating strategies are in direct strategic conflict. For instance, the stability of long-term partnerships may be threatened by the availability of short-term sexual opportunities. Therefore, public policy measures that impose costs on casual sex may benefit people pursuing long-term mating strategies by reducing the availability of short-term mating opportunities outside of committed relationships. One public policy measure that imposes costs on people pursuing short-term mating strategies, and may thereby appeal to sexually restricted individuals, is the banning of abortion. In a doctoral dissertation, the psychologist Jason Weeden conducted statistical analyses on public and undergraduate datasets supporting the hypothesis that attitudes towards abortion are more strongly predicted by mating-relevant variables than by variables related to views on the sanctity of life.
Weeden and colleagues have also argued that attitudes towards drug legalization are driven by individual differences in mating strategies. Insofar as sexually restricted individuals associate recreational drug use with promiscuity, they may be motivated to oppose drug legalization. Consistent with this, one study found that the strongest predictor of attitudes towards drug legalization was scores on the SOI. This relationship remained strong even when controlling for personality traits, political orientation, and moral values. By contrast, nonsexual variables typically associated with attitudes towards drug legalization were strongly attenuated or eliminated when controlling for SOI and other sexuality-related measures. These findings were replicated in Belgium, Japan, and the Netherlands. Weeden and colleagues have made similar arguments and have conducted similar analyses in regard to religiosity; that is, religious institutions may function to facilitate high-fertility, monogamous mating and reproductive strategies.
On the other hand, there is evidence that as a society becomes wealthier, more urbanized, and more secular, religion becomes increasingly irrelevant to matchmaking.
== See also ==
Mate choice in humans
Recent human evolution
Parental investment in humans
Sociosexuality
Online dating service
Alternative mating strategy
== References ==
== External links ==
Victorian mate choice by evolutionary psychologist Geoffrey Miller (8:46). Transcript.
Muus, Harriet. "Evolutionary Ethics and Mate Selection". PsyArXiv. Center for Open Science. doi:10.31234/osf.io/c659q.
Cultural selection theory is the study of cultural change modelled on theories of evolutionary biology. Cultural selection theory has so far never been a separate discipline. However, it has been proposed that
human culture exhibits key Darwinian evolutionary properties, and "the structure of a science of cultural evolution should share fundamental features with the structure of the science of biological evolution".
In addition to Darwin's work, the term historically covers a diverse range of theories from both the sciences and the humanities, including those of Lamarck; politics and economics, e.g. Bagehot; anthropology, e.g. Edward B. Tylor; literature, e.g. Ferdinand Brunetière; evolutionary ethics, e.g. Leslie Stephen; sociology, e.g. Albert Keller; anthropology, e.g. Bronislaw Malinowski; the biosciences, e.g. Alex Mesoudi; geography, e.g. Richard Ormrod; sociobiology and biodiversity, e.g. E.O. Wilson; computer programming, e.g. Richard Brodie; and other fields, e.g. Neoevolutionism and Evolutionary archaeology.
== Outline ==
Crozier suggests that Cultural Selection emerges from three bases: Social contagion theory, Evolutionary epistemology, and Memetics.
This theory is an extension of memetics. In memetics, memes, much like biology's genes, are informational units passed through generations of culture. However, unlike memetics, cultural selection theory moves past these isolated "memes" to encompass selection processes, including continuous and quantitative parameters. Two other approaches to cultural selection theory are social contagion and evolutionary epistemology.
Social contagion theory’s epidemiological approach construes social entities as analogous to parasites that are transmitted virally through a population of biological organisms. Evolutionary epistemology's focus lies in causally connecting evolutionary biology and rationality by generating explanations for why traits for rational behaviour or thought patterns would have been selected for in a species’ evolutionary history. Memetics models cultural change after population genetics, taking cultural units to be analogous to genes.
A good example of this theory can be seen in the reasons large businesses tend to grow larger. These include the benefits of mass production and distribution, international advertising, and more funds for product development. These self-amplifying effects, known as economies of scale, give rise to selection effects which have a quantitative nature, unlike the qualitative effects described by the theory of memetics.
On the whole, cultural selection theory embraces the inherent complexity of cultural change and vouches for a systemic, rather than deconstructionist, approach to analyzing the way a society's norms and values change.
== Criticism ==
Cultural selection theory faces many objections due to the lack of evidence that the mechanisms of natural selection carry over to the structural mechanisms of cultural systems. Major objections against the theory concern Lamarckianism, the genotype-phenotype distinction, common hereditary architecture, the biological analogue for cultural units, and environmental interactions. The objection concerning the biological analogue for cultural units breaks down into three parts. The first concerns strict analogues: a biological unit (such as a trait) should correspond to a cultural unit, allowing the established biological model and the newer cultural model to be correlated. The second concerns trait analogues: analogues are sometimes viewed the wrong way, one analogue can be mistaken for another, and the line between two analogues is often unclear. The third concerns the virus analogue: the capacities of a virus differ from those of an organism, so the virus and the organism should be examined independently.
Some have argued that in order for the cultural selection theory to stand strong against objections, conclusive and explicit case studies are required. There needs to be empirical support to clarify the interaction between cultural systems and their environments. Crozier conducted a study on the acoustic adaptation of bird songs. This research study provided empirical evidence to support and strengthen the cultural selection theory.
Like Darwin's theory of natural selection, cultural selection theory has three phases: variation, reproduction, and selection. Variation gives rise to a subject, reproduction is responsible for its spread, and selection depends on the factors that control that spread.
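The three phases can be made concrete with a small simulation. The following sketch is only an illustrative toy model, not a method drawn from the literature: the variant names, the transmission weights standing in for selection, and the mutation rate standing in for variation are all assumptions chosen for the example.

```python
import random
from collections import Counter

# Toy model of cultural selection; variant names, weights, and the
# mutation rate are illustrative assumptions, not empirical values.
VARIANTS = ["A", "B", "C"]
TRANSMISSION_WEIGHT = {"A": 1.0, "B": 1.2, "C": 0.8}  # selection: biased copying
MUTATION_RATE = 0.01                                   # variation: copying errors

def next_generation(population):
    weights = [TRANSMISSION_WEIGHT[v] for v in population]
    new_population = []
    for _ in population:  # reproduction: each member copies someone from the previous generation
        copied = random.choices(population, weights=weights, k=1)[0]
        if random.random() < MUTATION_RATE:  # variation: occasional copying error
            copied = random.choice(VARIANTS)
        new_population.append(copied)
    return new_population

population = [random.choice(VARIANTS) for _ in range(1000)]
for _ in range(50):
    population = next_generation(population)

print(Counter(population))  # the highest-weight variant tends to dominate
```

Running the loop for a few dozen generations typically shows the variant with the highest transmission weight spreading through the population, while mutation keeps a small amount of variation present.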
== See also ==
Biocultural evolution – Theory of human behavior
Dual inheritance theory – Theory of human behavior
Evolutionary economics – A field in economics that considers economic evolution
Evolutionary epistemology – Ambiguous term applied to several concepts
Evolutionary psychology – Branch of psychology
Leitkultur – Concept in German conservatism
Meme – Cultural idea which spreads through imitation
Multiple discovery – Hypothesis about scientific discoveries and inventions
Behavioral contagion, also known as Social contagion – Spontaneous, unsolicited and uncritical imitation of another's behavior
Sociocultural evolution – Evolution of societies
Universal Darwinism – Application of Darwinian theory to other fields
== References ==
The evolution of nervous systems dates back to the first development of nervous systems in animals (or metazoans). Neurons developed as specialized electrical signaling cells in multicellular animals, adapting the mechanism of action potentials present in motile single-celled and colonial eukaryotes. Primitive systems, like those found in protists, use chemical signalling for movement and sensitivity; data suggest these were precursors to modern neural cell types and their synapses. When some animals started living a mobile lifestyle and eating larger food particles externally, they developed ciliated epithelia, contractile muscles, and coordinating and sensory neurons in their outer layer.
Simple nerve nets seen in acoels (basal bilaterians) and cnidarians are thought to be the ancestral condition for the Planulozoa (bilaterians plus cnidarians and, perhaps, placozoans). A more complex nerve net with simple nerve cords is present in ancient animals called ctenophores but no nerves, thus no nervous systems, are present in another group of ancient animals, the sponges (Porifera). Due to the common presence and similarity of some neural genes in these ancient animals and their protist relatives, the controversy of whether ctenophores or sponges diverged earlier, and the recent discovery of "neuroid" cells specialized in coordination of digestive choanocytes in Spongilla, the origin of neurons in the phylogenetic tree of life is still disputed. Further cephalization and nerve cord (ventral and dorsal) evolution occurred many times independently in bilaterians.
== Neural precursors ==
Action potentials, which are necessary for neural activity, evolved in single-celled eukaryotes. These use calcium rather than sodium action potentials, but the mechanism was probably adapted into neural electrical signaling in multicellular animals. In some colonial eukaryotes, such as Obelia, electrical signals propagate not only through neural nets, but also through epithelial cells in the shared digestive system of the colony. Several non-metazoan phyla, including choanoflagellates, filasterea, and mesomycetozoea, have been found to have synaptic protein homologs, including secretory SNAREs, Shank, and Homer. In choanoflagellates and mesomycetozoea, these proteins are upregulated during colonial phases, suggesting the importance of these proto-synaptic proteins for cell-to-cell communication. The history of ideas on how neurons and the first nervous systems emerged in evolution has been discussed in a 2015 book by Michel Anctil. In 2022, two proteins, SMIM20 and NUCB2, which are precursors of the neuropeptides phoenixin and nesfatin-1 respectively, were found to have deep homology across all the lineages that preceded creatures with central nervous systems (bilaterians, cnidarians, ctenophores, and sponges) as well as in choanoflagellates.
== Sponges ==
Sponges have no cells connected to each other by synaptic junctions, that is, no neurons, and therefore no nervous system. They do, however, have homologs of many genes that play key roles in synaptic function. Recent studies have shown that sponge cells express a group of proteins that cluster together to form a structure resembling a postsynaptic density (the signal-receiving part of a synapse). However, the function of this structure is currently unclear. Although sponge cells do not show synaptic transmission, they do communicate with each other via calcium waves and other impulses, which mediate some simple actions such as whole-body contraction. Sponge cells also communicate with neighboring cells through vesicular transport across highly dense regions of the cell membranes. These vesicles carry ions and other signaling molecules, but provide no true synaptic function.
== Nerve nets ==
Jellyfish, comb jellies, and related animals have diffuse nerve nets rather than a central nervous system. In most jellyfish the nerve net is spread more or less evenly across the body; in comb jellies it is concentrated near the mouth. The nerve nets consist of sensory neurons that pick up chemical, tactile, and visual signals, motor neurons that can activate contractions of the body wall, and intermediate neurons that detect patterns of activity in the sensory neurons and send signals to groups of motor neurons as a result. In some cases groups of intermediate neurons are clustered into discrete ganglia.
The development of the nervous system in radiata is relatively unstructured. Unlike bilaterians, radiata only have two primordial cell layers, endoderm and ectoderm. Neurons are generated from a special set of ectodermal precursor cells, which also serve as precursors for every other ectodermal cell type.
== Nerve cords ==
The vast majority of existing animals are bilaterians, meaning animals with left and right sides that are approximate mirror images of each other. All bilateria are thought to have descended from a common wormlike ancestor that appeared in the Cryogenian period, 700–650 million years ago. The fundamental bilaterian body form is a tube with a hollow gut cavity running from mouth to anus, and a nerve cord with an especially large ganglion at the front, called the "brain".
Even mammals, including humans, show the segmented bilaterian body plan at the level of the nervous system. The spinal cord contains a series of segmental ganglia, each giving rise to motor and sensory nerves that innervate a portion of the body surface and underlying musculature. On the limbs, the layout of the innervation pattern is complex, but on the trunk it gives rise to a series of narrow bands. The top three segments belong to the brain, giving rise to the forebrain, midbrain, and hindbrain.
Bilaterians can be divided, based on events that occur very early in embryonic development, into two groups (superphyla) called protostomes and deuterostomes. Deuterostomes include vertebrates as well as echinoderms and hemichordates (mainly acorn worms). Protostomes, the more diverse group, include arthropods, molluscs, and numerous types of worms. There is a basic difference between the two groups in the placement of the nervous system within the body: protostomes possess a nerve cord on the ventral (usually bottom) side of the body, whereas in deuterostomes the nerve cord is on the dorsal (usually top) side. In fact, numerous aspects of the body are inverted between the two groups, including the expression patterns of several genes that show dorsal-to-ventral gradients. Some anatomists now consider that the bodies of protostomes and deuterostomes are "flipped over" with respect to each other, a hypothesis that was first proposed by Geoffroy Saint-Hilaire for insects in comparison to vertebrates. Thus insects, for example, have nerve cords that run along the ventral midline of the body, while all vertebrates have spinal cords that run along the dorsal midline. But recent molecular data from different protostomes and deuterostomes reject this scenario and suggest that nerve cords independently evolved in both.
=== Annelida ===
Earthworms have dual nerve cords running along the length of the body and merging at the tail and the mouth. These nerve cords are connected by transverse nerves like the rungs of a ladder. These transverse nerves help coordinate the two sides of the animal. Two ganglia at the head end function similar to a simple brain. Photoreceptors on the animal's eyespots provide sensory information on light and dark.
=== Nematoda ===
The nervous system of one very small worm, the roundworm Caenorhabditis elegans, has been mapped out down to the synaptic level. Every neuron and its cellular lineage has been recorded and most, if not all, of the neural connections are known. In this species, the nervous system is sexually dimorphic; the nervous systems of the two sexes, males and hermaphrodites, have different numbers of neurons and groups of neurons that perform sex-specific functions. In C. elegans, males have exactly 383 neurons, while hermaphrodites have exactly 302 neurons.
=== Arthropods ===
Arthropods, such as insects and crustaceans, have a nervous system made up of a series of ganglia, connected by a ventral nerve cord made up of two parallel connectives running along the length of the belly. Typically, each body segment has one ganglion on each side, though some ganglia are fused to form the brain and other large ganglia. The head segment contains the brain, also known as the supraesophageal ganglion. In the insect nervous system, the brain is anatomically divided into the protocerebrum, deutocerebrum, and tritocerebrum. Immediately behind the brain is the subesophageal ganglion, which is composed of three pairs of fused ganglia. It controls the mouthparts, the salivary glands and certain muscles. Many arthropods have well-developed sensory organs, including compound eyes for vision and antennae for olfaction and pheromone sensation. The sensory information from these organs is processed by the brain.
In insects, many neurons have cell bodies that are positioned at the edge of the brain and are electrically passive—the cell bodies serve only to provide metabolic support and do not participate in signalling. A protoplasmic fiber runs from the cell body and branches profusely, with some parts transmitting signals and other parts receiving signals. Thus, most parts of the insect brain have passive cell bodies arranged around the periphery, while the neural signal processing takes place in a tangle of protoplasmic fibers called neuropil, in the interior.
== Evolution of central nervous systems ==
=== Evolution of the human brain ===
There has been a gradual increase in brain volume as the ancestors of modern humans progressed along the human timeline of evolution (see Homininae), starting from about 600 cm3 in Homo habilis up to 1736 cm3 in Homo neanderthalensis. Thus, in general there is a correlation between brain volume and intelligence. However, modern Homo sapiens have a smaller brain volume (brain size 1250 cm3) than neanderthals; women have a brain volume slightly smaller than men, and the Flores hominids (Homo floresiensis), nicknamed "hobbits", had a cranial capacity of about 380 cm3, about a third of the Homo erectus average and considered small for a chimpanzee. It is proposed that they evolved from H. erectus as a case of insular dwarfism. In spite of their threefold smaller brain there is evidence that H. floresiensis used fire and made stone tools as sophisticated as those of their proposed ancestor, H. erectus. Iain Davidson summarizes the opposite evolutionary constraints on human brain size as "As large as you need and as small as you can". The human brain has evolved around the metabolic, environmental, and social needs that the species has dealt with throughout its existence. As hominid species evolved with increased brain size and processing power, the overall metabolic need increased. Compared to chimpanzees, humans consume more calories from animals than from plants. While not certain, studies have shown that this shift in diet is due to the increased need for the fatty acids more readily found in animal products. These fatty acids are essential for brain maintenance and development. Other factors to consider are the need for social interaction and how hominids have interacted with their environments over time.
Brain evolution can be studied using endocasts, a branch of neurology and paleontology called paleoneurology.
== See also ==
Evolutionary developmental biology
Evolutionary neuroscience
== References ==
Affective neuroscience is the study of how the brain processes emotions. This field combines neuroscience with the psychological study of personality, emotion, and mood. The basis of emotions and what emotions are remains an issue of debate within the field of affective neuroscience.
The term "affective neuroscience" was coined by neuroscientist Jaak Panksepp, at a time when cognitive neuroscience focused on parts of psychology that did not include emotion, such as attention or memory.
== Brain areas related to emotion ==
Emotions are thought to be related to activity in brain areas that direct our attention, motivate our behavior, and help us make decisions about our environment. Early research on emotions and the brain was conducted by Paul Broca, James Papez, and Paul D. MacLean. Their work suggests that emotion is related to a group of structures in the center of the brain called the limbic system. The limbic system is made up of the following brain structures:
=== Limbic system ===
Amygdala – The amygdala is made up of two small, round structures located anterior to (closer to the forehead than) the hippocampi, near the temporal poles. The amygdalae are involved in detecting and learning which parts of our surroundings are important and have emotional significance. They are critical for the production of emotion. They are known to be very important for negative emotions, especially fear. Amygdala activation often happens when we see a potential threat. The amygdala uses our past and related memories to help us make decisions about what is currently happening.
Thalamus – The thalamus is involved in combining sensory and motor signals and then sending that information to the cerebral cortex. The thalamus plays an important role in regulating sleep and wakefulness.
Hypothalamus – The hypothalamus is involved in producing the physical responses associated with emotions (for example, crying). The hypothalamus is also part of reward circuits, which are associated with positive emotions.
Hippocampus – The hippocampus is a structure that is mainly involved in memory. It works to make new memories and also connects senses such as visual input, smell or sound to memories. The hippocampus allows long term memories to be stored and retrieves them when necessary.
Fornix – The fornix is the main connection between the hippocampus and the mammillary bodies. It is important for spatial memory functions, episodic memory, and executive functions.
Mammillary body – Mammillary bodies are important for recollective memory. They are located near the brainstem and cerebrum.
Olfactory bulb – The olfactory bulbs are the first cranial nerves. They are involved in smell (olfaction) and memory that is connected with specific smells.
Cingulate gyrus – The cingulate gyrus is located above the corpus callosum. The parts of the cingulate gyrus have different functions, and are involved with affect, visceromotor control, response selection, skeletomotor control, visuospatial processing, and memory access. The anterior cingulate cortex is important for conscious, subjective emotional awareness as well as motivation. The subgenual cingulate is more active during both experimentally induced sadness and during depressive episodes.
Research has shown the limbic system is directly related to emotion, but there are other brain areas and structures that are important for producing and processing emotion.
=== Other brain structures ===
Basal ganglia – Basal ganglia are groups of nuclei found on either side of the thalamus. Basal ganglia play an important role in motivation, action selection and reward learning.
Orbitofrontal cortex – The orbitofrontal cortex is involved in decision making and helping us understand how emotions have influenced our decision making.
Prefrontal cortex – The prefrontal cortex is the front of the brain, behind the forehead and above the eyes. It plays a role in regulating emotion and behavior by anticipating consequences. The prefrontal cortex also plays an important role in delayed gratification by maintaining emotions over time and organizing behavior toward specific goals.
The ventromedial prefrontal cortex (vmPFC) is a portion of the prefrontal cortex that has been shown to have a significant influence on emotion regulation. Studies have found high vmPFC activation when participants are presented with highly emotional stimuli, suggesting that this portion of the brain is important for processing high emotional arousal.
Ventral striatum – The ventral striatum is a group of structures thought to play a role in emotion and behavior. An area of the ventral striatum known as the nucleus accumbens is involved in the experience of pleasure. It is common for individuals with addictions to exhibit increased activity in this area when they are exposed to the object of their addiction.
Insula – This area of the brain plays a significant role in bodily emotions due to its connections to other neural structures that control automatic functions such as heart rate, breathing, and digestion. The insula is also implicated in empathy and awareness of emotion.
Cerebellum – The cerebellum has many functions. It plays a very important role in emotion perception and emotion attribution. Lesion studies have shown that cerebellar dysfunction can decrease positive emotions. Over the course of evolution, the cerebellum may have evolved into a circuit that helps reduce fear in order to enhance survival. The cerebellum may also play a regulatory role in the neural response to rewarding stimuli, such as money, addictive drugs, and orgasm.
A "cerebellar cognitive affective syndrome" has been described resulting in personality change and how the person shows emotions.
Lateral prefrontal cortex – Using our emotions, the lateral prefrontal cortex is responsible for helping us reach our goals by suppressing harmful behaviors or selecting productive ones.
Primary sensorimotor cortex – The somatosensory cortex is involved in each stage of emotional processing. We use it to collect information that helps us in identifying and creating emotion, and then regulate that emotion once it has started.
Temporal cortex – This brain area is important in processing sound, speech, and language use, as well as helping us understand other people's faces and emotions based on facial cues. The temporal cortex is responsible for determining the quality and content of our emotional memories.
Brainstem – The brainstem is composed of three parts: ascending (sensory information), descending (motor information), and modulatory. The brainstem takes information from our environment (ascending) and creates a bodily response (descending) such as crying. The information from the environment and our body's responses to the information we receive is combined in the modulatory part of the brain stem and we are then able to label an emotion.
== Right hemisphere ==
Many theories about the role of the right hemisphere in emotion have resulted in several models of emotional functioning. After observing decreased emotional processing after right hemisphere injuries, C.K. Mills hypothesized that emotion is directly related to the right hemisphere. In 1992, researchers found that emotional expression and understanding may be controlled by smaller brain structures in the right hemisphere. These findings were the basis for the right hemisphere hypothesis and the valence hypothesis.
=== Right hemisphere hypothesis ===
It is believed that the right hemisphere is more specialized in processing emotions than the left hemisphere. The right hemisphere is associated with nonverbal, synthetic, integrative, holistic and gestaltic mental strategies. As demonstrated by patients who have increased spatial neglect when damage affects the right brain rather than the left brain, the right hemisphere is more connected to subcortical systems of autonomic arousal and attention. Right hemisphere disorders have been associated with abnormal patterns of autonomic nervous system responses. These findings suggest the right hemisphere and subcortical brain areas are closely related.
=== Valence hypothesis ===
According to the valence hypothesis, although the right hemisphere is involved in emotion, it is primarily involved in the processing of negative emotions, while the left hemisphere is involved in processing positive emotions. In one explanation, negative emotions are processed by the right brain, while positive emotions are processed by the left. An alternative explanation is that the right hemisphere is dominant when it comes to feeling both positive and negative emotions. Recent studies indicate that the frontal lobes of both hemispheres play an active role in emotions, while the parietal and temporal lobes process them. Depression has been associated with decreased right parietal lobe activity, while anxiety has been associated with increased right parietal lobe activity. Based on the original valence model, increasingly complex models have been developed as a result of the increasing understanding of the different hemispheres.
== Cognitive neuroscience ==
While emotions are integral to thought processes, cognition was investigated without regard to emotion until the late 1990s, with the focus instead on non-emotional processes such as memory, attention, perception, problem solving, and mental imagery. Cognitive neuroscience and affective neuroscience emerged as separate fields for studying the neural basis of non-emotional and emotional processes, respectively. Despite the fact that the fields are classified according to how the brain processes cognition and emotion, the neural and mental mechanisms behind emotional and non-emotional processes often overlap.
== Cognitive neuroscience tasks in affective neuroscience research ==
=== Emotion go/no-go ===
Emotion go/no-go tasks are used to study behavioral inhibition, especially how it is influenced by emotion. A "go" cue tells the participant to respond rapidly, but a "no-go" cue tells them to withhold a response. Because the "go" cue occurs more frequently, it can be used to measure how well a subject suppresses a response under different emotional conditions.
This task is often used in combination with neuroimaging in healthy individuals and patients with affective disorders to identify relevant brain functions associated with emotional regulation. Several studies, including go/no-go studies, suggest that sections of the prefrontal cortex are involved in controlling emotional responses to stimuli during inhibition.
=== Emotional Stroop ===
Adapted from the Stroop task, the emotional Stroop test measures attention to emotional stimuli. In this task, participants are instructed to name the ink color of words while ignoring their meanings. Generally, people have more trouble detaching their attention from words with an affective meaning than from neutral words. Several studies have demonstrated that naming the color of neutral words results in a quicker response.
Selective attention to negative or threatening stimuli, which is often related to psychological disorders, is commonly tested with this task. Different mental disorders have been associated with specific attentional biases. Participants with spider phobia, for example, tend to show greater interference for spider-related words than for other negatively charged words. Similar findings have been found for threat words related to other anxiety disorders. Even so, other studies have questioned these conclusions. When the words are matched for emotionality, anxious participants in some studies show the Stroop interference effect for both negative and positive words. In other words, the specificity effects of words for various disorders may be primarily due to their conceptual relation to the disorder's concerns rather than their emotionality.
=== Ekman 60 faces task ===
The Ekman faces task is used to measure emotion recognition of six basic emotions. Black and white photographs of 10 actors (6 male, 4 female) are presented, with each actor displaying each emotion. Participants are usually asked to respond quickly with the name of the displayed emotion. The task is a common tool to study deficits in emotion regulation in patients with dementia, Parkinson's, and other cognitively degenerative disorders. The task has been used to analyze recognition errors in disorders such as borderline personality disorder, schizophrenia, and bipolar disorder.
=== Dot probe (emotion) ===
The emotional dot-probe paradigm is a task used to assess selective visual attention to and failure to detach attention from affective stimuli. The paradigm begins with a fixation cross at the center of a screen. An emotional stimulus and a neutral stimulus appear side by side, after which a dot appears behind either the neutral stimulus (incongruent condition) or the affective stimulus (congruent condition). Participants are asked to indicate when they see this dot, and response latency is measured. Dots that appear on the same side of the screen as the image the participant was looking at will be identified more quickly. Thus, it is possible to discern which object the participant was attending to by subtracting the reaction time to respond to congruent versus incongruent trials.
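Since the bias measure is just a difference of mean reaction times between incongruent and congruent trials, it can be computed with a few lines of code. The sketch below uses hypothetical reaction times and a hypothetical trial format; neither comes from any particular dot-probe software.

```python
from statistics import mean

# Hypothetical dot-probe trials: (condition, reaction time in milliseconds).
# "congruent": the dot appeared behind the emotional stimulus;
# "incongruent": the dot appeared behind the neutral stimulus.
trials = [
    ("congruent", 412), ("incongruent", 455),
    ("congruent", 398), ("incongruent", 431),
    ("congruent", 420), ("incongruent", 467),
]

congruent_rts = [rt for condition, rt in trials if condition == "congruent"]
incongruent_rts = [rt for condition, rt in trials if condition == "incongruent"]

# A positive score means faster responses on congruent trials, i.e. attention
# was drawn toward the emotional stimulus (vigilance toward threat).
bias_score = mean(incongruent_rts) - mean(congruent_rts)
print(f"Attentional bias score: {bias_score:.1f} ms")
```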
The best documented research with the dot probe paradigm involves attention to threat related stimuli, such as fearful faces, in individuals with anxiety disorders. Anxious individuals tend to respond more quickly to congruent trials, which may indicate vigilance to threat and/or failure to detach attention from threatening stimuli. A specificity effect of attention has also been noted, with individuals attending selectively to threats related to their particular disorder. For example, those with social phobia selectively attend to social threats but not physical threats. However, this specificity may be even more nuanced. Participants with obsessive-compulsive disorder symptoms initially show attentional bias to compulsive threat, but this bias is attenuated in later trials due to habituation to the threat stimuli.
=== Fear potentiated startle ===
Fear-potentiated startle (FPS) has been utilized as a psychophysiological index of fear reaction in both animals and humans. FPS is most often assessed through the magnitude of the eyeblink startle reflex, which can be measured by electromyography. This eyeblink reflex is an automatic defensive reaction to an abrupt elicitor, making it an objective indicator of fear. Typical FPS paradigms involve bursts of noise or abrupt flashes of light transmitted while an individual attends to a set of stimuli. Startle reflexes have been shown to be modulated by emotion. For example, healthy participants tend to show enhanced startle responses while viewing negatively valenced images and attenuated startle while viewing positively valenced images, as compared with neutral images.
The startle response to a particular stimulus is greater under conditions of threat. A common example given to indicate this phenomenon is that one's startle response to a flash of light will be greater when walking in a dangerous neighborhood at night than it would under safer conditions. In laboratory studies, the threat of receiving shock is enough to potentiate startle, even without any actual shock.
Fear potentiated startle paradigms are often used to study fear learning and extinction in individuals with post-traumatic stress disorder (PTSD) and other anxiety disorders. In fear conditioning studies, an initially neutral stimulus is repeatedly paired with an aversive one, borrowing from classical conditioning. FPS studies have demonstrated that PTSD patients have enhanced startle responses during both danger cues and neutral/safety cues as compared with healthy participants.
== Learning ==
Affect plays many roles during learning. Deep emotional attachment to a subject area allows a deeper understanding of the material, so learning is more likely to occur and to last. The match between the emotions evoked while reading and the emotions portrayed in the content affects comprehension: someone who is feeling sad understands a sad passage better than someone feeling happy. Therefore, a student's emotions play an important role during the learning process.
Emotion can be embodied or perceived from words read on a page or in a facial expression. Neuroimaging studies using fMRI have demonstrated that the same area of the brain that is activated when feeling disgust is activated when observing another's disgust. In a traditional learning environment, the teacher's facial expression can play a critical role in language acquisition. Showing a fearful facial expression when reading passages that contain fearful tones facilitates students learning of the meaning of certain vocabulary words and comprehension of the passage.
== Models ==
The neurobiological basis of emotion is still disputed. The existence of basic emotions and their defining attributes represents a long lasting and yet unsettled issue in psychology. The available research suggests that the neurobiological existence of basic emotions is still tenable and heuristically seminal, pending some reformulation.
=== Basic emotions ===
These approaches hypothesize that emotion categories (including happiness, sadness, fear, anger, and disgust) are biologically basic. In this view, emotions are inherited, biologically based modules that cannot be separated into more basic psychological components. Models following this approach hypothesize that all mental states belonging to a single emotional category can be consistently and specifically localized to either a single brain region or a defined network of brain regions. Each basic emotion category also shares other universal characteristics: distinct facial behavior, physiology, subjective experience and accompanying thoughts and memories.
=== Psychological constructionist approaches ===
This approach to emotion hypothesizes that emotions like happiness, sadness, fear, anger and disgust (and many others) are constructed mental states that occur when brain systems work together. In this view, networks of brain regions underlie psychological operations (e.g., language, attention, etc.) that interact to produce emotion, perception, and cognition. One psychological operation critical for emotion is the network of brain regions that underlie valence (feeling pleasant/unpleasant) and arousal (feeling activated and energized). Emotions emerge when neural systems underlying different psychological operations interact (not just those involved in valence and arousal), producing distributed patterns of activation across the brain. Because emotions emerge from more basic components, heterogeneity affects each emotion category; for example, a person can experience many different kinds of fear, which feel differently, and which correspond to different neural patterns in the brain.
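One ingredient of such models, the description of core affect along valence and arousal dimensions, can be pictured as a simple two-dimensional space. The sketch below is a deliberate oversimplification intended only to illustrate that representation; the cut-offs and labels are assumptions, and constructionist accounts hold that full-blown emotions emerge from distributed interactions across many systems, not from a lookup like this.

```python
# Illustrative only: a crude two-dimensional description of core affect.
# The cut-offs and wording are assumptions made for this sketch; they are
# not part of any published constructionist model.

def describe_core_affect(valence: float, arousal: float) -> str:
    """Describe a core-affect state given valence and arousal, each in [-1, 1]."""
    pleasantness = "pleasant" if valence >= 0 else "unpleasant"
    activation = "activated" if arousal >= 0 else "deactivated"
    return f"{pleasantness} and {activation}"

print(describe_core_affect(0.8, 0.6))    # roughly excitement-like states
print(describe_core_affect(-0.7, 0.5))   # roughly fear- or anger-like states
print(describe_core_affect(-0.6, -0.4))  # roughly sadness-like states
```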
== Aging ==
People typically associate aging with a decline in the functioning of all mental processing abilities; however, this is not the case for emotion regulation. Older adults typically have a stronger drive to maintain and improve their emotional well-being, which leads them to use emotion regulation skills that provide higher satisfaction in life.
=== Role of the vmPFC in emotion regulation of older adults ===
The ventromedial prefrontal cortex (vmPFC) has a significant influence on emotion regulation, especially regarding highly emotionally arousing stimuli. Compared to other areas of the prefrontal cortex (PFC), the vmPFC loses volume at a much lower rate. Because of this, an older person's emotional regulation abilities are not heavily impacted by the brain changes associated with aging. Additionally, the anterior cingulate cortex (ACC) is an important area of the brain for emotion regulation. The ACC has proven to be a key player in emotion regulation not just in young adults, but also in older adults. In older adults, connections between the vmPFC and the ACC are important for regulating emotions. This connection was most salient when negative emotions were reappraised, demonstrating that older adults use the vmPFC to regulate their emotions in a more positive manner. Despite other areas of the brain decreasing in functionality as humans age, the connection between the vmPFC and the ACC remains strong enough to reappraise negative emotions into more positive ones. This differs from younger adults, who rely more on other areas of the PFC.
=== Neuropsychology behind older adults emotion regulation differences ===
As people age, most cognitive functions decline. This is not the case for emotion regulation. A study conducted by Carstensen and colleagues (2000) found that as people increase in age, so does their ability to regulate their emotions. It is important to note that older adults having better emotion regulation skills does not mean they live more stable daily lives. In fact, they tend to have more unstable negative emotions, especially in comparison to the stability of their positive emotions. The major differences observed in how older adults and younger adults regulate their emotions when negative emotional stimuli are present can be explained by several theories.
=== Theories of emotion regulation in aging ===
==== Passive method of emotion regulation ====
How older adults handle emotionally salient events or stimuli is often vastly different from how younger, and even middle-aged, adults do. There do not appear to be many differences in the ways that younger, middle-aged, and older adults handle social situations; however, when a social situation becomes emotionally charged, differences emerge. When intense emotions were evoked in a social situation, older adults tended to respond in a more passive manner than middle-aged adults. They also tend to rely more on their previous problem-solving skills than younger and middle-aged adults do. This is because as people age, there tends to be a shift in preference toward maintaining a more positive emotional affect. In fact, the frequency of negative emotions felt by older adults appears to decrease until they reach the age of 60, at which point the decrease stops. It is important to note that while the frequency of negative emotion decreases with age, the intensity of the emotions experienced does not change. Additionally, emotional satisfaction is not lower just because older adults experience negative emotions less frequently.
==== Socioemotional selectivity theory ====
Carstensen (2003) hypothesized that the reason older adults tend to have better emotion regulation skills than younger adults is explained by socioemotional selectivity theory. This theory highlights the role of social interactions in the ability to regulate emotions. Social interactions, while often positive, can sometimes lead to negative emotional arousal. Because older adults have been alive longer, they have denser social networks, which produces a marked increase in social interactions that cause positive emotional arousal. If they experience a negative emotional reaction to a social event, they are likely to be able to pair it with something that is more positively emotionally salient. This makes the negative emotion less potent and therefore increases their hedonic perspective on life.
== Meta-analyses ==
A meta-analysis is a statistical approach to synthesizing results across multiple studies. The included studies investigated healthy, unmedicated adults and used subtraction analysis to examine brain areas that were more active during emotional processing than during a neutral (control) condition.
=== Phan et al. 2002 ===
In the first neuroimaging meta-analysis of emotion, Phan et al. (2002) analyzed the results of 55 peer-reviewed studies published between January 1990 and December 2000 to determine whether the emotions of fear, sadness, disgust, anger, and happiness were consistently associated with activity in specific brain regions. All studies used fMRI or PET techniques to investigate higher-order mental processing of emotion (studies of low-order sensory or motor processes were excluded). The authors tabulated the number of studies that reported activation in specific brain regions, and a chi-squared analysis was conducted for each region. Two regions showed a statistically significant association. In the amygdala, 66% of studies inducing fear reported activity in this region, compared with ~20% of studies inducing happiness and ~15% of studies inducing sadness (with no reported activations for anger or disgust). In the subcallosal cingulate, 46% of studies inducing sadness reported activity in this region, compared with ~20% inducing happiness and ~20% inducing anger. This pattern of clear discriminability between emotion categories was in fact rare, with other patterns occurring in limbic, paralimbic, and uni/heteromodal regions. Brain regions implicated across discrete emotions included the basal ganglia (~60% of studies inducing happiness and ~60% of studies inducing disgust reported activity in this region) and the medial prefrontal cortex (happiness ~60%, anger ~55%, sadness ~40%, disgust ~40%, and fear ~30%).
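As an illustrative sketch (the exact computation used by Phan et al. is not reproduced here), a chi-squared analysis of this kind forms, for each brain region, a table counting how many studies in each emotion category did or did not report activation there, and tests whether activation is reported at the same rate across categories:

\chi^2 = \sum_{\text{cells}} \frac{(O - E)^2}{E}

where O is the observed count in a cell and E is the count expected under the hypothesis of equal reporting rates across the emotion categories. A large value of \chi^2 indicates that reported activation in that region is unevenly distributed across emotions, as in the amygdala–fear and subcallosal cingulate–sadness examples above.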
=== Murphy et al. 2003 ===
Murphy et al. (2003) analyzed 106 peer-reviewed studies published between January 1994 and December 2001 to examine the evidence for regional specialization of discrete emotions (fear, disgust, anger, happiness and sadness) across a larger set of studies. Studies included in the meta-analysis measured activity in the whole brain and in regions of interest (activity in individual regions of particular interest to the study). 3-D Kolmogorov-Smirnov (KS3) statistics were used to compare the rough spatial distributions of 3-D activation patterns in order to determine whether statistically significant activations were specific to particular brain regions for the emotional categories. A pattern of consistent, regionally specific activation was identified for four region-emotion pairs: the amygdala with fear (~40% of studies), the insula with disgust (~70%), the globus pallidus with disgust (~70%), and the lateral orbitofrontal cortex with anger (80%). Other regions showed different patterns of activation across categories. For example, both the dorsal medial prefrontal cortex and the rostral anterior cingulate cortex showed consistent activity across emotions (happiness ~50%, sadness ~50%, anger ~40%, fear ~30%, and disgust ~20%).
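For reference (a generic formulation, not the study's exact implementation), the Kolmogorov-Smirnov statistic in one dimension compares two empirical cumulative distribution functions,

D = \sup_x \left| F_1(x) - F_2(x) \right|

where F_1 and F_2 are the cumulative distributions of reported activation locations for two conditions; a large D indicates that the two spatial distributions differ. KS3 extends this idea to the three spatial dimensions of activation coordinates, allowing rough comparisons of where activations cluster for different emotion categories.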
=== Barrett et al. 2006 ===
Barrett et al. (2006) examined 161 studies published between 1990 and 2001, comparing the consistency and specificity of prior meta-analytic findings for each notional basic emotion. Consistent neural patterns were defined by brain regions showing increased activity for a specific emotion (relative to a neutral control condition), regardless of the method of induction used (for example, visual vs. auditory cue). Specific neural patterns were defined as separate circuits for one emotion vs. the other emotions (for example, the fear circuit must be discriminable from the anger circuit, although both may include common brain regions). In general, the results supported the consistency findings of Phan et al. and Murphy et al., but not specificity. Consistency was determined through the comparison of chi-squared analyses that revealed whether the proportion of studies reporting activation during one emotion was significantly higher than the proportion of studies reporting activation during the other emotions. Specificity was determined through the comparison of emotion-category brain localizations by contrasting activations in key regions that were specific to particular emotions. Increased amygdala activation during fear was the most consistently reported across induction methods (but not specific). Both prior meta-analyses associated the anterior cingulate cortex with sadness, although this finding was less consistent (across induction methods) and was not specific. Both meta-analyses found that disgust was associated with the basal ganglia, but these findings were neither consistent nor specific. Neither consistent nor specific activity was observed across the meta-analyses for anger or happiness. This meta-analysis introduced the concept of the basic, irreducible elements of emotional life as dimensions such as approach and avoidance.
=== Kober et al. 2008 ===
Kober et al. (2008) reviewed 162 neuroimaging studies published between 1990 and 2005 in order to determine whether specific brain regions were activated both when an emotion is experienced directly and when it is experienced indirectly through someone else. According to the study, six different functional groups of brain regions showed similar activation patterns, and the psychological functions of each group were discussed in more basic terms. These regions may also play a role in processing visual information and paying attention to emotional signals.
=== Vytal et al. 2010 ===
Vytal et al. (2010) examined 83 neuroimaging studies published between 1993 and 2008 to examine whether neuroimaging evidence supports biologically discrete basic emotions (i.e. fear, anger, disgust, happiness, and sadness). Consistency analyses identified brain regions associated with individual emotions, and discriminability analyses identified brain regions that were differentially active under contrasting pairs of emotions. The meta-analysis examined PET or fMRI studies that reported whole-brain analyses identifying significant activations for at least one of the five emotions relative to a neutral or control condition. The authors used activation likelihood estimation (ALE) to perform spatially sensitive, voxel-wise (i.e., sensitive to the spatial properties of individual voxels) statistical comparisons across studies. This technique allows for direct statistical comparison between activation maps associated with each discrete emotion, so discriminability between the five discrete emotion categories was assessed on a more precise spatial scale than in prior meta-analyses.
Consistency was first assessed by comparing the cross-study ALE map for each emotion to ALE maps generated by random permutations. Discriminability was assessed by pair-wise contrasts of emotion maps. Consistent and discriminable activation patterns were observed for the five categories.
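In a standard formulation of ALE (given here as a sketch; the specific implementation used by Vytal et al. may differ in detail), each reported activation focus is modeled as a three-dimensional Gaussian probability distribution, and the per-study modeled activation maps are combined voxel-wise as a union of probabilities:

\mathrm{ALE}(v) = 1 - \prod_{i=1}^{N} \left( 1 - \mathrm{MA}_i(v) \right)

where \mathrm{MA}_i(v) is the modeled probability that study i reports activation at voxel v and N is the number of studies. The resulting ALE map is then compared against a null distribution, for example one generated by random permutation of foci as described above, to identify voxels showing above-chance convergence across studies.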
=== Lindquist et al. 2012 ===
Lindquist et al. (2012) reviewed 91 PET and fMRI studies published between January 1990 and December 2007. The studies used induction methods to elicit fear, sadness, disgust, anger, and happiness, and the goal was to compare basic emotion approaches with psychological constructionist approaches.
It was found that many brain regions activated consistently or selectively for one emotion category when experienced or perceived. As predicted by constructionist models, no region demonstrated functional specificity for fear, disgust, happiness, sadness, or anger.
The authors suggest that the traditional assignments of certain brain areas to particular emotions are incorrect, and that these areas instead correspond to multiple emotion categories. There is some evidence that the amygdala, anterior insula, and orbitofrontal cortex all contribute to "core affect", the basic feelings of pleasure or discomfort.
The anterior cingulate and the dorsolateral prefrontal cortex play a key role in attention, which is closely related to core affect. By using sensory information, the anterior cingulate directs attention and motor responses. According to psychological constructionist theory, emotions are conceptualizations connecting the world and the body, and the dorsolateral prefrontal cortex facilitates executive attention. As well as playing an active role in conceptualizing, the prefrontal cortex and hippocampus also simulate previous experiences. In several studies, the ventrolateral prefrontal cortex, which supports language, was consistently active during emotion perception and experience.
== See also ==
Affective science
Affective spectrum
Affect (psychology)
Endocrinology
Experience machine
Feeling
Music therapy
Neuroendocrinology
Outline of brain mapping
Outline of the human brain
Psychiatry
Psychophysiology
S.M. (patient)
Wirehead (science fiction)
== References ==
== Further reading ==
Davidson, R.J.; Irwin, W. (1999). "The functional neuroanatomy of emotion and affective style". Trends in Cognitive Sciences. 3 (1): 11–21. doi:10.1016/s1364-6613(98)01265-0. PMID 10234222. S2CID 30912026.
Panksepp, J. (1992). "A critical role for affective neuroscience in resolving what is basic about basic emotions". Psychological Review. 99 (3): 554–60. doi:10.1037/0033-295x.99.3.554. PMID 1502276.
Harmon-Jones E, & Winkielman P. (Eds.) Social Neuroscience: Integrating Biological and Psychological Explanations of Social Behavior. New York: Guilford Publications.
Cacioppo, J.T., & Berntson, G.G. (2005). Social Neuroscience. Psychology Press.
Cacioppo, J.T., Tassinary, L.G., & Berntson, G.G. (2007). Handbook of Psychophysiology. Cambridge University Press.
Panksepp J. (1998). Affective Neuroscience: The Foundations of Human and Animal Emotions (Series in Affective Science). Oxford University Press, New York, New York.
Brain and Cognition, Vol. 52, No. 1, pp. 1–133 (June, 2003). Special Issue on Affective Neuroscience.
Evolutionary neuroscience is the scientific study of the evolution of nervous systems. Evolutionary neuroscientists investigate the evolution and natural history of nervous system structure, functions and emergent properties. The field draws on concepts and findings from both neuroscience and evolutionary biology. Historically, most empirical work has been in the area of comparative neuroanatomy, and modern studies often make use of phylogenetic comparative methods. Selective breeding and experimental evolution approaches are also being used more frequently.
Conceptually and theoretically, the field is related to fields as diverse as cognitive genomics, neurogenetics, developmental neuroscience, neuroethology, comparative psychology, evo-devo, behavioral neuroscience, cognitive neuroscience, behavioral ecology, biological anthropology and sociobiology.
Evolutionary neuroscientists examine changes in genes, anatomy, physiology, and behavior to study the evolution of changes in the brain. They study a multitude of processes including the evolution of vocal, visual, auditory, taste, and learning systems as well as language evolution and development. In addition, evolutionary neuroscientists study the evolution of specific areas or structures in the brain such as the amygdala, forebrain and cerebellum as well as the motor or visual cortex.
== History ==
Studies of the brain began in ancient Egyptian times, but studies in the field of evolutionary neuroscience began after the publication of Darwin's On the Origin of Species in 1859. At that time, brain evolution was largely viewed in relation to the now-discredited scala naturae, and phylogeny and the evolution of the brain were still viewed as linear. During the early 20th century, there were several prevailing theories about evolution: Darwinism was based on the principles of natural selection and variation, Lamarckism on the passing down of acquired traits, orthogenesis on the assumption that a tendency towards perfection steers evolution, and saltationism on the argument that discontinuous variation creates new species. Darwin's theory became the most widely accepted and allowed people to start thinking about the way animals and their brains evolve.
The 1936 book The Comparative Anatomy of the Nervous System of Vertebrates Including Man by the Dutch neurologist C.U. Ariëns Kappers (first published in German in 1921) was a landmark publication in the field. Following the Evolutionary Synthesis, the study of comparative neuroanatomy was conducted with an evolutionary view, and modern studies incorporate developmental genetics. It is now accepted that phylogenetic changes occur independently between species over time and are not linear. It is also believed that an increase in brain size correlates with an increase in neural centers and behavioral complexity.
=== Major arguments ===
Over time, several arguments have come to define the history of evolutionary neuroscience. The first is the argument between E. G. St. Hilaire and G. Cuvier over the topic of "common plan versus diversity". St. Hilaire argued that all animals are built on a single plan or archetype, stressing the importance of homologies between organisms, while Cuvier believed that the structure of organs was determined by their function and that knowledge of the function of one organ could help discover the functions of other organs; he argued that there were at least four different archetypes. After Darwin, the idea of evolution became more accepted, and so did St. Hilaire's idea of homologous structures. The second major argument is that of Aristotle's scala naturae (scale of nature) and the great chain of being versus the phylogenetic bush. The scala naturae, later also called the phylogenetic scale, was based on the premise that phylogenies are linear or like a scale, while the phylogenetic bush argument held that phylogenies are not linear and more closely resemble a bush – the currently accepted view. A third major argument dealt with the size of the brain and whether relative size or absolute size is more relevant in determining function. In the late 18th century, it was determined that the brain-to-body ratio decreases as body size increases. More recently, however, there has been more focus on absolute brain size, as this scales with internal structures and functions, with the degree of structural complexity, and with the amount of white matter in the brain, all suggesting that absolute size is a much better predictor of brain function. Finally, a fourth argument is that of natural selection (Darwinism) versus developmental constraints (concerted evolution). It is now accepted that the evolution of development is what causes adult species to show differences, and evolutionary neuroscientists maintain that many aspects of brain function and structure are conserved across species.
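The observation that the brain-to-body ratio decreases as body size increases reflects allometric scaling. A commonly cited approximate form (a general sketch, not tied to any one study in this article) is

E = c \, P^{\alpha}, \qquad \alpha < 1

where E is brain mass, P is body mass, and c is a taxon-dependent constant. Because the exponent \alpha (often estimated at roughly 0.6–0.8 across mammals) is less than 1, brain mass grows more slowly than body mass, so the ratio E/P falls in larger animals; departures of an actual brain mass from the value predicted by such a curve underlie relative-size measures such as the encephalization quotient.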
=== Techniques ===
Throughout its history, evolutionary neuroscience has depended on developments in biological theory and techniques, and the field has been shaped by new techniques that allow for the discovery and examination of parts of the nervous system. In 1873, Camillo Golgi devised the silver nitrate method, which allowed the brain to be described at the cellular level rather than simply at the gross level. Santiago Ramón y Cajal and his brother Pedro Ramón used this method to analyze numerous parts of brains, broadening the field of comparative neuroanatomy. In the second half of the 19th century, new techniques allowed scientists to identify neuronal cell groups and fiber bundles in brains. In 1885, Vittorio Marchi discovered a staining technique that let scientists see induced axonal degeneration in myelinated axons; in 1950, the original Nauta procedure allowed for more accurate identification of degenerating fibers; and in the 1970s, several molecular tracers were discovered that are still used in experiments today. In the last 20 years, cladistics has also become a useful tool for looking at variation in the brain.
== Evolution of brains ==
For much of Earth's early history, the planet was populated by brainless creatures; among them was the amphioxus, which can be traced back as far as 550 million years ago. The amphioxus had a significantly simpler way of life, which made a brain unnecessary. In place of a brain, the prehistoric amphioxus had a limited nervous system composed of only a small collection of cells. These cells were used efficiently: many of the sensing cells were intertwined with the cells of its very simple system for movement, allowing it to propel itself through the water and react without much processing, while the remaining cells were used to detect light, compensating for the fact that it had no eyes. It also did not need a sense of hearing. Even though the amphioxus had limited senses, it did not need more to survive, as its life was mainly dedicated to sitting on the seafloor and eating.(pp 1–2) Although the amphioxus' "brain" might seem severely underdeveloped compared to the human brain, it was well suited to its environment, which has allowed the animal to prosper for millions of years.
Although many scientists once assumed that the brain evolved to achieve an ability to think, such a view is today considered a great misconception. Around 500 million years ago, the Earth entered the Cambrian period, when hunting became a new concern for survival in an animal's environment. At this point, animals became sensitive to the presence of other animals, which could serve as food. Although hunting did not inherently require a brain, it was one of the main pressures that pushed the development of one, as organisms progressed to develop advanced sensory systems.(pp 2, 4–5)
In response to progressively complicated surroundings, where competition for survival arose between animals with brains, animals had to learn to manage their energy.(pp 5–6) As creatures acquired a variety of senses for perception, they developed allostasis, which played the role of an early brain by prompting the body to draw on past experiences to improve prediction. Because prediction beats reaction, organisms that planned their manoeuvres were more likely to survive than those that did not, and managing energy adequately was equally favoured by nature. Animals that had not developed allostasis were at a disadvantage in exploration, foraging and reproduction, and at a higher risk of death.(pp 7–8)
As allostasis continued to develop in animals, their bodies likewise continued to evolve in size and complexity. They progressively developed cardiovascular, respiratory and immune systems to survive in their environments, which required something more complex than a limited collection of cells to regulate the body. This encouraged the nervous systems of many creatures to develop into a brain, which was sizeable and strikingly similar to how most animal brains look today.(pp 9–10)
== Evolution of the human brain ==
Darwin, in The Descent of Man, stipulated that the mind evolved simultaneously with the body. According to his theory, all humans have a barbaric core that they learn to deal with.(p 17) Darwin's theory allowed people to start thinking about the way animals and their brains evolve.
=== Reptile brain ===
Plato contemplated the idea that all humans were once lizards, with similar survival needs such as feeding, fighting and mating; in the classical era he first described this concept as the "lizard mind" – the deepest layer and one of three parts of his conception of a three-part human mind. In the 20th century P. MacLean developed a similar, modern triune brain theory.(pp 14–16)
Recent research in molecular genetics has demonstrated evidence that there is no difference between the neurons of reptiles and nonhuman mammals and those of humans. Instead, newer research speculates that all mammals, and potentially reptiles, birds and some species of fish, evolved from a common organizational pattern. This research reinforces the idea that human brains are not structurally different from those of many other organisms.(pp 19–21)
The cerebral cortex of reptiles resembles that of mammals, although simplified. Although the evolution and function of the human cerebral cortex is still shrouded in mystery, we know that it is the most dramatically changed part of the brain during recent evolution. The reptilian brain, which arose around 300 million years ago, served all our basic urges and instincts like fighting, reproducing, and mating. The emotional (paleomammalian) brain evolved roughly 100 million years later and gave us the ability to feel emotion. Eventually, a rational part developed that controls our inner animal.
=== Visual perception ===
Vision allows humans to process the world surrounding them to a certain extent. Through the wavelengths of light, the human brain can associate what it sees with a specific event. Although the brain perceives its surroundings at a specific moment, it also predicts upcoming changes in the environment.(pp 66, 72) Once it has noticed them, the brain begins to prepare itself to encounter the new scenario by attempting to develop an adequate response. This is accomplished by using the data the brain has at its disposal, such as past experiences and memories, to form a proper response.(pp 66–67) However, the brain sometimes fails to predict accurately, which means that the mind perceives a false image. Such an incorrect image occurs when the brain uses an inadequate memory to respond to what it is facing, meaning that the memory does not correspond to the real scenario.(pp 75–76)
Research about how visual perception has developed in evolution is today best understood through studying present-day primates since the organization of the brain cannot be ascertained only by analyzing fossilized skulls.
The brain interprets visual information in the occipital lobe, a region at the back of the brain that contains the visual cortex. Together with the thalamus, which relays visual input to the cortex, these are the two main actors in processing visual information. The process of interpreting information has proven to be more complex than "what you see is what you get", and misinterpreting visual information is more common than previously believed.
As knowledge of the human brain has evolved, researchers discover that our visual perception is much closer to a construction of the brain than a direct "photograph" of what is in front of us. This can lead to misperceiving certain situations or elements in the brain's attempt to keep us safe. For example, an on-edge soldier believes a young child with a stick is a grown man with a gun, as the brain's sympathetic system, or fight-or-flight mode, is activated.
An example of this phenomenon can be observed in the rabbit-duck illusion. Depending on how the image is looked at, the brain can interpret the image of a rabbit, or a duck. There is no right or wrong answer, but it is proof that what is seen may not be the reality of the situation.
=== Auditory perception ===
The organization of the human auditory cortex is divided into core, belt, and parabelt. This closely resembles that of present-day primates.
Auditory perception closely resembles visual perception: the brain is wired to act on what it expects to experience. The sense of hearing helps situate an individual, but it also gives hints about what else is around them. If something moves, the listener knows approximately where it is, and from its tone the brain can predict what moved. If someone were to hear leaves rustling in a forest, the brain might interpret the sound as an animal that could pose a danger, when it is simply another person walking. The brain can predict many things based on what it is interpreting; however, those predictions may not all be true.
=== Language development ===
Evidence of a rich cognitive life in primate relatives of humans is extensive, and a wide range of specific behaviours in line with Darwinian theory is well documented. However, until recently, research disregarded nonhuman primates in the context of evolutionary linguistics, primarily because, unlike vocal-learning birds, our closest relatives seem to lack imitative abilities. Evolutionarily speaking, there is considerable evidence that a genetic groundwork for language has been in place for millions of years, as with many other capabilities and behaviours observed today.
While evolutionary linguists agree on the fact that volitional control over vocalizing and expressing language is a quite recent leap in the history of the human race, that is not to say auditory perception is a recent development as well. Research has shown substantial evidence of well-defined neural pathways linking cortices to organize auditory perception in the brain. Thus, the issue lies in our abilities to imitate sounds.
Beyond the fact that primates may be poorly equipped to learn sounds, studies have shown them to learn and use gestures far better. Visual cues and motoric pathways developed millions of years earlier in our evolution, which seems to be one reason for our earlier ability to understand and use gestures.
=== Cognitive specializations ===
Evolution shows how certain environments and surroundings favor the development of specific cognitive functions of the brain that help an animal, or in this case a human, live successfully in that environment.
Cognitive specialization is a theory in which cognitive functions, such as the ability to communicate socially, can be passed down genetically to offspring, benefiting species in the process of natural selection. In relation to the human brain, it has been theorized that very specific social skills apart from language, such as trust, vulnerability, navigation, and self-awareness, can also be passed on to offspring.
== Researchers ==
== See also ==
== References ==
== External links ==
"Journal guidelines". Brain, Behavior, and Evolution (journal webpage).
Publisher's book description. Comparative Vertebrate Neuroanatomy: Evolution and Adaptation. Butler, Ann B. & Hodos, William (book authors). Wiley. August 2005. ISBN 978-0-471-73383-6 – via wiley.com.
"Publisher's book and author descriptions". Principles of Brain Evolution (publisher webpage). Streidter, G.F. (book author). Sinauer Associates. October 2004. Archived from the original on 11 November 2005. Retrieved 12 November 2024 – via Sinauer.com.{{cite web}}: CS1 maint: others (link) | Wikipedia/Evolutionary_neuroscience |
Type physicalism (also known as reductive materialism, type identity theory, mind–brain identity theory, and identity theory of mind) is a physicalist theory in the philosophy of mind. It asserts that mental events can be grouped into types, and can then be correlated with types of physical events in the brain. For example, one type of mental event, such as "mental pains" will, presumably, turn out to be describing one type of physical event (like C-fiber firings).
Type physicalism is contrasted with token identity physicalism, which argues that mental events are unlikely to have "steady" or categorical biological correlates. These positions make use of the philosophical type–token distinction (e.g., two persons having the same "type" of car need not mean that they share a "token", a single vehicle). Type physicalism can now be understood to argue that there is an identity between types (any mental type is identical with some physical type), whereas token identity physicalism says that every token mental state/event/property is identical to some brain state/event/property.
There are other ways a physicalist might criticize type physicalism; eliminative materialism and revisionary materialism question whether science is currently using the best categorisations. Proponents of these views argue that in the same way that talk of demonic possession was questioned with scientific advance, categorisations like "pain" may need to be revised.
== Background ==
According to U. T. Place, one of the popularizers of the idea of type-identity in the 1950s and 1960s, the idea of type-identity physicalism originated in the 1930s with the psychologist E. G. Boring and took nearly a quarter of a century to gain acceptance from the philosophical community. Boring, in a book entitled The Physical Dimensions of Consciousness (1933) wrote that:
To the author a perfect correlation is identity. Two events that always occur together at the same time in the same place, without any temporal or spatial differentiation at all, are not two events but the same event. The mind-body correlations as formulated at present, do not admit of spatial correlation, so they reduce to matters of simple correlation in time. The need for identification is no less urgent in this case (p. 16, quoted in Place [unpublished]).
The barrier to the acceptance of any such vision of the mind, according to Place, was that philosophers and logicians had not yet taken a substantial interest in questions of identity and referential identification in general. The dominant epistemology of the logical positivists at that time was phenomenalism, in the guise of the theory of sense-data. Indeed, Boring himself subscribed to the phenomenalist creed, attempting to reconcile it with an identity theory and this resulted in a reductio ad absurdum of the identity theory, since brain states would have turned out, on this analysis, to be identical to colors, shapes, tones and other sensory experiences.
The revival of interest in the work of Gottlob Frege and his ideas of sense and reference on the part of Herbert Feigl and J. J. C. Smart, along with the discrediting of phenomenalism through the influence of the later Wittgenstein and J. L. Austin, led to a more tolerant climate toward physicalistic and realist ideas. Logical behaviorism emerged as a serious contender to take the place of the Cartesian "ghost in the machine" and, although not lasting very long as a dominant position on the mind/body problem, its elimination of the whole realm of internal mental events was strongly influential in the formation and acceptance of the thesis of type identity.
== Versions of type identity theory ==
There were actually subtle but interesting differences between the three most widely credited formulations of the type-identity thesis, those of Place, Feigl and Smart which were published in several articles in the late 1950s. However, all of the versions share the central idea that the mind is identical to something physical.
=== U. T. Place ===
U. T. Place's (1956) notion of the relation of identity was derived from Bertrand Russell's distinction among several types of is statements: the is of identity, the is of equality and the is of composition. Place's version of the relation of identity is more accurately described as a relation of composition. For Place, higher-level mental events are composed out of lower-level physical events and will eventually be analytically reduced to these. So, to the objection that "sensations" do not mean the same thing as "mental processes", Place could simply reply with the example that "lightning" does not mean the same thing as "electrical discharge" since we determine that something is lightning by looking and seeing it, whereas we determine that something is an electrical discharge through experimentation and testing. Nevertheless, "lightning is an electrical discharge" is true since the one is composed of the other.
=== Feigl and Smart ===
For Feigl (1957) and Smart (1959), on the other hand, the identity was to be interpreted as the identity between the referents of two descriptions (senses) which referred to the same thing, as in "the morning star" and "the evening star" both referring to Venus, a necessary identity. So to the objection about the lack of equality of meaning between "sensation" and "brain process", their response was to invoke this Fregean distinction: "sensations" and "brain" processes do indeed mean different things but they refer to the same physical phenomenon. Moreover, "sensations are brain processes" is a contingent, not a necessary, identity.
== Criticism and replies ==
=== Multiple realizability ===
One of the most influential and common objections to the type identity theory is the argument from multiple realizability. The multiple realizability thesis asserts that mental states can be realized in multiple kinds of systems, not just brains. Since the identity theory identifies mental events with certain brain states, it does not allow for mental states to be realized in organisms or computational systems that do not have a brain. This is in effect an argument that the identity theory is too narrow because it does not allow for organisms without brains to have mental states. However, token identity (where only particular tokens of mental states are identical with particular tokens of physical events) and functionalism both account for multiple realizability.
The response of type identity theorists, such as Smart, to this objection is that, while it may be true that mental events are multiply realizable, this does not demonstrate the falsity of type identity. As Smart states:
"The functionalist second order [causal] state is a state of having some first order state or other which causes or is caused by the behavior to which the functionalist alludes. In this way we have a second order type theory".
The fundamental point is that it is extremely difficult to determine where, on the continuum of first order processes, type identity ends and merely token identities begin. Take Quine's example of English country gardens. In such gardens, the tops of hedges are cut into various shapes, for example the shape of an elf. We can make generalizations over the type elf-shaped hedge only if we abstract away from the concrete details of the individual twigs and branches of each hedge. So, whether we say that two things are of the same type or are tokens of the same type because of subtle differences is just a matter of descriptive abstraction. The type-token distinction is not all or nothing.
Hilary Putnam essentially rejects functionalism because, he believes, it is indeed a second-order type identity theory. Putnam uses multiple realizability against functionalism itself, suggesting that mental events (or kinds, in Putnam's terminology) may be diversely implemented by diverse functional/computational kinds; there may be only a token identification between particular mental kinds and particular functional kinds. Putnam, and many others who have followed him, now tend to identify themselves as generically non-reductive physicalists. Putnam's invocation of multiple realizability does not, of course, directly answer the problem raised by Smart with respect to useful generalizations over types and the flexible nature of the type-token distinction in relation to causal taxonomies in science.
=== Qualia ===
Another frequent objection is that type identity theories fail to account for phenomenal mental states (or qualia), such as having a pain, feeling sad, experiencing nausea. (Qualia are merely the subjective qualities of conscious experience. An example is the way the pain of jarring one's elbow feels to the individual.) Arguments can be found in Saul Kripke and David Chalmers, for example, according to which the identity theorist cannot identify phenomenal mental states with brain states (or any other physical state for that matter) because one has a sort of direct awareness of the nature of such qualitative mental states, and their nature is qualitative in a way that brain states are not. A famous formulation of the qualia objection comes from Frank Jackson in the form of the Mary's room thought experiment. Let us suppose, Jackson suggests, that a particularly brilliant super-scientist named Mary has been locked away in a completely black-and-white room her entire life. Over the years in her colour-deprived world she has studied (via black-and-white books and television) the sciences of neurophysiology, vision and electromagnetics to their fullest extent; eventually Mary learns all the physical facts there are to know about experiencing colour. When Mary is released from her room and experiences colour for the first time, does she learn something new? If we answer "yes" (as Jackson suggests we do) to this question, then we have supposedly denied the truth of type physicalism, for if Mary has exhausted all the physical facts about experiencing colour prior to her release, then her subsequently acquiring some new piece of information about colour upon experiencing its quale reveals that there must be something about the experience of colour which is not captured by the physicalist picture.
The type identity theorist, such as Smart, attempts to explain away such phenomena by insisting that the experiential properties of mental events are topic-neutral. The concept of topic-neutral terms and expressions goes back to Gilbert Ryle, who identified such topic-neutral terms as "if", "or", "not", "because" and "and." If one were to hear these terms alone in the course of a conversation, it would be impossible to tell whether the topic under discussion concerned geology, physics, history, gardening, or selling pizza. For the identity theorist, sense-data and qualia are not real things in the brain (or the physical world in general) but are more like "the average electrician." The average electrician can be further analyzed and explained in terms of real electricians but is not itself a real electrician.
=== Other ===
Type physicalism has also been criticized from an illusionist perspective. Keith Frankish writes that it is "an unstable position, continually on the verge of collapsing into illusionism. The central problem, of course, is that phenomenal properties seem too weird to yield to physical explanation. They resist functional analysis and float free of whatever physical mechanisms are posited to explain them." He proposes instead that phenomenality is an illusion, arguing that it is therefore the illusion rather than phenomenal consciousness itself that requires explanation.
== See also ==
== Notes ==
== References and further reading ==
Chalmers, David (1996). The Conscious Mind, Oxford University Press, New York.
Feigl, Herbert (1958). "The 'Mental' and the 'Physical'" in Feigl, H., Scriven, M. and Maxwell, G. (eds.). Concepts, Theories and the Mind-Body Problem, Minneapolis, Minnesota Studies in the Philosophy of Science, Vol. 2, reprinted with a Postscript in Feigl 1967.
Feigl, Herbert (1967). The 'Mental' and the 'Physical', The Essay and a Postscript, Minneapolis, University of Minnesota Press.
Jackson, Frank (1982) "Epiphenomenal Qualia", Philosophical Quarterly 32, pp. 127–136.
Kripke, Saul (1972/1980). Naming and Necessity, Cambridge, Mass., Harvard University Press. (Originally published in 1972 as "Naming and Necessity".)
Lewis, David (1966). "An Argument for the Identity Theory", Journal of Philosophy, 63, pp. 17–25.
Lewis, David (1980). "Mad Pain and Martian Pain" in Readings in the Philosophy of Psychology, Vol. I, N. Block (ed.), Harvard University Press, pp. 216–222. (Also in Lewis's Philosophical Papers, Vol. 1, Oxford University Press, 1983.)
Morris, Kevin (2019). Physicalism Deconstructed: Levels of Reality and the Mind–Body Problem, Cambridge University Press, Cambridge.
Place, U. T. (1956). "Is Consciousness a Brain Process?", British Journal of Psychology, 47, pp. 44–50.
Place, U. T. (unpublished). "Identity Theories", A Field Guide to the Philosophy of Mind. Società italiana per la filosofia analitica, Marco Nani (ed.). (link Archived 2020-02-23 at the Wayback Machine)
Putnam, Hilary (1988). Representation and Reality. The MIT Press.
Smart, J. J. C. (1959). "Sensations and Brain Processes", Philosophical Review, 68, pp. 141–156.
Smart, J. J. C. (2004). "The Identity Theory of Mind", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), Edward N. Zalta (ed.). (link)
== External links ==
Collection of links to online papers
Dictionary of the Philosophy of Mind
Internet Encyclopedia of Philosophy
Stanford Encyclopedia of Philosophy
The neuroscience of sex differences is the study of characteristics that separate brains of different sexes. Psychological sex differences are thought by some to reflect the interaction of genes, hormones, and social learning on brain development throughout the lifespan. A 2021 meta-synthesis led by Lise Eliot found that sex accounted for 1% of the brain's structure or laterality, finding large group-level differences only in total brain volume. A subsequent 2021 study led by Camille Michèle Williams contradicted Eliot's conclusions, finding that sex differences in total brain volume are not accounted for merely by sex differences in height and weight, and that once global brain size is taken into account, there remain numerous regional sex differences in both directions. A 2022 follow-up meta-analysis led by Alex DeCasien analyzed the studies from both Eliot and Williams, concluding that "The human brain shows highly reproducible sex differences in regional brain anatomy above and beyond sex differences in overall brain size" and that these differences are of a "small-moderate effect size." A review from 2006 and a meta-analysis from 2014 found that some evidence from brain morphology and function studies indicates that male and female brains cannot always be assumed to be identical from either a structural or functional perspective, and some brain structures are sexually dimorphic.
== History ==
The ideas of differences between the male and female brains have circulated since the time of Ancient Greek philosophers around 850 BC. In 1854, German anatomist Emil Huschke discovered a size difference in the frontal lobe, where male frontal lobes are 1% larger than those of females. As the 19th century progressed, scientists began researching sexual dimorphisms in the brain significantly more. Until recent decades, scientists knew of several structural sexual dimorphisms of the brain, but they did not think that sex had any impact on how the human brain performs daily tasks. Through molecular, animal, and neuroimaging studies, a great deal of information regarding the differences between male and female brains and how much they differ in regards to both structure and function has been uncovered.
== Evolutionary explanations ==
=== Sexual selection ===
Females show enhanced information recall compared with males. This may be because females engage in a more intricate evaluation of risk and scenario contemplation, based on prefrontal cortical control of the amygdala. For example, the ability to recall information better than males most likely originated from sexual selective pressures on females during competition with other females in mate selection. Recognition of social cues was an advantageous characteristic because it ultimately maximized offspring and was therefore selected for during evolution.
Oxytocin is a hormone that induces contraction of the uterus and lactation in mammals and is a characteristic hormone of nursing mothers. Studies have found that oxytocin improves spatial memory. Through activation of the MAP kinase pathway, oxytocin plays a role in the enhancement of long-term synaptic plasticity (a change in the strength of the connection between two neurons at a synapse that lasts for minutes or longer) and long-term memory. This hormone may have helped mothers remember the location of distant food sources so they could better nurture their offspring.
According to certain studies, men on average have one standard deviation higher spatial intelligence quotient than women. This domain is one of the few where clear sex differences in cognition appear. Researchers at the University of Toronto say that differences between men and women on some tasks that require spatial skills are largely eliminated after both groups play a video game for only a few hours. Although Herman Witkin had claimed women are more "visually dependent" than men, this has recently been disputed.
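A difference of "one standard deviation" corresponds to a standardized effect size. As a generic illustration (not the exact computation used in the studies cited above), Cohen's d expresses the group difference relative to pooled variability:

d = \frac{\bar{X}_{\text{men}} - \bar{X}_{\text{women}}}{s_{\text{pooled}}}

so d \approx 1 describes group means separated by roughly one pooled standard deviation, while substantial overlap still remains between the two score distributions.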
The gender difference in spatial ability has been attributed to morphological differences between male and female brains. The parietal lobe is a part of the brain recognized to be involved in spatial ability, especially in 2-D and 3-D mental rotation. Researchers at the University of Iowa found that thicker grey matter in the parietal lobe of females led to a disadvantage in mental rotations, and that the larger parietal surface areas of males led to an advantage in mental rotations. The results found by the researchers support the notion that gender differences in spatial abilities arose during human evolution such that both sexes cognitively and neurologically developed to behave adaptively. However, the effect of socialization and environment on the difference in spatial ability is still open for debate.
== Male and female brain anatomy ==
A 2021 meta-synthesis of existing literature found that sex accounted for 1% of the brain's structure or laterality, finding large group-level differences only in total brain volume. A 2022 follow-up meta-analysis refuted these findings, citing methodological flaws, and concluded that "The human brain shows highly reproducible sex differences in regional brain anatomy above and beyond sex differences in overall brain size" and that these differences were of a "small-to-moderate effect size." In another study, men were found to have a total myelinated fiber length of 176,000 km at the age of 20, whereas in women the total length was 149,000 km (approximately 15% less).
Many similarities and differences in structure, neurotransmitters, and function have been identified, but some academics, such as Cordelia Fine and Anelis Kaiser, Sven Haller, Sigrid Schmitz, and Cordula Nitsch dispute the existence of significant sex differences in the brain, arguing that innate differences in the neurobiology of women and men have not been conclusively identified due to factors such as alleged neurosexism, methodological flaws and publication bias. Clinical psychologist Simon Baron-Cohen has defended the neuroscience of sex differences against charges of neurosexism, arguing that "Fine's neurosexism allegation is the mistaken blurring of science with politics," adding that "you can be a scientist interested in the nature of sex differences while being a clear supporter of equal opportunities and a firm opponent of all forms of discrimination in society."
Males and females differ in some aspects of their brains, notably the overall difference in size, with men having larger brains on average (between 8% and 13% larger), but a relationship between brain volume or density and brain function is not established. Additionally, there are differences in activation patterns that suggest anatomical or developmental differences.
=== Volume ===
Structurally, adult male brains are on average 11–12% heavier and 10% bigger than female brains. Though statistically there are sex differences in white matter and gray matter percentage, this ratio is directly related to brain size, and some argue these sex differences in gray and white matter percentage are caused by the average size difference between men and women. Others argue that these differences partly remain after controlling for brain volume.
Researchers also found greater cortical thickness and cortical complexity in females both before and after adjusting for overall brain volume. In contrast, surface area, brain volume and fractional anisotropy were found to be greater in males both before and after adjusting for overall brain volume. Although each attribute remained greater in the respective sex, the differences decreased after adjusting for overall brain volume, except for cortical thickness in females, which increased. Given that cortical complexity and cortical features have shown some evidence of positive correlation with intelligence, researchers postulated that these differences might have evolved in females to compensate for smaller brain size and equalize overall cognitive abilities with males, though the reason for environmental selection of that trait is unknown.
Researchers further analyzed the differences in brain volume, surface area and cortical thickness by testing men and women on verbal-numerical reasoning and reaction time in separate groups. It was found that the group of men slightly outperformed the women in both the verbal-numerical reasoning and reaction time tests. Subsequently, the researchers tested to what extent the differences in performance were mediated by the varying attributes of the male and female brain (e.g. surface area) using two mixed sample groups. In verbal-numerical reasoning tests, surface area and brain volume mediated performance by >82% in both groups, and cortical thickness mediated performance far less, by 7.1% and 5.4% in each group. In reaction time tests, total brain and white matter volumes mediated performance by >27%, but the other attributes all mediated performance by smaller percentages (<15.3%), particularly mean cortical thickness (mediating <3% of performance).
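The percentages above describe the proportion of an effect that is mediated. In a generic mediation framework (a sketch of the convention, not necessarily the authors' exact estimator), the total effect c of sex on test performance decomposes into a direct effect c' and an indirect effect ab that passes through a brain attribute:

\text{proportion mediated} = \frac{c - c'}{c} = \frac{ab}{c}

where a is the effect of sex on the brain attribute (for example, surface area) and b is the effect of that attribute on performance after accounting for sex.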
According to the neuroscience journal review series Progress in Brain Research, males have a larger and longer planum temporale and Sylvian fissure, while females have significantly larger volumes, proportionate to total brain volume, in the superior temporal cortex, Broca's area, the hippocampus and the caudate. The midsagittal area and fiber numbers of the anterior commissure, which connects the temporal poles, and of the massa intermedia, which connects the thalami, are also larger in women.
=== Lateralization ===
Lateralization may differ between the sexes, with men often being said to have a more lateralized brain, because they tend to use one hemisphere for a behavior more consistently than females. One factor supporting this idea is the higher rate of left-handedness among males. Another example is that language is typically more strongly left-lateralized in males than in females. Additionally, males show stronger right-lateralization in emotional-face processing tasks, suggesting sex differences in brain lateralization go beyond just language. These differences may be linked to factors like how the two brain hemispheres interact and differences in the corpus callosum, rather than directly to sex hormones. Other biological factors, such as cortisol levels and dopamine asymmetries, may also play a role in these differences, particularly in decision-making and emotional responses.
Further evidence comes from studies using turning preferences as a way to measure brain lateralization. While no major sex differences were found in the direction or strength of laterality, males showed more consistent lateralization across different situations. This suggests that males may have stronger and more stable brain lateralization. However, the relationship between brain lateralization and cognitive abilities remains unclear and is likely influenced by many biological factors.
A 2014 meta-analysis of grey matter in the brain found sexually dimorphic areas of the brain in both volume and density. When synthesized, these differences show that volume increases for males tend to be on the left side of systems, while females generally see greater volume in the right hemisphere. On the other hand, a previous 2008 meta-analysis found that the difference between male and female brain lateralization was not significant.
=== Amygdala ===
There are behavioral differences between males and females that may suggest a difference in amygdala size or function. A 2017 review of amygdala volume studies found that there was a raw size difference, with males having a 10% larger amygdala, however, because male brains are larger, this finding was found to be misleading. After normalizing for brain size, there was no significant difference in size of the amygdala across sex.
In terms of activation, there is no difference in amygdala activation across sex. Differences in behavioral tests may be due to potential anatomical and physiological differences in the amygdala across sexes rather than activation differences.
Emotional expression, understanding, and behavior appear to vary between males and females. A 2012 review concluded that males and females differ in the processing of emotions: males tend to have stronger reactions to threatening stimuli and to react with more physical violence.
=== Hippocampus ===
Hippocampus atrophy is associated with a variety of psychiatric disorders that have higher prevalence in females. Additionally, there are differences in memory skills between males and females which may suggest a difference in the hippocampal volume (HCV). A 2016 meta-analysis of volume differences found a higher HCV in males without correcting for total brain size. However, after adjusting for individual differences and total brain volume, they found no significant sex difference, despite the expectation that women may have larger hippocampus volume.
=== Grey matter ===
A 2014 meta-analysis found (where differences were measured) some differences in grey matter levels between the sexes.
The findings included females having more grey matter volume in the right frontal pole, inferior and middle frontal gyrus, pars triangularis, planum temporale/parietal operculum, anterior cingulate gyrus, insular cortex, and Heschl's gyrus; both thalami and precuneus; the left parahippocampal gyrus and lateral occipital cortex (superior division). Larger volumes in females were most pronounced in areas in the right hemisphere related to language in addition to several limbic structures such as the right insular cortex and anterior cingulate gyrus.
Males had more grey matter volume in both amygdalae, hippocampi, anterior parahippocampal gyri, posterior cingulate gyri, precuneus, putamen and temporal poles, areas in the left posterior and anterior cingulate gyri, and areas in the cerebellum bilateral VIIb, VIIIa and Crus I lobes, left VI and right Crus II lobes.
In terms of density, there were also differences between the sexes. Males tended to have a denser left amygdala, hippocampus, insula, pallidum, putamen, claustrum, and areas of the right VI lobule of the cerebellum, among other areas. Females tended to have denser left frontal pole.
The significance of these differences lies both in the lateralization (males having more volume in the left hemisphere and females having more volume in the right hemisphere) and the possible uses of these findings to explore differences in neurological and psychiatric conditions.
=== Transgender studies on brain anatomy ===
Early postmortem studies of transgender neurological differentiation were focused on the hypothalamic and amygdala regions of the brain. Using magnetic resonance imaging (MRI), some trans women were found to have female-typical putamina that were larger in size than those of cisgender males. Some trans women have also shown a female-typical central part of the bed nucleus of the stria terminalis (BSTc) and interstitial nucleus of the anterior hypothalamus number 3 (INAH-3), as measured by the number of neurons found within each structure.
== Brain networks ==
Both males and females have consistent active working memory networks composed of both middle frontal gyri, the left cingulate gyrus, the right precuneus, the left inferior and superior parietal lobes, the right claustrum, and the left middle temporal gyrus. Although the same brain networks are used for working memory, specific regions are sex-specific. Sex differences were evident in other networks, as women also tend to have higher activity in the prefrontal and limbic regions, such as the anterior cingulate, bilateral amygdala, and right hippocampus, while men tend to have a distributed network spread out among the cerebellum, portions of the superior parietal lobe, the left insula, and bilateral thalamus.
A 2017 review from the perspective of large-scale brain networks hypothesized that women's higher susceptibility to stress-prone disorders such as post-traumatic stress disorder and major depressive disorder, in which the salience network is theorized to be overactive and to interfere with the executive control network, may be due in part, along with societal exposure to stressors and the coping strategies that are available to women, to underlying sex-based brain differences.
== Neurochemical differences ==
=== Hormones ===
Gonadal hormones, or sex hormones, include androgens (such as testosterone) and estrogens (such as estradiol), which are steroid hormones synthesized primarily in the testes and ovaries, respectively. Sex hormone production is regulated by the gonadotropic hormones luteinizing hormone (LH) and follicle-stimulating hormone (FSH), whose release from the anterior pituitary is stimulated by gonadotropin-releasing hormone (GnRH) from the hypothalamus.
Steroid hormones have several effects on brain development as well as on the maintenance of homeostasis throughout adulthood. Estrogen receptors have been found in the hypothalamus, pituitary gland, hippocampus, and frontal cortex, indicating that estrogen plays a role in brain development. Gonadal hormone receptors have also been found in the basal forebrain nuclei.
==== Estrogen and the female brain ====
Estradiol influences cognitive function, specifically by enhancing learning and memory in a dose-sensitive manner. Too much estrogen can have negative effects by weakening performance of learned tasks as well as hindering performance of memory tasks; this can result in females exhibiting poorer performance of such tasks when compared to males.
Ovariectomies, surgeries inducing menopause, or natural menopause cause fluctuating and decreased estrogen levels in women. This in turn can "attenuate the effects" of endogenous opioid peptides. Opioid peptides are known to play a role in emotion and motivation. The content of β-endorphin (β-EP), an endogenous opioid peptide, has been found to decrease (in amounts varying by brain region) post-ovariectomy in female rats within the hypothalamus, hippocampus, and pituitary gland. Such a change in β-EP levels could be the cause of mood swings, behavioral disturbances, and hot flashes in postmenopausal women.
==== Progesterone and the male and female brain ====
Progesterone is a steroid hormone synthesized in both male and female brains. It contains characteristics found in the chemical nucleus of both estrogen and androgen hormones. As a female sex hormone, progesterone is more significant in females than in males. During the menstrual cycle, progesterone increases just after the ovulatory phase to inhibit luteinizing hormones, such as oxytocin absorption. In males, increased progesterone has been linked to suicidal ideation in adolescents.
==== Testosterone and the male brain ====
The gonadal hormone testosterone is an androgenic, or masculinizing, hormone that is synthesized in both the male testes and female ovaries, at a rate of about 14,000 μg/day and 600 μg/day, respectively. Testosterone exerts organizational effects on the developing brain, many of which are mediated through estrogen receptors following its conversion to estrogen by the enzyme aromatase within the brain.
== See also ==
== References ==
== Further reading ==
Rippon G (28 Feb 2019). The gendered brain: The new neuroscience that shatters the myth of the female brain. Bodley Head. ISBN 978-1-84792-475-9. | Wikipedia/Neuroscience_of_sex_differences |
Rank theory is an evolutionary theory of depression, developed by Anthony Stevens and John Price, which proposes that depression promotes the survival of genes. On this view, depression is an adaptive response to losing status (rank) and losing confidence in the ability to regain it; the adaptive function of depression is to change behaviour so as to promote survival for someone who has been defeated. According to rank theory, depression was naturally selected to allow us to accept a subordinate role. The function of this depressive adaptation is to prevent the loser from suffering further defeat in a conflict.
In the face of defeat, a behavioural process swings into action which causes the individual to cease competing and reduce their ambitions. This process is involuntary and results in the loss of energy, depressed mood, sleep disturbance, poor appetite, and loss of confidence, which are typical characteristics of depression. The outward symptoms of depression (facial expressions, constant crying, etc.) signal to others that the loser is not fit to compete, and they also discourage others from attempting to restore the loser's rank.
This acceptance of a lower rank would serve to stabilise an ancestral human community, promoting the survival of any individual (or the individual's genes) in the community by affording protection from other human groups and retaining access to resources and mates. The adaptive function of accepting a lower rank is twofold: first, it ensures that the loser truly yields and does not attempt to make a comeback; second, the loser reassures the winner that yielding has truly taken place, so that the conflict ends with no further damage to the loser. Social harmony is then restored.
== Development ==
Rank theory of depression, initially known as the 'social competition hypothesis', is based on ethological theories of signalling: in order to avoid injury, animals will perform 'appeasement displays' to demonstrate their subordination and lack of desire to engage in further competition. Additionally, rank theory attempts to explain the link between low socioeconomic status and depression through a psychosocial lens.
John Price formulated rank theory after noticing that monkeys became uncommunicative following a competitive loss (e.g. relating to food, allies, or mates). He proposed that humans similarly submit in competitive situations to induce reconciliation. By submitting to their opponent, losers allow a new hierarchy to form, strengthening social cohesion. Depression is therefore a ritualistic behaviour which fulfils an adaptive function: the loser is able to escape physical injury by signalling that they are no longer a threat. This adaptive strategy has been called "Involuntary Defeat Strategy" (IDS) to clarify that losers may demonstrate submissiveness to victors using other strategies, which have not been linked to depression. Although, historically, the Involuntary Defeat Strategy may have also prevented the loss of further material resources (e.g. food, shelter), evolutionary psychologists argue that this explanation is still applicable to modern societies, where humans compete on resources such as attractiveness and competency.
== Application to symptoms ==
Unlike other evolutionary explanations of depression, rank theory is able to explain why depression is incapacitating: by functioning as a substitute for physical damage, incapacitation prevents the 'loser' from posing a threat to the competitor they challenged. Moreover, rank theory aligns with Beck's cognitive triad, which proposes that depressed individuals suffer cognitive distortions which result in pessimistic beliefs. Rank theory explains this pessimism by arguing that 'losers' with low expectations about their abilities are less likely to engage in competition, because they are pessimistic about their chances. The explanation also accounts for common symptoms (e.g. apathy, loss of interest, anhedonia) by arguing they evolved as a form of harm-avoidance.
Psychologists such as Paul Gilbert have sought to explain the differences between depressive states following competition and major depression. Gilbert has suggested that depression resulting from the Involuntary Defeat Strategy is a short-term condition, which becomes more serious due to external events (e.g. the victor ignores the attempt at reconciliation) or internal events (e.g. excessive rumination). Rank theorists argue that depression, like vomiting, can become maladaptive when the defence mechanism, designed for the short term, is overused.
=== Arrested flight ===
One factor which may cause the IDS to develop into major depression is arrested flight. When individuals are unable to flee from dangerous situations, this 'entrapment' may intensify the depressive symptoms, making the condition long-term. If the 'de-escalation strategies' used by the loser are exaggerated, this may result in symptoms such as social anxiety and excessively low self-esteem.
=== Childhood attachment ===
Another factor which may explain why certain individuals are more prone to major depression is the degree of childhood attachment security. Children with insecure attachments, for instance due to being raised in an abusive household, may have experienced more frequent triggering of the Involuntary Defeat Strategy. This results in an overly sensitive IDS, which requires significantly less stimulation to engage in submissive behaviours. Unlike securely attached children, whose IDS functions adaptively by allowing them to accept defeat, insecurely attached children will back down too early, lose confidence in their ability to win competitions, and therefore may be more prone to developing long-term depression.
=== Prevalence in adolescence ===
Rank theorists have also suggested an explanation to account for high depression rates in teenagers. As competition for social approval is particularly salient in teenage peer relations, adolescents may place greater emphasis on social comparison. Rank theorists propose that children with insecure attachments enter the highly socially competitive dynamic of adolescence feeling more submissive or craving a dominant role. Due to fixating on social rank, these adolescents are more sensitive to social competition and are more likely to overuse the IDS, resulting in a higher likelihood of depression.
== Therapeutic implications ==
Although not intended to become a new 'school of therapy', rank theorists have proposed changes to existing therapeutic interventions for depression such as cognitive behavioral therapy and psychodynamic treatment:
Status-changing: Treating depressed individuals as high-status may reduce their self-perception of inferiority
Preventing rumination: Assisting clients in recognising their virtues by magnifying their achievements can reduce the likelihood of IDS developing into maladaptive cycles
Assertiveness: Teaching individuals to stand up for themselves may prevent accumulations of rage and encourage coping with anger more healthily
Strategy-switching: Showing clients that they submit too quickly or not quickly enough (because they don't recognise the vulnerability of their position) may help individuals avoid misusing the IDS
Goal-setting: setting small, achievable goals to build up the client's confidence may prevent a loss of confidence and help clients avoid reinforcing maladaptive cycles
== Criticism ==
The largest limitation of evolutionary explanations of depression, which include rank theory, is the lack of falsifiability. While these theories provide "reasonably parsimonious" explanations, they are not grounded in empirical research, which severely affects their real-world application.
=== Anger ===
As rank theory suggests that depression functions to inhibit aggression and stimulate submissive behaviours, one criticism is rank theory's inability to account for the higher levels of anger found in depressed individuals than in controls. However, rank theorists have countered this criticism by arguing that hostility in depressed individuals is simply redirected towards 'lower-ranking' individuals in the social hierarchy (e.g. children) or objects (e.g. furniture).
=== Power ===
Another criticism of rank theory is that it may not account for depressed individuals who are socially powerful and exert manipulation over others, despite supposedly engaging in submissive behaviour. To combat this criticism, rank theorists have suggested that depressed individuals only use manipulation on their supporters in order to switch support from being agonistic (i.e. intended to help the individual win in a competition by boasting) to being nurturing (i.e. accepting the individual has lost and also backing down).
=== Mood ===
As individuals at the top of hierarchies may suffer from depression, and not all those on the low end of the hierarchy exhibit depressive symptoms, critics of rank theory have also argued that the mismatch between rank and mood weakens this explanation for depression. However, this argument may over-simplify rank theory, as it does not take into account the social comparison element of rank theory, which suggests that dissatisfaction with one's rank may be due to comparison with peers who have achieved higher social ranks. Moreover, rank theorists have argued that the stress of a low rank may also depend on factors such as lower-ranked individuals attempting to usurp one's position and higher-ranked individuals engaging in bullying.
== Further reading ==
Evolutionary Psychiatry: A New Beginning by Anthony Stevens, John Price (published 2000, ISBN 0-415-21978-7)
== References == | Wikipedia/Rank_theory_of_depression |
Motor control is the regulation of movements in organisms that possess a nervous system. Motor control includes conscious voluntary movements, subconscious muscle memory and involuntary reflexes, as well as instinctual taxes.
To control movement, the nervous system must integrate multimodal sensory information (both from the external world as well as proprioception) and elicit the necessary signals to recruit muscles to carry out a goal. This pathway spans many disciplines, including multisensory integration, signal processing, coordination, biomechanics, and cognition, and the computational challenges are often discussed under the term sensorimotor control. Successful motor control is crucial to interacting with the world to carry out goals as well as for posture, balance, and stability.
Some researchers (mostly neuroscientists studying movement, such as Daniel Wolpert and Randy Flanagan) argue that motor control is the reason brains exist at all.
== Neural control of muscle force ==
All movements, e.g. touching one's nose, require motor neurons to fire action potentials that result in the contraction of muscles. In humans, ~150,000 motor neurons control the contraction of ~600 muscles. To produce movements, a subset of these 600 muscles must contract in a temporally precise pattern to produce the right force at the right time.
=== Motor units and force production ===
A single motor neuron and the muscle fibers it innervates are called a motor unit. For example, the rectus femoris contains approximately 1 million muscle fibers, which are controlled by around 1000 motor neurons. Activity in the motor neuron causes contraction in all of the innervated muscle fibers so that they function as a unit. Increasing action potential frequency (spike rate) in the motor neuron increases the muscle fiber contraction force, up to the maximal force. The maximal force depends on the contractile properties of the muscle fibers. Within a motor unit, all the muscle fibers are of the same type (e.g. type I (slow twitch) or type II (fast twitch)), and motor units of multiple types make up a given muscle. Motor units of a given muscle are collectively referred to as a motor pool.
The force produced in a given muscle thus depends on: 1) how many motor neurons are active, and their spike rates; and 2) the contractile properties and number of muscle fibers innervated by the active neurons. To generate more force, the nervous system increases the spike rates of active motor neurons and/or recruits more, and stronger, motor units. In turn, how the muscle force produces limb movement depends on the limb biomechanics, e.g. where the tendon and muscle originate (which bone, and precise location) and where the muscle inserts on the bone that it moves.
=== Recruitment order ===
Motor units within a motor pool are recruited in a stereotypical order, from motor units that produce small amounts of force per spike, to those producing the largest force per spike. The gradient of motor unit force is correlated with a gradient in motor neuron soma size and motor neuron electrical excitability. This relationship was described by Elwood Henneman and is known as Henneman's size principle, a fundamental discovery of neuroscience and an organizing principle of motor control.
For tasks requiring small forces, such as continual adjustment of posture, motor units with fewer, slowly contracting, but fatigue-resistant muscle fibers are used. As more force is required, motor units with fast-twitch, fast-fatigable muscle fibers are recruited.
Diagram: as the required force increases over time, motor units are recruited in order, with type I (slow twitch) units first, then type IIA, and finally type IIB.
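This recruitment logic can be illustrated with a toy computation. The sketch below uses entirely hypothetical thresholds and twitch forces to show how increasing descending drive recruits progressively larger, stronger motor units in line with Henneman's size principle; it is not a physiological model.

```python
import numpy as np

# Hypothetical motor pool: five motor units ordered by size. Small,
# low-threshold units are recruited first (Henneman's size principle);
# larger units add more force per spike but join only at higher drive.
thresholds = np.array([0.05, 0.15, 0.30, 0.50, 0.75])  # fraction of max drive
unit_force = np.array([1.0, 2.0, 4.0, 8.0, 16.0])      # arbitrary force units

def muscle_force(drive):
    """Total force for a descending drive in [0, 1]: each recruited unit
    contributes its maximal force scaled by how far the drive exceeds its
    threshold (a crude stand-in for rate coding)."""
    recruited = drive >= thresholds
    rate = np.clip((drive - thresholds) / (1.0 - thresholds), 0.0, 1.0)
    return float(np.sum(unit_force * rate * recruited))

for drive in (0.1, 0.4, 0.9):
    print(f"drive = {drive:.1f}  ->  force = {muscle_force(drive):5.2f}")
```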
== Computational issues of motor control ==
The nervous system produces movement by selecting which motor neurons are activated, and when. The finding that a recruitment order exists within a motor pool is thought to reflect a simplification of the problem: if a particular muscle should produce a particular force, then activate the motor pool along its recruitment hierarchy until that force is produced.
But then how to choose what force to produce in each muscle? The nervous system faces the following issues in solving this problem.
Redundancy. Infinitely many movement trajectories can accomplish a given goal (e.g. touching one's nose). How is a trajectory chosen? Which trajectory is best?
Noise. Noise is defined as small fluctuations that are unrelated to a signal, which can occur in neurons and synaptic connections at any point from sensation to muscle contraction.
Delays. Motor neuron activity precedes muscle contraction, which precedes the movement. Sensory signals also reflect events that have already occurred. Such delays affect the choice of motor program.
Uncertainty. Uncertainty arises because of neural noise, but also because inferences about the state of the world may not be correct (e.g. the speed of an oncoming ball).
Nonstationarity. Even as a movement is being executed, the state of the world changes, even through such simple effects as reactive forces on the rest of the body, causing translation of a joint while it is actuated.
Nonlinearity. The effects of neural activity and muscle contraction are highly non-linear, which the nervous system must account for when predicting the consequences of a pattern of motor neuron activity.
Much ongoing research is dedicated to investigating how the nervous system deals with these issues, both at the behavioral level, as well as how neural circuits in the brain and spinal cord represent and deal with these factors to produce the fluid movements we witness in animals.
"Optimal feedback control" is an influential theoretical framing of these computation issues.
== Model systems for motor control ==
All organisms face the computational challenges above, so neural circuits for motor control have been studied in humans, monkeys, horses, cats, mice, fish, lamprey, flies, locusts, and nematodes, among many others. Mammalian model systems like mice and monkeys offer the most straightforward comparative models for human health and disease. They are widely used to study the role of higher brain regions common to vertebrates, including the cerebral cortex, thalamus, basal ganglia, and deep brain medullary and reticular circuits for motor control. The genetics and neurophysiology of motor circuits in the spine have also been studied in mammalian model organisms, but protective vertebrae make it difficult to study the functional role of spinal circuits in behaving animals. Here, larval and adult fish have been useful in discovering the functional logic of the local spinal circuits that coordinate motor neuron activity. Invertebrate model organisms do not have the same brain regions as vertebrates, but their brains must solve similar computational issues and thus are thought to have brain regions homologous to those involved in motor control in the vertebrate nervous system. The organization of arthropod nervous systems into ganglia that control each leg has allowed researchers to record from neurons dedicated to moving a specific leg during behavior.
Model systems have also demonstrated the role of central pattern generators in driving rhythmic movements. A central pattern generator (CPG) is a neural network that can generate rhythmic activity in the absence of an external control signal, such as a signal descending from the brain or feedback signals from sensors in the limbs (e.g. proprioceptors). Evidence suggests that real CPGs exist in several key motor control regions, such as those controlling the stomachs of arthropods or the pre-Bötzinger complex that controls breathing in humans. Furthermore, as a theoretical concept, CPGs have been useful for framing the possible role of sensory feedback in motor control.
== Sensorimotor feedback ==
=== Response to stimuli ===
The process of becoming aware of a sensory stimulus and using that information to influence an action occurs in stages. Reaction time in simple tasks can be used to reveal information about these stages. Reaction time refers to the period of time between when the stimulus is presented and when the response is initiated, while movement time is the time it takes to complete the movement. Some of the first reaction time experiments were carried out by Franciscus Donders, who used the difference in response times between tasks to determine the length of time needed to process the stimuli and choose the correct response. While this approach is ultimately flawed, it gave rise to the idea that reaction time is made up of stimulus identification, followed by response selection, culminating in carrying out the correct movement. Further research has provided evidence that these stages do exist, but that the response selection period of any reaction time increases as the number of available choices grows, a relationship known as Hick's law.
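Hick's law is commonly written as RT = a + b·log2(n + 1), where n is the number of equally likely choices and a and b are empirically fitted constants. The snippet below simply evaluates that relationship with made-up constants to show how choice reaction time grows logarithmically, rather than linearly, with the number of alternatives.

```python
import math

def hick_reaction_time(n_choices, a=0.2, b=0.15):
    """Hick's law: RT = a + b * log2(n + 1), with illustrative constants
    a (base reaction time, s) and b (processing time per bit, s)."""
    return a + b * math.log2(n_choices + 1)

for n in (1, 2, 4, 8):
    print(f"{n} choices -> predicted RT = {hick_reaction_time(n):.3f} s")
```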
=== Closed loop control ===
The classical definition of a closed loop system for human movement comes from Jack A. Adams (1971). A reference specifying the desired output is compared to the actual output via error detection mechanisms; using feedback, the error is corrected. Most movements carried out during day-to-day activity are formed using a continual process of accessing sensory information and using it to continue the motion more accurately. This type of motor control is called feedback control, as it relies on sensory feedback to control movements. Feedback control is a situated form of motor control, relying on sensory information about performance and specific sensory input from the environment in which the movement is carried out. This sensory input, while processed, does not necessarily cause conscious awareness of the action. Closed loop control is a feedback-based mechanism of motor control, where any act on the environment creates some sort of change that affects future performance through feedback. Closed loop motor control is best suited to continuously controlled actions, but does not work quickly enough for ballistic actions. Ballistic actions are actions that, once initiated, continue to completion without further adjustment, even when they are no longer appropriate. Because feedback control relies on sensory information, it is only as fast as sensory processing allows. These movements are subject to a speed-accuracy trade-off: because sensory processing is being used to control the movement, the faster the movement is carried out, the less accurate it becomes.
=== Open loop control ===
The classical definition from Jack A. Adams is: “An open loop system has no feedback or mechanisms for error regulation. The input events for a system exert their influence, the system effects its transformation on the input and the system has an output ... A traffic light with fixed timing snarls traffic when the load is heavy and impedes the flow when the traffic is light. The system has no compensatory capability.”
Some movements, however, occur too quickly to integrate sensory information, and instead must rely on feed forward control. Open loop control is a feed forward form of motor control, and is used to control rapid, ballistic movements that end before any sensory information can be processed. To best study this type of control, most research focuses on deafferentation studies, often involving cats or monkeys whose sensory nerves have been disconnected from their spinal cords. Monkeys who lost all sensory information from their arms resumed normal behavior after recovering from the deafferentation procedure. Most skills were relearned, but fine motor control became very difficult. It has been shown that the open loop control can be adapted to different disease conditions and can therefore be used to extract signatures of different motor disorders by varying the cost functional governing the system.
== Coordination ==
A core motor control issue is coordinating the various components of the motor system to act in unison to produce movement.
Peripheral neurons receive input from the central nervous system and innervate the muscles. In turn, muscles generate forces which actuate joints. Getting the pieces to work together is a challenging problem for the motor system and how this problem is resolved is an active area of study in motor control research.
=== Reflexes ===
In some cases the coordination of motor components is hard-wired, consisting of fixed neuromuscular pathways that are called reflexes. Reflexes are typically characterized as automatic and fixed motor responses, and they occur on a much faster time scale than what is possible for reactions that depend on perceptual processing. Reflexes play a fundamental role in stabilizing the motor system, providing almost immediate compensation for small perturbations and maintaining fixed execution patterns. Some reflex loops are routed solely through the spinal cord without receiving input from the brain, and thus do not require attention or conscious control. Others involve lower brain areas and can be influenced by prior instructions or intentions, but they remain independent of perceptual processing and online control.
The simplest reflex is the monosynaptic reflex or short-loop reflex, such as the monosynaptic stretch response. In this example, Ia afferent neurons are activated by muscle spindles when they deform due to the stretching of the muscle. In the spinal cord, these afferent neurons synapse directly onto alpha motor neurons that regulate the contraction of the same muscle. Thus, any stretching of a muscle automatically signals a reflexive contraction of that muscle, without any central control. As the name and the description implies, monosynaptic reflexes depend on a single synaptic connection between an afferent sensory neuron and efferent motor neuron. In general the actions of monosynaptic reflexes are fixed and cannot be controlled or influenced by intention or instruction. However, there is some evidence to suggest that the gain or magnitude of these reflexes can be adjusted by context and experience.
Polysynaptic reflexes or long-loop reflexes are reflex arcs which involve more than a single synaptic connection in the spinal cord. These loops may include cortical regions of the brain as well, and are thus slower than their monosynaptic counterparts due to the greater travel time. However, actions controlled by polysynaptic reflex loops are still faster than actions which require perceptual processing. While the actions of short-loop reflexes are fixed, polysynaptic reflexes can often be regulated by instruction or prior experience. A common example of a long loop reflex is the asymmetrical tonic neck reflex observed in infants.
=== Synergies ===
A motor synergy is a neural organization of a multi-element system that (1) organizes sharing of a task among a set of elemental variables; and (2) ensures co-variation among elemental variables with the purpose of stabilizing performance variables. The components of a synergy need not be physically connected, but instead are connected by their response to perceptual information about the particular motor task being executed. Synergies are learned, rather than being hardwired like reflexes, and are organized in a task-dependent manner; a synergy is structured for a particular action and not determined generally for the components themselves. Nikolai Bernstein famously demonstrated synergies at work in the hammering actions of professional blacksmiths. The muscles of the arm controlling the movement of the hammer are informationally linked in such a way that errors and variability in one muscle are automatically compensated for by the actions of the other muscles. These compensatory actions are reflex-like in that they occur faster than perceptual processing would seem to allow, yet they are only present in expert performance, not in novices. In the case of blacksmiths, the synergy in question is organized specifically for hammering actions and is not a general purpose organization of the muscles of the arm. Synergies have two defining characteristics in addition to being task-dependent: sharing and flexibility/stability.
"Sharing" requires that the execution of a particular motor task depends on the combined actions of all the components that make up the synergy. Often, there are more components involved than are strictly needed for the particular task (see "Redundancy" below), but the control of that motor task is distributed across all components nonetheless. A simple demonstration comes from a two-finger force production task, where participants are required to generate a fixed amount of force by pushing down on two force plates with two different fingers. In this task, participants generated a particular force output by combining the contributions of independent fingers. While the force produced by any single finger can vary, this variation is constrained by the action of the other such that the desired force is always generated.
Co-variation also provides "flexibility and stability" to motor tasks. Considering again the force production task, if one finger did not produce enough force, it could be compensated for by the other. The components of a motor synergy are expected to change their action to compensate for the errors and variability in other components that could affect the outcome of the motor task. This provides flexibility because it allows for multiple motor solutions to particular tasks, and it provides motor stability by preventing errors in individual motor components from affecting the task itself.
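A minimal numerical sketch of this stabilizing co-variation, using invented noise levels rather than real data, is given below: when the two "fingers" co-vary so that one compensates for the other's error, the variance of the total force is far smaller than when they act independently.

```python
import numpy as np

rng = np.random.default_rng(0)
target, trials = 10.0, 10_000          # required total force (arbitrary units)

f1 = 5.0 + rng.normal(0.0, 1.0, trials)              # noisy finger 1
f2_independent = 5.0 + rng.normal(0.0, 1.0, trials)  # finger 2 ignores finger 1
f2_synergy = (target - f1) + rng.normal(0.0, 0.2, trials)  # finger 2 compensates

print("SD of total force, independent fingers:", np.std(f1 + f2_independent).round(2))
print("SD of total force, synergistic fingers:", np.std(f1 + f2_synergy).round(2))
```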
Synergies simplify the computational difficulty of motor control. Coordinating the numerous degrees of freedom in the body is a challenging problem, both because of the tremendous complexity of the motor system and because of the different levels at which this organization can occur (neural, muscular, kinematic, spatial, etc.). Because the components of a synergy are functionally coupled for a specific task, execution of motor tasks can be accomplished by activating the relevant synergy with a single neural signal. The need to control all of the relevant components independently is removed because organization emerges automatically as a consequence of the systematic covariation of components. Similar to how reflexes are physically connected and thus do not require control of individual components by the central nervous system, actions can be executed through synergies with minimal executive control because they are functionally connected. Besides motor synergies, the term sensory synergies has recently been introduced. Sensory synergies are believed to play an important role in integrating the mixture of environmental inputs to provide low-dimensional information to the CNS, thus guiding the recruitment of motor synergies.
Synergies are fundamental for controlling complex movements, such as those of the hand during grasping. Their importance has been demonstrated for both muscle control and the kinematic domain in several studies, most recently in studies including large cohorts of subjects. The relevance of synergies for hand grasps is also reinforced by studies on hand grasp taxonomies, showing muscular and kinematic similarities among specific groups of grasps, leading to specific clusters of movements.
=== Motor programs ===
While synergies represent coordination derived from peripheral interactions of motor components, motor programs are specific, pre-structured motor activation patterns that are generated and executed by a central controller (in the case of a biological organism, the brain). They represent a top-down approach to motor coordination, rather than the bottom-up approach offered by synergies. Motor programs are executed in an open-loop manner, although sensory information is most likely used to sense the current state of the organism and determine the appropriate goals. However, similar to central pattern generators, once the program has been executed, it cannot be altered online by additional sensory information.
Evidence for the existence of motor programs comes from studies of rapid movement execution and the difficulty associated with changing those movements once they have been initiated. For example, people who are asked to make fast arm swings have extreme difficulty in halting that movement when provided with a "STOP" signal after the movement has been initiated. This reversal difficulty persists even if the stop signal is presented after the initial "GO" signal but before the movement actually begins. This research suggests that once selection and execution of a motor program begins, it must run to completion before another action can be taken. This effect has been found even when the movement that is being executed by a particular motor program is prevented from occurring at all. People who attempt to execute particular movements (such as pushing with the arm), but unknowingly have the action of their body arrested before any movement can actually take place, show the same muscle activation patterns (including stabilizing and support activation that does not actually generate the movement) as when they are allowed to complete their intended action.
Although the evidence for motor programs seems persuasive, there have been several important criticisms of the theory. The first is the problem of storage. If each movement an organism could generate requires its own motor program, it would seem necessary for that organism to possess an unlimited repository of such programs, and where these would be kept is not clear. Aside from the enormous memory requirements such a facility would entail, no motor program storage area in the brain has yet been identified. The second problem concerns novelty in movement. If a specific motor program is required for any particular movement, it is not clear how one would ever produce a novel movement. At best, an individual would have to practice any new movement before executing it with any success, and at worst, would be incapable of new movements because no motor program would exist for them. These difficulties have led to a more nuanced notion of motor programs known as generalized motor programs. A generalized motor program is a program for a particular class of action, rather than a specific movement. This program is parameterized by the context of the environment and the current state of the organism.
=== Redundancy ===
An important issue for coordinating the motor system is the problem of the redundancy of motor degrees of freedom. As detailed in the "Synergies" section, many actions and movements can be executed in multiple ways because functional synergies controlling those actions are able to co-vary without changing the outcome of the action. This is possible because there are more motor components involved in the production of actions than are generally required by the physical constraints on that action. For example, the human arm has seven joints which determine the position of the hand in the world. However, only three spatial dimensions are needed to specify any location the hand could be placed in. This excess of kinematic degrees of freedom means that there are multiple arm configurations that correspond to any particular location of the hand.
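The surplus of kinematic degrees of freedom can be made concrete with a planar toy arm (link lengths and angles chosen arbitrarily here): a three-joint arm positioning its endpoint in a two-dimensional plane has a one-dimensional null space of "self-motions" that change the joint angles without moving the hand.

```python
import numpy as np

L = np.array([0.30, 0.25, 0.20])            # link lengths (m), illustrative

def hand_position(q):
    """Endpoint of a planar 3-link arm for joint angles q (rad)."""
    a = np.cumsum(q)                         # absolute link orientations
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q, eps=1e-6):
    """Numerical 2x3 Jacobian of hand position with respect to joint angles."""
    J = np.zeros((2, 3))
    for i in range(3):
        dq = np.zeros(3); dq[i] = eps
        J[:, i] = (hand_position(q + dq) - hand_position(q - dq)) / (2 * eps)
    return J

q = np.array([0.4, 0.6, -0.3])
null_direction = np.linalg.svd(jacobian(q))[2][-1]   # joint motion with J @ n ≈ 0

print(hand_position(q))                          # hand stays (to first order) ...
print(hand_position(q + 0.01 * null_direction))  # ... while joint angles change
```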
Some of the earliest and most influential work on the study of motor redundancy came from the Russian physiologist Nikolai Bernstein. Bernstein's research was primarily concerned with understanding how coordination develops for skilled actions. He observed that the redundancy of the motor system makes it possible to execute actions and movements in a multitude of different ways while achieving equivalent outcomes. This equivalency in motor action means that there is no one-to-one correspondence between desired movements and the coordination of the motor system needed to execute those movements; no desired movement or action is tied to a single, particular coordination of neurons, muscles, and kinematics that makes it possible. This motor equivalency problem became known as the degrees of freedom problem because it is a product of having redundant degrees of freedom available in the motor system.
== Perception in motor control ==
Related, yet distinct from the issue of how the processing of sensory information affects the control of movements and actions is the question of how the perception of the world structures action. Perception is extremely important in motor control because it carries the relevant information about objects, environments and bodies which is used in organizing and executing actions and movements. What is perceived and how the subsequent information is used to organize the motor system is an ongoing area of research.
=== Model based control strategies ===
Most model based strategies of motor control rely on perceptual information, but assume that this information is not always useful, veridical, or constant. Optical information is interrupted by eye blinks, motion is obstructed by objects in the environment, and distortions can change the appearance of object shape. Model based and representational control strategies are those that rely on accurate internal models of the environment, constructed from a combination of perceptual information and prior knowledge, as the primary source of information for planning and executing actions, even in the absence of perceptual information.
==== Inference and indirect perception ====
Many models of the perceptual system assume indirect perception, or the notion that the world that gets perceived is not identical to the actual environment. Environmental information must go through several stages before being perceived, and the transitions between these stages introduce ambiguity. What actually gets perceived is the mind's best guess about what is occurring in the environment based on previous experience. Support for this idea comes from the Ames room illusion, where a distorted room causes the viewer to see objects known to be a constant size as growing or shrinking as they move around the room. The room itself is seen as being square, or at least consisting of right angles, as all previous rooms the perceiver has encountered have had those properties. Another example of this ambiguity comes from the doctrine of specific nerve energies. The doctrine presents the finding that there are distinct nerve types for different types of sensory input, and these nerves respond in a characteristic way regardless of the method of stimulation. That is to say, the color red causes optic nerves to fire in a specific pattern that is processed by the brain as experiencing the color red. However, if that same nerve is electrically stimulated in an identical pattern, the brain could perceive the color red when no corresponding stimulus is present.
==== Forward models ====
Forward models are a predictive internal model of motor control that takes the available perceptual information, combined with a particular motor program, and tries to predict the outcome of the planned motor movement. Forward models structure action by determining how the forces, velocities, and positions of motor components affect changes in the environment and in the individual. It is proposed that forward models help with the neural control of limb stiffness when individuals interact with their environment. Forward models are thought to use motor programs as input to predict the outcome of an action. An error signal is generated when the predictions made by a forward model do not match the actual outcome of the movement, prompting an update of an existing model and providing a mechanism for learning. These models explain why it is impossible to tickle oneself: a sensation is experienced as ticklish when it is unpredictable, but forward models predict the outcome of one's own motor movements, meaning the motion is predictable and therefore not ticklish.
Evidence for forward models comes from studies of motor adaptation. When a person's goal-directed reaching movements are perturbed by a force field, they gradually, but steadily, adapt the movement of their arm to allow them to again reach their goal. However, they do so in such a way that preserves some high level movement characteristics: bell-shaped velocity profiles, straight line translation of the hand, and smooth, continuous movements. These movement features are recovered despite the fact that they require startlingly different arm dynamics (i.e. torques and forces). This recovery provides evidence that what is motivating movement is a particular motor plan, and that the individual is using a forward model to predict how arm dynamics change the movement of the arm to achieve particular task level characteristics. Differences between the expected arm movement and the observed arm movement produce an error signal which is used as the basis for learning. Additional evidence for forward models comes from experiments which require subjects to determine the location of an effector following an unvisualized movement.
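A deliberately simplified sketch of this error-driven updating is shown below. A scalar "forward model" predicts hand displacement from a motor command; an unknown force-field offset makes the prediction wrong, and the model parameter is nudged by the prediction error on each trial, so the error shrinks across trials. All quantities are invented and one-dimensional; the sketch only illustrates the learning signal, not any specific published model.

```python
# Toy forward-model adaptation to a constant force-field perturbation.
true_gain = 1.0        # actual mapping from motor command to displacement
field = 0.3            # unknown perturbation added by the force field
model_gain, model_bias = 1.0, 0.0   # internal forward model, initially naive
learning_rate = 0.2

for trial in range(31):
    command = 1.0                                   # same reach every trial
    observed = true_gain * command + field          # actual displacement
    predicted = model_gain * command + model_bias   # forward-model prediction
    error = observed - predicted                    # sensory prediction error
    model_bias += learning_rate * error             # error-driven update
    if trial % 10 == 0:
        print(f"trial {trial:2d}: prediction error = {error:+.3f}")
```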
==== Inverse models ====
Inverse models predict the necessary movements of motor components to achieve a desired perceptual outcome. They can also take the outcome of a motion and attempt to determine the sequence of motor commands that resulted in that state. These types of models are particularly useful for open loop control, and allow for specific types of movements, such as fixating on a stationary object while the head is moving. Complementary to forward models, inverse models attempt to estimate how to achieve a particular perceptual outcome in order to generate the appropriate motor plan. Because inverse models and forward models are so closely associated, studies of internal models are often used as evidence for the roles of both model types in action.
Motor adaptation studies, therefore, also make a case for inverse models. Motor movements seem to follow predefined "plans" that preserve certain invariant features of the movement. In the reaching task mentioned above, the persistence of bell-shaped velocity profiles and smooth, straight hand trajectories provides evidence for the existence of such plans. Movements that achieve these desired task-level outcomes are estimated by an inverse model. Adaptation therefore proceeds as a process of estimating the necessary movements with an inverse model, simulating with a forward model the outcome of those movement plans, observing the difference between the desired outcome and the actual outcome, and updating the models for a future attempt.
=== Information based control ===
An alternative to model based control is information based control. Informational control strategies organize movements and actions based on perceptual information about the environment, rather than on cognitive models or representations of the world. The actions of the motor system are organized by information about the environment and information about the current state of the agent. Information based control strategies often treat the environment and the organism as a single system, with action proceeding as a natural consequence of the interactions of this system. A core assumption of information based control strategies is that perceptions of the environment are rich in information and veridical for the purposes of producing actions. This runs counter to the assumptions of indirect perception made by model based control strategies.
==== Direct perception ====
Direct perception in the cognitive sense is related to the philosophical notion of naïve or direct realism in that it is predicated on the assumption that what we perceive is what is actually in the world. James J. Gibson is credited with recasting direct perception as ecological perception. While the problem of indirect perception proposes that physical information about objects in our environment is not available due to the ambiguity of sensory information, proponents of direct perception (like Gibson) suggest that the relevant information specified in the ambient optic array is the distal physical properties of objects. This specifying information reveals the action opportunities the environment affords. These affordances are directly perceivable without ambiguity, and thus preclude the need for internal models or representations of the world. Affordances exist only as a byproduct of the interactions between an agent and its environment, and thus perception is an "ecological" endeavor, depending on the whole agent/environment system rather than on the agent in isolation.
Because affordances are action possibilities, perception is directly connected to the production of actions and movements. The role of perception is to provide information that specifies how actions should be organized and controlled, and the motor system is "tuned" to respond to specific type of information in particular ways. Through this relationship, control of the motor system and the execution of actions is dictated by the information of the environment. As an example, a doorway "affords" passing through, but a wall does not. How one might pass through a doorway is specified by the visual information received from the environment, as well as the information perceived about one's own body. Together, this information determines the pass-ability of a doorway, but not a wall. In addition, the act of moving towards and passing through the doorway generates more information and this in turn specifies further action. The conclusion of direct perception is that actions and perceptions are critically linked and one cannot be fully understood without the other.
==== Behavioral dynamics ====
Building on the assumptions of direct perception, behavioral dynamics is a behavioral control theory that treats perceptual organisms as dynamic systems that respond to informational variables with actions, in a functional manner. Under this understanding of behavior, actions unfold as the natural consequence of the interaction between the organism and the available information about the environment, which is specified in body-relevant variables. Much of the research in behavioral dynamics has focused on locomotion, where visually specified information (such as optic flow, time-to-contact, optical expansion, etc.) is used to determine how to navigate the environment. Interaction forces between the human and the environment also affect behavioral dynamics, as seen in the neural control of limb stiffness.
== Planning in motor control ==
=== Individual movement optimization ===
There are several mathematical models that describe how the central nervous system (CNS) derives reaching movements of limbs and eyes. The minimum jerk model states that the CNS minimizes jerk of a limb endpoint trajectory over the time of reaching, which results in a smooth trajectory. However, this model is based solely on the kinematics of movement and does not consider the underlying dynamics of the musculoskeletal system. Hence, the minimum torque-change model was introduced as an alternative, where the CNS minimizes the joint torque change over the time of reaching.
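For reference, the minimum-jerk model mentioned above has a well-known closed-form solution for a one-dimensional point-to-point reach from x_0 to x_f over duration T; the trajectory starts and ends with zero velocity and acceleration and produces a smooth, bell-shaped velocity profile.

```latex
x(t) = x_0 + (x_f - x_0)\left(10\tau^{3} - 15\tau^{4} + 6\tau^{5}\right),
\qquad \tau = \frac{t}{T}, \quad 0 \le t \le T .
```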
Later it was argued that there is no clear explanation of how the CNS could actually estimate complex quantities such as jerk or torque change and then integrate them over the duration of a trajectory. In response, a model based on signal-dependent noise was proposed instead, which states that the CNS selects a trajectory by minimizing the variance of the final position of the limb endpoint. Since there is motor noise in the neural system that is proportional to the activation of the muscles, faster movements induce more motor noise and are thus less precise. This is also in line with Fitts' law and the speed-accuracy trade-off. Optimal control theory was used to further extend the model based on signal-dependent noise, where the CNS optimizes an objective function that consists of a term related to accuracy and additionally a term related to the metabolic cost of movement.
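Fitts' law, referenced above, quantifies this speed-accuracy trade-off: the movement time MT needed to reach a target of width W at distance D grows with the task's index of difficulty,

```latex
MT = a + b \,\log_{2}\!\left(\frac{2D}{W}\right),
```

where a and b are empirically fitted constants for a given person and task.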
Another class of models is based on a cost-benefit trade-off, where the objective function includes the metabolic cost of movement and a subjective reward related to reaching the target accurately. In this case the reward for a successful reach within the desired target is discounted by the duration of reaching, since the gained reward is perceived as less valuable when more time is spent obtaining it. However, these models were deterministic and did not account for motor noise, which is an essential property of stochastic motor control that results in the speed-accuracy trade-off. To address that, a new model was later proposed to incorporate motor noise and to unify the cost-benefit and speed-accuracy trade-offs.
=== Multi-component movements ===
Some studies have observed that the CNS can split a complex movement into sub-movements. The initial sub-movement tends to be fast and imprecise in order to bring the limb endpoint into the vicinity of the target as soon as possible. Then, the final sub-movement tends to be slow and precise in order to correct for the error accumulated by the initial sub-movement and to successfully reach the target. A later study further explored how the CNS selects a temporary target of the initial sub-movement in different conditions. For example, when the actual target size decreases and thus complexity increases, the temporary target of the initial sub-movement moves away from the actual target in order to give more space for the final corrective action. Longer reaching distances have a similar effect, since more error is accumulated in the initial sub-movement, requiring a more complex final correction. In less complex conditions, when the final actual target is large and the movement is short, the CNS tends to use a single movement, without splitting it into multiple components.
== See also ==
Motor learning
Motor skill
Motor coordination
Motor cortex
Multisensory integration
Proprioception
Sensory processing
Sensory-motor coupling
Two-alternative forced choice
Psychomotor learning
== References ==
== Further reading ==
=== Research in athletes === | Wikipedia/Motor_control |
Media naturalness theory is also known as the psychobiological model. The theory was developed by Ned Kock and attempts to apply Darwinian evolutionary principles to suggest which types of computer-mediated communication will best fit innate human communication capabilities. Media naturalness theory argues that natural selection has resulted in face-to-face communication becoming the most effective way for two people to exchange information.
The theory has been applied to human communication outcomes in various contexts, such as: education, knowledge transfer, communication in virtual environments, e-negotiation, business process improvement, trust and leadership in virtual teamwork, online learning, maintenance of distributed relationships, performance in experimental tasks using various media, and modular production. Its development is also consistent with ideas from the field of evolutionary psychology.
The media naturalness theory builds on the media richness theory's arguments that face-to-face interaction is the richest type of communication medium by providing an evolutionary explanation for the face-to-face medium's degree of richness. Media naturalness theory argues that since ancient hominins communicated primarily face-to-face, evolutionary pressures since that time have led to the development of a brain that is consequently adapted for that form of communication. Kock points out that computer-mediated communication is far too recent a phenomenon to have had the time necessary to shape human cognition and language capabilities via natural selection. In turn, Kock argues that using communication media that suppress key elements found in face-to-face communication, as many electronic communication media do, ends up posing cognitive obstacles to communication, particularly in the case of complex tasks (e.g., business process redesign, new product development, online learning), because such tasks seem to require more intense communication over extended periods of time than simple tasks.
== Medium naturalness ==
The naturalness of a communication medium is defined by Kock as the degree of similarity of the medium with the face-to-face medium. The face-to-face medium is presented as the medium enabling the highest possible level of communication naturalness, which is characterized by the following five key elements: (1) a high degree of co-location, which would allow the individuals engaged in a communication interaction to see and hear each other; (2) a high degree of synchronicity, which would allow the individuals to quickly exchange communicative stimuli; (3) the ability to convey and observe facial expressions; (4) the ability to convey and observe body language; and (5) the ability to convey and listen to speech.
Media naturalness theory predicts that any electronic communication medium allowing for the exchange of significantly fewer or more communicative stimuli per unit of time than the face-to-face medium will pose cognitive obstacles to communication. In other words, media naturalness theory places the face-to-face medium at the center of a one-dimensional scale of naturalness, where deviations to the left or right are associated with decreases in naturalness.
Electronic media that enable the exchange of significantly more communicative stimuli per unit of time than the face-to-face medium are classified by media naturalness theory as having a lower degree of naturalness than the face-to-face medium. As such, those media are predicted to be associated with higher cognitive effort; in this case due primarily to a phenomenon known as information overload, which is characterized by individuals having more communicative stimuli to process than they are able to.
== Main predictions ==
Media naturalness effects on cognitive effort, communication ambiguity, and physiological arousal. Media naturalness theory's main prediction is that, other things being equal, a decrease in the degree of naturalness of a communication medium leads to the following effects in connection with communication interactions in complex tasks: (a) an increase in cognitive effort, (b) an increase in communication ambiguity, and (c) a decrease in physiological arousal.
Naturalness of electronic communication media. Electronic communication media often suppress key face-to-face communication elements, with the goal of creating other advantages. For example, Web-based bulletin boards and discussion groups enable asynchronous (or time-disconnected) communication, but at the same time make it difficult to have the same level of feedback immediacy found in face-to-face communication. That often leads to frustration from users who expect immediate feedback on their postings.
The high importance of speech. Media naturalness theory predicts that the degree to which an electronic communication medium supports an individual's ability to convey and listen to speech is particularly significant in determining its naturalness. The theory predicts, through its speech imperative proposition, that speech enablement influences naturalness significantly more than a medium's degree of support for the use of facial expressions and body language.
Compensatory adaptation. According to media naturalness theory, electronic communication media users can adapt their behavior in such a way as to overcome some of the limitations of those media. That is, individuals who choose to use electronic communication media to accomplish complex collaborative tasks may compensate for the cognitive obstacles associated with the lack of naturalness of the media. One of the ways in which this can be achieved through email is by users composing messages that are redundant and particularly well organized, compared to face-to-face communication. This often contributes to improving the effectiveness of communication, sometimes even beyond that of the face-to-face medium.
== Cognitive effort ==
Human beings possess specialized brain circuits that are adapted for the recognition of faces and the generation and recognition of facial expressions, which artificial intelligence research suggests require complex computations that are difficult to replicate even in powerful computers. The same situation is found in connection with speech generation and recognition. Generation and recognition of facial expressions, and speech generation and recognition, are performed effortlessly by humans.
Cognitive effort is defined in media naturalness theory as the amount of mental activity, or, from a biological perspective, the amount of brain activity involved in a communication interaction. It can be assessed directly, with the use of techniques such as magnetic resonance imaging. Cognitive effort can also be assessed indirectly, based on perceptions of levels of difficulty associated with communicative tasks, as well as through indirect measures such as that of fluency. Fluency is defined as the amount of time taken to convey a certain number of words through different communication media, which is assumed to correlate with (and serve as a surrogate measure of) the amount of time taken to convey a certain number of ideas through different media. According to media naturalness theory, a decrease in the degree of naturalness of a communication medium leads to an increase in the amount of cognitive effort required to use the medium for communication.
== Communication ambiguity ==
Individuals brought up in different cultural environments usually possess different information processing schemas that they have learned over their lifetimes. Different schemas make individuals interpret information in different ways, particularly when information is expected but not actually provided.
While different individuals are likely to look for the same types of communicative stimuli, their interpretation of the message being communicated in the absence of those stimuli will be largely based on their learned schemas, which are likely to differ from those held by other individuals (no two individuals, not even identical twins raised together, go through exactly the same experiences during their lives). According to media naturalness theory, a decrease in medium naturalness, caused by the selective suppression of media naturalness elements in a communication medium, leads to an increase in the probability of misinterpretations of communicative cues, and thus an increase in communication ambiguity.
== Physiological arousal ==
To say that our genes influence the formation of a phenotypic trait (i.e., a biological trait that defines a morphological, behavioral, physiological, etc. characteristic) does not mean the same as saying that the trait in question is innate. Very few phenotypic traits are innate (e.g., blood type); the vast majority, including most of those in connection with our biological communication apparatus, need interaction with the environment to be fully and properly developed.
While there is substantial evidence suggesting that our biological communication apparatus is adapted for face-to-face communication, there is also ample evidence that such an apparatus (including the neural functional language system) cannot be fully developed without a significant amount of practice. Thus, according to media naturalness theory, evolution must have shaped brain mechanisms to compel human beings to practice the use of their biological communication apparatus; mechanisms that are similar to those compelling animals to practice those skills that play a key role in connection with survival and mating. Among these mechanisms, one of the most important is that of physiological arousal, which is often associated with excitement and pleasure. Engaging in communication interactions, particularly in face-to-face situations, triggers physiological arousal in human beings. Suppression of media naturalness elements makes communication interactions duller than if those elements were present.
== Speech importance ==
Complex speech was enabled by the evolution of a larynx located relatively low in the neck, which considerably increased the variety of sounds that our species could generate; this is one of the most important landmarks in the evolution of the human species. However, that adaptive design also significantly increased our ancestors' chances of choking on ingested food and liquids, and of suffering from aerodigestive tract diseases such as gastroesophageal reflux. The conclusion is that complex speech must have been particularly important for effective communication in our evolutionary past; otherwise, the related evolutionary costs would have prevented it from evolving through natural selection. This argument is similar to that made by Amotz Zahavi in connection with evolutionary handicaps: if a trait evolves to improve effectiveness at a task in spite of imposing a survival handicap, then the trait should be a particularly strong determinant of performance in that task, so as to offset the survival cost it imposes.
Media naturalness theory builds on this evolutionary handicap conclusion to predict that the degree to which an electronic communication medium supports an individual's ability to convey and listen to speech is particularly significant in defining its naturalness. Media naturalness theory predicts, through its speech imperative proposition, that speech enablement influences naturalness significantly more than a medium's degree of support for the use of facial expressions and body language. This prediction is consistent with past research showing that removing speech from an electronic communication medium significantly increases the perceived mental effort associated with using the medium to perform knowledge-intensive tasks. According to this prediction, a medium such as audio conferencing is relatively close to the face-to-face medium in terms of naturalness (see Figure 2).
== Compensatory adaptation ==
Increases in cognitive effort and communication ambiguity are usually accompanied by an interesting behavioral phenomenon, called compensatory adaptation. The phenomenon is characterized by voluntary and involuntary attempts by the individuals involved in a communicative act to compensate for the obstacles posed by an unnatural communication medium. One of the key indications of compensatory adaptation is a decrease in communication fluency, which can be measured through the number of words conveyed per minute through a communication medium. That is, communication fluency is believed to go down as a result of individuals making an effort to adapt their behavior in a compensatory way.
For example, an empirical study of individuals using instant messaging and face-to-face media to perform complex, knowledge-intensive tasks found effects consistent with both media naturalness theory and the notion of compensatory adaptation. The electronic (instant messaging) medium increased perceived cognitive effort by approximately 40% and perceived communication ambiguity by approximately 80%, as media naturalness theory predicts. It also reduced actual fluency by approximately 80%, yet the quality of the task outcomes was not affected, suggesting compensatory adaptation.
== Media compensation theory ==
Media compensation theory, proposed in 2011 by Hantula, Kock, D'Arcy, and DeRosa, further refines Kock's media naturalness theory. The authors explain that it was developed to specifically address two paradoxes:
Virtual communication, work, collaboration, and teams are largely successful (sometimes even more so than face-to-face equivalents), which conflicts with Kock's media naturalness theory; and,
"The human species evolved in small groups using communications modalities in constrained areas, yet use electronic communication media to allow large groups to work together effectively across time and space" (Hantula et al., 2011, p. 358).
The authors grapple with how humans "who have not changed much in many millennia" (Hantula et al., 2011, p. 358) are able to successfully embrace and employ lean media, such as texting, given their assumption that human evolution has progressed toward, and produced an adeptness for, face-to-face communication.
== Media reduction and compensatory channel expansion ==
Kock and Garza (2011) continue Kock's research on media naturalness by studying whether taking a college course online, as opposed to in person, would negatively impact students’ actual and perceived learning experiences due to differences in the media richness and media naturalness afforded by the two approaches studied. The findings show that the online cohort performed as well, statistically, as the in-person cohort. The authors suggest that the study’s findings support Carlson and Zmud’s channel expansion theory (1999), which asserts that humans are capable of adapting to new communication media (Kock & Garza, 2011). Kock and Garza (2011) also argue that a portion of their findings support Kock’s (2004) earlier, since-disputed claim that people are not evolutionarily equipped to communicate through computer-mediated communication as well as they communicate through richer media such as face-to-face communication. For example, DeClerck and Holtzman dispute an overriding need for visual and auditory cues when they say that “experienced users are able to accurately convey their intended message via digitally-mediated communication, despite the lack of available verbal and non-verbal cues” (2018, p. 116). DeClerck and Holtzman also suggest that text messaging may be more focused than face-to-face communication because it is not cluttered by additional verbal and non-verbal cues that can otherwise tie up “cognitive resources” (2018, p. 111). Additionally, Lisiecka, Rychwalska, Samson, Lucznik, Ziembowicz, Schostek, and Nowak (2016) point out that, although it has been generally accepted that “media other than face-to-face are considered an obstacle rather than an equally effective means of information transfer” (2016, p. 13), the results of their study suggest that computer-mediated communication “has become similarly natural and intuitive as face-to-face contacts” (2016, p. 13).
== See also ==
Communication theory
Computer-supported collaboration
Evolutionary psychology
Media richness theory
Social presence theory
Theories of technology
== References ==
== Further reading ==
Daft R.L. (1987). "Message equivocality, media selection, and manager performance: Implications for information systems". MIS Quarterly. 11 (3): 355–366. doi:10.2307/248682. JSTOR 248682.
Dennis A.R.; Fuller R.M.; Valacich J.S. (2008). "Media, tasks, and communication processes: A theory of media synchronicity". MIS Quarterly. 32 (3): 575–600. doi:10.2307/25148857. JSTOR 25148857.
El-Shinnawy M., Markus L. (1998). "Acceptance of communication media in organizations: Richness or features?". IEEE Transactions on Professional Communication. 41 (4): 242–253. doi:10.1109/47.735366.
Lee A.S. (1994). "Electronic mail as a medium for rich communication: An empirical investigation using hermeneutic interpretation". MIS Quarterly. 18 (2): 143–157. doi:10.2307/249762. JSTOR 249762.
Lengel R.H. (1988). "The selection of communication media as an executive skill". Academy of Management Executive. 2 (3): 225–232. doi:10.5465/ame.1988.4277259.
Markus M.L. (1994). "Finding a happy medium: Explaining the negative effects of electronic communication on social life at work". ACM Transactions on Information Systems. 12 (2): 119–149. doi:10.1145/196734.196738. S2CID 151393.
Ngwenyama O.K., Lee A.S. (1997). "Communication richness in electronic mail: Critical social theory and the contextuality of meaning". MIS Quarterly. 21 (2): 145–167. doi:10.2307/249417. JSTOR 249417.
Nunamaker J.F.; Dennis A.R.; Valacich J.S.; Vogel D.R.; George J.F. (1991). "Electronic meeting systems to support group work". Communications of the ACM. 34 (7): 40–61. doi:10.1145/105783.105793. S2CID 10389854.
Pinsonneault A.; Barki H.; Gallupe R.B.; Hoppen N. (1999). "Electronic brainstorming: The illusion of productivity". Information Systems Research. 10 (2): 110–133. doi:10.1287/isre.10.2.110.
Rice R.E. (1993). "Media appropriateness: Using social presence theory to compare traditional and new organizational media". Human Communication Research. 19 (4): 451–484. doi:10.1111/j.1468-2958.1993.tb00309.x.
Robert L.P., Dennis A.R. (2005). "Paradox of richness: A cognitive model of media choice" (PDF). IEEE Transactions on Professional Communication. 48 (1): 10–21. doi:10.1109/tpc.2004.843292. hdl:2027.42/116285. S2CID 14248927.
Sallnas E.L.; Rassmus-Grohn K.; Sjostrom C. (2000). "Supporting presence in collaborative environments by haptic force feedback". ACM Transactions on Computer-Human Interaction. 7 (4): 461–476. doi:10.1145/365058.365086. S2CID 6632654.
Tan B.C.Y.; Wei K.; Huang W.W.; Ng G. (2000). "A dialogue technique to enhance electronic communication in virtual teams". IEEE Transactions on Professional Communication. 43 (2): 153–165. doi:10.1109/47.843643.
Te'eni D (2001). "A cognitive-affective model of organizational communication for designing IT". MIS Quarterly. 25 (2): 251–312. doi:10.2307/3250931. JSTOR 3250931.
Ulijn J.M.; Lincke A.; Karakaya Y. (2001). "Non-face-to-face international business communication: How is national culture reflected in this medium?". IEEE Transactions on Professional Communication. 44 (2): 126–138. doi:10.1109/47.925516.
Van Alstyne M., Brynjolfsson E. (2005). "Global village or cyberbalkans: Modeling and measuring the integration of electronic communities". Management Science. 51 (6): 851–868. doi:10.1287/mnsc.1050.0363. S2CID 17530600.
Walther J.B. (1996). "Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction". Communication Research. 23 (1): 3–43. doi:10.1177/009365096023001001. S2CID 152119884.
Walther J.B.; Slovacek C.; Tidwell L.C. (2001). "Is a picture worth a thousand words? Photographic images in long term and short term virtual teams". Communication Research. 28 (1): 105–134. doi:10.1177/009365001028001004. S2CID 35035846.
Zigurs I., Buckland B.K. (1998). "A theory of task-technology fit and group support systems effectiveness". MIS Quarterly. 22 (3): 313–334. doi:10.2307/249668. JSTOR 249668. S2CID 9155976. | Wikipedia/Media_naturalness_theory |
Dual inheritance theory (DIT), also known as gene–culture coevolution or biocultural evolution, was developed in the 1960s through early 1980s to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. Genes and culture continually interact in a feedback loop: changes in genes can lead to changes in culture which can then influence genetic selection, and vice versa. One of the theory's central claims is that culture evolves partly through a Darwinian selection process, which dual inheritance theorists often describe by analogy to genetic evolution.
'Culture', in this context, is defined as 'socially learned behavior', and 'social learning' is defined as copying behaviors observed in others or acquiring behaviors through being taught by others. Most of the modelling done in the field relies on the first dynamic (copying), though it can be extended to teaching. Social learning, at its simplest, involves blind copying of behaviors from a model (someone observed behaving), though it is also understood to have many potential biases, including success bias (copying from those who are perceived to be better off), status bias (copying from those with higher status), homophily (copying from those most like ourselves), conformist bias (disproportionately picking up behaviors that more people are performing), etc. Because social learning is a system of pattern replication, and because different socially learned cultural variants survive at different rates, it sets up, by definition, an evolutionary structure: cultural evolution.
Because genetic evolution is relatively well understood, most of DIT examines cultural evolution and the interactions between cultural evolution and genetic evolution.
== Theoretical basis ==
DIT holds that genetic and cultural evolution interacted in the evolution of Homo sapiens. DIT recognizes that the natural selection of genotypes is an important component of the evolution of human behavior and that cultural traits can be constrained by genetic imperatives. However, DIT also recognizes that genetic evolution has endowed the human species with a parallel evolutionary process of cultural evolution. DIT makes three main claims:
=== Culture capacities are adaptations ===
The human capacity to store and transmit culture arose from genetically evolved psychological mechanisms. This implies that at some point during the evolution of the human species a type of social learning leading to cumulative cultural evolution was evolutionarily advantageous.
=== Culture evolves ===
Social learning processes give rise to cultural evolution. Cultural traits are transmitted differently from genetic traits and, therefore, result in different population-level effects on behavioral variation.
=== Genes and culture co-evolve ===
Cultural traits alter the social and physical environments under which genetic selection operates. For example, the cultural adoptions of agriculture and dairying have, in humans, caused genetic selection for the ability to digest starch and lactose, respectively. As another example, it is likely that once culture became adaptive, genetic selection caused a refinement of the cognitive architecture that stores and transmits cultural information. This refinement may have further influenced the way culture is stored and the biases that govern its transmission.
DIT also predicts that, under certain situations, cultural evolution may select for traits that are genetically maladaptive. An example of this is the demographic transition, which describes the fall of birth rates within industrialized societies. Dual inheritance theorists hypothesize that the demographic transition may be a result of a prestige bias, where individuals that forgo reproduction to gain more influence in industrial societies are more likely to be chosen as cultural models.
== View of culture ==
People have defined the word "culture" to describe a large set of different phenomena. A definition that sums up what is meant by "culture" in DIT is:
Culture is socially learned information stored in individuals' brains that is capable of affecting behavior.
This view of culture emphasizes population thinking by focusing on the process by which culture is generated and maintained. It also views culture as a dynamic property of individuals, as opposed to a view of culture as a superorganic entity to which individuals must conform. This view's main advantage is that it connects individual-level processes to population-level outcomes.
== Genetic influence on cultural evolution ==
Genes affect cultural evolution via psychological predispositions on cultural learning. Genes encode much of the information needed to form the human brain. Genes constrain the brain's structure and, hence, the ability of the brain to acquire and store culture. Genes may also endow individuals with certain types of transmission bias (described below).
== Cultural influences on genetic evolution ==
Culture can profoundly influence gene frequencies in a population.
=== Lactase persistence ===
One of the best known examples is the prevalence of the genotype for adult lactose absorption in human populations, such as Northern Europeans and some African societies, with a long history of raising cattle for milk. Until around 7,500 years ago, lactase production stopped shortly after weaning, and in societies which did not develop dairying, such as East Asians and Amerindians, this is still true today. In areas with lactase persistence, it is believed that the domestication of animals made a source of milk available to adults, so that strong selection for lactase persistence could occur; in a Scandinavian population, the estimated selection coefficient was 0.09–0.19. This implies that the cultural practice of raising cattle first for meat and later for milk led to selection for genetic traits for lactose digestion. Recently, analysis of natural selection on the human genome suggests that civilization has accelerated genetic change in humans over the past 10,000 years.
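To illustrate how a selection coefficient of that magnitude can drive an allele from rarity to high frequency within the relevant time frame, the following is a minimal sketch of a standard deterministic one-locus selection recursion. It is not taken from the cited studies; the starting frequency, the assumption that the persistence allele is dominant, and the 25-year generation time are illustrative assumptions.

```python
# Minimal sketch (not from the cited studies): deterministic spread of a
# dominant "lactase persistence" allele under a selection coefficient in the
# range estimated for a Scandinavian population (0.09-0.19). Starting
# frequency, dominance, and generation time are illustrative assumptions.

def next_freq(p, s):
    """One generation of selection favouring a dominant allele with advantage s."""
    w_carrier = 1.0 + s          # fitness of AA and Aa genotypes
    w_noncarrier = 1.0           # fitness of aa genotype
    q = 1.0 - p
    mean_w = (p * p + 2 * p * q) * w_carrier + q * q * w_noncarrier
    return (p * p * w_carrier + p * q * w_carrier) / mean_w

def trajectory(p0=0.01, s=0.14, generations=300):
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_freq(freqs[-1], s))
    return freqs

if __name__ == "__main__":
    freqs = trajectory()
    # With ~25-year generations, 300 generations spans roughly 7,500 years.
    for gen in (0, 25, 50, 100, 300):
        print(f"generation {gen:3d}: allele frequency = {freqs[gen]:.3f}")
```

Under these assumed parameters the allele reaches high frequency well within 300 generations, which is broadly consistent with the roughly 7,500-year window mentioned above.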
=== Food processing ===
Culture has driven changes to the human digestive system, making many digestive organs, such as the teeth and stomach, smaller than expected for primates of a similar size; this has also been cited as one of the reasons why humans have such large brains compared to other great apes. The cause is food processing. Early examples of food processing include pounding, marinating and, most notably, cooking. Pounding meat breaks down the muscle fibres, taking over some of the work of the mouth, teeth and jaw. Marinating emulates the action of the stomach with its high acid levels. Cooking partially breaks down food, making it more easily digestible. Food enters the body effectively partly digested, so food processing reduces the work that the digestive system has to do. Because digestive tissue is energetically expensive, this creates selection for smaller digestive organs: individuals with smaller organs can still process their food, but at a lower energetic cost than those with larger organs. Cooking is notable because the energy available from food increases when it is cooked, which also means less time is spent looking for food.
Humans living on cooked diets spend only a fraction of their day chewing compared to other extant primates living on raw diets. American girls and boys spend on average 7 to 8 percent of their day chewing (1.68 to 1.92 hours per day), compared to chimpanzees, which spend more than 6 hours a day chewing. This frees up time which can be used for hunting. A raw diet constrains hunting, since time spent hunting is time not spent eating and chewing plant material, but cooking reduces the time required to obtain the day's energy requirements, allowing for more subsistence activities. The digestibility of cooked carbohydrates is on average approximately 30% higher than that of uncooked carbohydrates. This increased energy intake, the extra free time, and the savings made on tissue used in the digestive system allowed for the selection of genes for larger brain size.
Despite its benefits, brain tissue requires a large amount of energy, so a main constraint on selection for larger brains is calorie intake. A greater calorie intake can support greater quantities of brain tissue. This is argued to explain why human brains can be much larger than those of other apes, since humans are the only ape to engage in food processing. The cooking of food has influenced genes to the extent that, research suggests, humans cannot live without cooking. A study of 513 individuals consuming long-term raw diets found that, as the percentage of their diet made up of raw food and/or the length of time they had been on a raw-food diet increased, their BMI decreased. This is despite access to many processing techniques short of cooking, such as grinding, pounding or heating to 48 °C (118 °F). With approximately 86 billion neurons in the human brain and a 60–70 kg body mass, an exclusively raw diet close to that of extant primates would not be viable: when modelled, it is argued that it would require an infeasible level of more than nine hours of feeding every day. However, this is contested, with alternative modelling showing that enough calories could be obtained within 5–6 hours per day.
Some scientists and anthropologists point to evidence that brain size in the Homo lineage started to increase well before the advent of cooking, due to increased consumption of meat, and that basic food processing (slicing) accounts for the size reduction in organs related to chewing. Cornélio et al. argue that improving cooperative abilities and a shift in diet toward more meat and seeds improved foraging and hunting efficiency. It was this that allowed for the brain expansion, independent of cooking, which they argue came much later as a consequence of the complex cognition that developed. Yet this is still an example of a cultural shift in diet and the resulting genetic evolution. Further criticism concerns the disputed archaeological evidence: some claim there is a lack of evidence of fire control when brain sizes first started expanding. Wrangham argues that anatomical evidence from around the time of the origin of Homo erectus (1.8 million years ago) indicates that the control of fire, and hence cooking, had occurred. At this time, the largest reductions in tooth size in the entirety of human evolution occurred, indicating that softer foods became prevalent in the diet. The pelvis also narrowed at this time, indicating a smaller gut, and there is evidence of a loss of the ability to climb, which Wrangham argues indicates the control of fire, since sleeping on the ground requires fire to ward off predators. The proposed increases in brain size from food processing would have led to a greater mental capacity for further cultural innovation in food processing, which would have increased digestive efficiency further, providing more energy for further gains in brain size. This positive feedback loop is argued to have led to the rapid brain size increases seen in the Homo lineage.
== Mechanisms of cultural evolution ==
In DIT, the evolution and maintenance of cultures is described by five major mechanisms: natural selection of cultural variants, random variation, cultural drift, guided variation and transmission bias.
=== Natural selection ===
Differences between cultural phenomena result in differential rates of their spread; similarly, cultural differences among individuals can lead to differential survival and reproduction rates of individuals. The patterns of this selective process depend on transmission biases and can result in behavior that is more adaptive to a given environment.
=== Random variation ===
Random variation arises from errors in the learning, display or recall of cultural information, and is roughly analogous to the process of mutation in genetic evolution.
=== Cultural drift ===
Cultural drift is a process roughly analogous to genetic drift in evolutionary biology. In cultural drift, the frequency of cultural traits in a population may be subject to random fluctuations due to chance variations in which traits are observed and transmitted (sometimes called "sampling error"). These fluctuations might cause cultural variants to disappear from a population. This effect should be especially strong in small populations. A model by Hahn and Bentley shows that cultural drift gives a reasonably good approximation to changes in the popularity of American baby names. Drift processes have also been suggested to explain changes in archaeological pottery and technology patent applications. Changes in the songs of songbirds are also thought to arise from drift processes, where distinct dialects in different groups occur due to errors in songbird singing and acquisition by successive generations. Cultural drift was also observed in an early computer model of cultural evolution.
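The neutral "random copying" dynamics behind such drift models can be illustrated with a minimal sketch. This is not the Hahn and Bentley model itself; the population size, innovation rate, and run length are illustrative assumptions.

```python
# Minimal sketch (illustrative, not a published model): neutral random copying.
# Each generation, every individual copies a variant from a random member of
# the previous generation, or innovates a brand-new variant with probability mu.
import random
from collections import Counter

def drift(pop_size=200, mu=0.01, generations=500, seed=1):
    rng = random.Random(seed)
    population = list(range(pop_size))   # start with every individual holding a unique variant
    next_label = pop_size
    for _ in range(generations):
        new_population = []
        for _ in range(pop_size):
            if rng.random() < mu:        # innovation introduces a new variant
                new_population.append(next_label)
                next_label += 1
            else:                        # unbiased copying; "sampling error" does the rest
                new_population.append(rng.choice(population))
        population = new_population
    return Counter(population)

if __name__ == "__main__":
    counts = drift()
    print("variants still present:", len(counts))
    print("five most common:", counts.most_common(5))
```

In runs like this, a few variants come to dominate while most others are lost through chance alone, and smaller populations lose variants faster.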
=== Guided variation ===
Cultural traits may be gained in a population through the process of individual learning. Once an individual learns a novel trait, it can be transmitted to other members of the population. The process of guided variation depends on an adaptive standard that determines what cultural variants are learned.
=== Biased transmission ===
Understanding the different ways that culture traits can be transmitted between individuals has been an important part of DIT research since the 1970s. Transmission biases occur when some cultural variants are favored over others during the process of cultural transmission. Boyd and Richerson (1985) defined and analytically modeled a number of possible transmission biases. The list of biases has been refined over the years, especially by Henrich and McElreath.
==== Content bias ====
Content biases result from situations where some aspect of a cultural variant's content makes it more likely to be adopted. Content biases can result from genetic preferences, preferences determined by existing cultural traits, or a combination of the two. For example, food preferences can result from genetic preferences for sugary or fatty foods and socially-learned eating practices and taboos. Content biases are sometimes called "direct biases."
==== Context bias ====
Context biases result from individuals using clues about the social structure of their population to determine what cultural variants to adopt. This determination is made without reference to the content of the variant. There are two major categories of context biases: model-based biases, and frequency-dependent biases.
===== Model-based biases =====
Model-based biases result when an individual is biased to choose a particular "cultural model" to imitate. There are four major categories of model-based biases: prestige bias, skill bias, success bias, and similarity bias. A "prestige bias" results when individuals are more likely to imitate cultural models that are seen as having more prestige. A measure of prestige could be the amount of deference shown to a potential cultural model by other individuals. A "skill bias" results when individuals can directly observe different cultural models performing a learned skill and are more likely to imitate cultural models that perform better at the specific skill. A "success bias" results from individuals preferentially imitating cultural models that they determine are most generally successful (as opposed to successful at a specific skill as in the skill bias.) A "similarity bias" results when individuals are more likely to imitate cultural models that are perceived as being similar to the individual based on specific traits.
===== Frequency-dependent biases =====
Frequency-dependent biases result when an individual is biased to choose particular cultural variants based on their perceived frequency in the population. The most explored frequency-dependent bias is the "conformity bias." Conformity biases result when individuals attempt to copy the mean or the mode cultural variant in the population. Another possible frequency dependent bias is the "rarity bias." The rarity bias results when individuals preferentially choose cultural variants that are less common in the population. The rarity bias is also sometimes called a "nonconformist" or "anti-conformist" bias.
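The population-level effect of a conformity bias can be shown with a standard frequency-dependent transmission recursion of the kind analyzed by Boyd and Richerson, in which the probability of adopting the more common of two variants exceeds its current frequency. The sketch below is illustrative; the conformity strength and starting frequencies are assumed values, not figures from the literature.

```python
# Minimal sketch (illustrative parameters): conformist transmission of two variants.

def conformist_step(p, d):
    """Frequency of variant A after one round of conformist transmission.

    p: current frequency of variant A (between 0 and 1)
    d: conformity strength (0 = unbiased copying; larger values exaggerate the majority)
    """
    return p + d * p * (1.0 - p) * (2.0 * p - 1.0)

def run(p0, d=0.3, generations=40):
    history = [p0]
    for _ in range(generations):
        history.append(conformist_step(history[-1], d))
    return history

if __name__ == "__main__":
    # A variant that starts in the majority is driven toward fixation, while a
    # minority variant declines toward loss: groups become internally
    # homogeneous while differences between groups are preserved.
    for p0 in (0.4, 0.6):
        print(f"start at {p0:.2f}:", [round(x, 3) for x in run(p0)[::10]])
```

This majority-amplifying dynamic is also why conformist transmission helps maintain variation between groups, a point taken up in the section on cultural group selection below.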
== Social learning and cumulative cultural evolution ==
In DIT, the evolution of culture is dependent on the evolution of social learning. Analytic models show that social learning becomes evolutionarily beneficial when the environment changes with enough frequency that genetic inheritance cannot track the changes, but not so fast that individual learning is more efficient. For environments that have very little variability, social learning is not needed, since genes can adapt fast enough to the changes that occur and innate behaviour is able to deal with the constant environment. In fast-changing environments cultural learning would not be useful, because what the previous generation knew is now outdated and will provide no benefit in the changed environment, and hence individual learning is more beneficial. It is only in the moderately changing environment that cultural learning becomes useful, since each generation shares a mostly similar environment but genes have insufficient time to adapt to the environmental changes. While other species have social learning, and thus some level of culture, only humans, some birds and chimpanzees are known to have cumulative culture. Boyd and Richerson argue that the evolution of cumulative culture depends on observational learning and is uncommon in other species because it is ineffective when it is rare in a population. They propose that the environmental changes occurring in the Pleistocene may have provided the right environmental conditions. Michael Tomasello argues that cumulative cultural evolution results from a ratchet effect that began when humans developed the cognitive architecture to understand others as mental agents. Furthermore, Tomasello proposed in the 1980s that there are some disparities between the observational learning mechanisms found in humans and great apes, which go some way toward explaining the observable difference between great ape traditions and human types of culture (see Emulation (observational learning)).
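The intermediate-rate condition described above can be made concrete with a toy simulation. This is only a minimal sketch in the spirit of the analytic models, not any published DIT model; the payoff, cost, lag, and flip-rate values are illustrative assumptions, and "innate" behaviour is crudely modelled as tracking the environment with a long, gene-like lag.

```python
# Minimal sketch (illustrative parameters): payoffs of three pure strategies in
# an environment that flips state with probability u per generation.
import random

def mean_payoffs(u, generations=50_000, benefit=1.0,
                 individual_cost=0.4, social_cost=0.05,
                 genetic_lag=50, seed=0):
    rng = random.Random(seed)
    history = [0] * (genetic_lag + 1)   # recent environment states, oldest first
    totals = {"individual": 0.0, "social": 0.0, "innate": 0.0}
    for _ in range(generations):
        env = history[-1] if rng.random() >= u else 1 - history[-1]
        # Individual learners always match the current environment but pay a learning cost.
        totals["individual"] += benefit - individual_cost
        # Social learners copy the previous generation (one-generation lag) at a small cost.
        totals["social"] += (benefit if env == history[-1] else 0.0) - social_cost
        # "Innate" behaviour tracks the environment only on a slow, gene-like timescale.
        totals["innate"] += benefit if env == history[0] else 0.0
        history = history[1:] + [env]
    return {k: round(v / generations, 3) for k, v in totals.items()}

if __name__ == "__main__":
    # Innate behaviour fares best when change is very slow, individual learning
    # when change is very fast, and social learning in between.
    for u in (0.0002, 0.05, 0.5):
        print(f"flip rate {u}: {mean_payoffs(u)}")
```

Under these assumptions, the socially learning strategy only outperforms the alternatives at intermediate rates of environmental change, which is the qualitative pattern the analytic models describe.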
== Cultural group selection ==
Although group selection is commonly thought to be nonexistent or unimportant in genetic evolution, DIT predicts that, due to the nature of cultural inheritance, it may be an important force in cultural evolution. Group selection occurs in cultural evolution because conformist biases make it difficult for novel cultural traits to spread through a population (see above section on transmission biases). Conformist bias also helps maintain variation between groups. These two properties, rare in genetic transmission, are necessary for group selection to operate. Based on an earlier model by Cavalli-Sforza and Feldman, Boyd and Richerson show that conformist biases are almost inevitable when traits spread through social learning, implying that group selection is common in cultural evolution. Analysis of small groups in New Guinea implies that cultural group selection might be a good explanation for slowly changing aspects of social structure, but not for rapidly changing fads. The ability of cultural evolution to maintain intergroup diversity is what allows for the study of cultural phylogenetics.
== Historical development ==
In 1876, Friedrich Engels wrote a manuscript titled The Part Played by Labour in the Transition from Ape to Man, which has been credited as a founding document of DIT; Stephen Jay Gould described the approach to gene–culture coevolution first developed by Engels, and developed later on by anthropologists, as "the best nineteenth-century case for gene-culture coevolution." The idea that human cultures undergo a similar evolutionary process as genetic evolution also goes back to Darwin. In the 1960s, Donald T. Campbell published some of the first theoretical work that adapted principles of evolutionary theory to the evolution of cultures. In 1976, two developments in cultural evolutionary theory set the stage for DIT. In that year Richard Dawkins's The Selfish Gene introduced ideas of cultural evolution to a popular audience. Although one of the best-selling science books of all time, because of its lack of mathematical rigor it had little effect on the development of DIT. Also in 1976, geneticists Marcus Feldman and Luigi Luca Cavalli-Sforza published the first dynamic models of gene–culture coevolution. These models were to form the basis for subsequent work on DIT, heralded by the publication of three seminal books in the 1980s.
The first was Charles Lumsden and E. O. Wilson's Genes, Mind and Culture (1981). This book outlined a series of mathematical models of how genetic evolution might favor the selection of cultural traits and how cultural traits might, in turn, affect the speed of genetic evolution. While it was the first book published describing how genes and culture might coevolve, it had relatively little effect on the further development of DIT. Some critics felt that its models depended too heavily on genetic mechanisms at the expense of cultural mechanisms. Controversy surrounding Wilson's sociobiological theories may also have decreased the lasting effect of this book.
The second 1981 book was Cavalli-Sforza and Feldman's Cultural Transmission and Evolution: A Quantitative Approach. Borrowing heavily from population genetics and epidemiology, this book built a mathematical theory concerning the spread of cultural traits. It describes the evolutionary implications of vertical transmission, passing cultural traits from parents to offspring; oblique transmission, passing cultural traits from any member of an older generation to a younger generation; and horizontal transmission, passing traits between members of the same population.
The next significant DIT publication was Robert Boyd and Peter Richerson's 1985 Culture and the Evolutionary Process. This book presents the now-standard mathematical models of the evolution of social learning under different environmental conditions, the population effects of social learning, various forces of selection on cultural learning rules, different forms of biased transmission and their population-level effects, and conflicts between cultural and genetic evolution. The book's conclusion also outlined areas for future research that are still relevant today.
== Current and future research ==
In their 1985 book, Boyd and Richerson outlined an agenda for future DIT research. This agenda, outlined below, called for the development of both theoretical models and empirical research. DIT has since built a rich tradition of theoretical models over the past two decades. However, there has not been a comparable level of empirical work.
In a 2006 interview Harvard biologist E. O. Wilson expressed disappointment at the little attention afforded to DIT:
"...for some reason I haven't fully fathomed, this most promising frontier of scientific research has attracted very few people and very little effort."
Kevin Laland and Gillian Ruth Brown attribute this lack of attention to DIT's heavy reliance on formal modeling.
"In many ways the most complex and potentially rewarding of all approaches, [DIT], with its multiple processes and cerebral onslaught of sigmas and deltas, may appear too abstract to all but the most enthusiastic reader. Until such a time as the theoretical hieroglyphics can be translated into a respectable empirical science most observers will remain immune to its message."
Economist Herbert Gintis disagrees with this critique, citing empirical work as well as more recent work using techniques from behavioral economics. These behavioral economic techniques have been adapted to test predictions of cultural evolutionary models in laboratory settings as well as studying differences in cooperation in fifteen small-scale societies in the field.
Since one of the goals of DIT is to explain the distribution of human cultural traits, ethnographic and ethnologic techniques may also be useful for testing hypotheses stemming from DIT. Although findings from traditional ethnologic studies have been used to buttress DIT arguments, thus far there has been little ethnographic fieldwork designed explicitly to test these hypotheses.
Herb Gintis has named DIT one of the two major conceptual theories with potential for unifying the behavioral sciences, including economics, biology, anthropology, sociology, psychology and political science. Because it addresses both the genetic and cultural components of human inheritance, Gintis sees DIT models as providing the best explanations for the ultimate cause of human behavior and the best paradigm for integrating those disciplines with evolutionary theory. In a review of competing evolutionary perspectives on human behavior, Laland and Brown see DIT as the best candidate for uniting the other evolutionary perspectives under one theoretical umbrella.
== Relation to other fields ==
=== Sociology and cultural anthropology ===
Two major topics of study in both sociology and cultural anthropology are human cultures and cultural variation.
However, dual inheritance theorists charge that both disciplines too often treat culture as a static superorganic entity that dictates human behavior, in which cultures are defined by a suite of common traits shared by a large group of people. DIT theorists argue that this does not sufficiently explain variation in cultural traits at the individual level. By contrast, DIT models human culture at the individual level and views culture as the result of a dynamic evolutionary process at the population level.
=== Human sociobiology and evolutionary psychology ===
Evolutionary psychologists study the evolved architecture of the human mind. They see it as composed of many different programs that process information, each with assumptions and procedures that were specialized by natural selection to solve a different adaptive problem faced by our hunter-gatherer ancestors (e.g., choosing mates, hunting, avoiding predators, cooperating, using aggression). These evolved programs contain content-rich assumptions about how the world and other people work. When ideas are passed from mind to mind, they are changed by these evolved inference systems (much like messages get changed in a game of telephone). But the changes are not usually random. Evolved programs add and subtract information, reshaping the ideas in ways that make them more "intuitive", more memorable, and more attention-grabbing. In other words, "memes" (ideas) are not precisely like genes. Genes are normally copied faithfully as they are replicated, but ideas normally are not. It's not just that ideas mutate every once in a while, like genes do. Ideas are transformed every time they are passed from mind to mind, because the sender's message is being interpreted by evolved inference systems in the receiver. It is useful for some applications to note, however, that there are ways to pass ideas which are more resilient and involve substantially less mutation, such as by mass distribution of printed media.
There is no necessary contradiction between evolutionary psychology and DIT, but evolutionary psychologists argue that the psychology implicit in many DIT models is too simple; evolved programs have a rich inferential structure not captured by the idea of a "content bias". They also argue that some of the phenomena DIT models attribute to cultural evolution are cases of "evoked culture"—situations in which different evolved programs are activated in different places, in response to cues in the environment.
Sociobiologists try to understand how maximizing genetic fitness, in either the modern era or past environments, can explain human behavior. When faced with a trait that seems maladaptive, some sociobiologists try to determine how the trait actually increases genetic fitness (maybe through kin selection or by speculating about early evolutionary environments). Dual inheritance theorists, in contrast, will consider a variety of genetic and cultural processes in addition to natural selection on genes.
=== Human behavioral ecology ===
Human behavioral ecology (HBE) and DIT have a relationship similar to that between ecology and evolutionary biology in the biological sciences. HBE is more concerned with ecological processes, while DIT is more focused on historical processes. One difference is that human behavioral ecologists often assume that culture is a system that produces the most adaptive outcome in a given environment. This implies that similar behavioral traditions should be found in similar environments. However, this is not always the case. A study of African cultures showed that cultural history was a better predictor of cultural traits than local ecological conditions.
=== Memetics ===
Memetics, which comes from the meme idea described in Dawkins's The Selfish Gene, is similar to DIT in that it treats culture as an evolutionary process that is distinct from genetic transmission. However, there are some philosophical differences between memetics and DIT. One difference is that memetics' focus is on the selection potential of discrete replicators (memes), whereas DIT allows for transmission of both non-replicators and non-discrete cultural variants. DIT does not assume that replicators are necessary for cumulative adaptive evolution. DIT also more strongly emphasizes the role of genetic inheritance in shaping the capacity for cultural evolution. But perhaps the biggest difference is a difference in academic lineage. Memetics as a label is more influential in popular culture than in academia. Critics of memetics argue that it is lacking in empirical support or is conceptually ill-founded, and question whether there is hope for the memetic research program succeeding. Proponents point out that many cultural traits are discrete, and that many existing models of cultural inheritance assume discrete cultural units, and hence involve memes.
== Criticisms ==
Israeli psychologist Liane Gabora has criticised DIT. She argues that describing the transmission of traits that are not passed on by way of a self-assembly code (as they are in genetic evolution) as 'inheritance' is misleading, because this second use does not capture the algorithmic structure that makes an inheritance system require a particular kind of mathematical framework.
Other criticisms of the effort to frame culture in tandem with evolution have been leveled by Richard Lewontin, Niles Eldredge, and Stuart Kauffman.
== See also ==
Nature versus nurture – Debate about heredity and environment as determinants of physical or mental development
Adaptive bias – Theory of bias in human reasoning
Cultural selection theory – Study of cultural change modelled on theories of evolutionary biology
Memetics – Study of self-replicating units of culture
Sociocultural evolution – Evolution of societies
== References ==
== Further reading ==
=== Books ===
Lumsden, C. J. and E. O. Wilson. 1981. Genes, Mind, and Culture: The Coevolutionary Process. Cambridge, Massachusetts: Harvard University Press.
Cavalli-Sforza, L. L. and M. Feldman. 1981. Cultural Transmission and Evolution: A Quantitative Approach. Princeton, New Jersey: Princeton University Press.
Boyd, Robert; Richerson, Peter J. (1985). Culture and the Evolutionary Process. University of Chicago Press. ISBN 978-0-226-06931-9.
Durham, W. H. 1991. Coevolution: Genes, Culture and Human Diversity. Stanford, California: Stanford University Press. ISBN 0-8047-1537-8
Tomasello, Michael (1999). The Cultural Origins of Human Cognition. Harvard University Press. ISBN 978-0-674-00582-2.
Shennan, S. J. 2002. Genes, Memes and Human History: Darwinian Archaeology and Cultural Evolution. London: Thames and Hudson.
Laland, Kevin N.; Brown, Gillian R. (2011). Sense and Nonsense: Evolutionary Perspectives on Human Behaviour. OUP Oxford. ISBN 978-0-19-958696-7.
Boyd, R. and P. J. Richerson. 2005. The Origin and Evolution of Cultures. Oxford: Oxford University Press.
Richerson, Peter J.; Boyd, Robert (2008). Not By Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press. ISBN 978-0-226-71213-0.
Henrich, Joseph (2015). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. Princeton University Press. ISBN 978-1-4008-7329-6.
Laland, K.H. 2017. Darwin's Unfinished Symphony: How Culture Made the Human Mind. Princeton: Princeton University Press.
Wrangham, Richard (2009). Catching Fire: How Cooking Made Us Human. Basic Books. ISBN 978-0-7867-4478-7.
=== Reviews ===
Smith, E. A. 1999. Three styles in the evolutionary analysis of human behavior. In L. Cronk, N. Chagnon, and W. Irons, (Eds.) Adaptation and Human Behavior: An Anthropological Perspective New York: Aldine de Gruyter.
Henrich, Joseph; McElreath, Richard (January 2003). "The evolution of cultural evolution". Evolutionary Anthropology: Issues, News, and Reviews. 12 (3): 123–135. doi:10.1002/evan.10110.
Mesoudi, Alex; Whiten, Andrew; Laland, Kevin N. (August 2006). "Towards a unified science of cultural evolution". Behavioral and Brain Sciences. 29 (4): 329–347. doi:10.1017/S0140525X06009083. PMID 17094820.
Gintis, H (2006). "A framework for the integration of the behavioral sciences" (PDF). Behavioral and Brain Sciences. 30 (1): 1–61. doi:10.1017/s0140525x07000581. PMID 17475022. S2CID 18887154.
Bentley, R.A., C. Lipo, H.D.G. Maschner and B. Marler 2007. Darwinian Archaeologies. In R.A. Bentley, H.D.G. Maschner & C. Chippendale (Eds.) Handbook of Archaeological Theories. Lanham (MD): AltaMira Press.
McElreath, Richard; Henrich, Joseph (2012). "Modelling cultural evolution". Oxford Handbook of Evolutionary Psychology. pp. 571–586. doi:10.1093/oxfordhb/9780198568308.013.0039. ISBN 978-0-19-856830-8.
Henrich, Joseph; McElreath, Richard (2012). "Dual-inheritance theory: The evolution of human cultural capacities and cultural evolution". Oxford Handbook of Evolutionary Psychology. pp. 555–570. doi:10.1093/oxfordhb/9780198568308.013.0038. ISBN 978-0-19-856830-8.
Sterelny, Kim (2002). Review of Genes, Memes and Human History by Stephen Shennan (PDF). London: Thames and Hudson. p. 304.
Laland, K.N.; Odling-Smee, J.; Myles, S. (2010). "How culture shaped the human genome: bringing genetics and the human sciences together". Nature Reviews Genetics. 11 (2): 137–148. doi:10.1038/nrg2734. PMID 20084086. S2CID 10287878.
Ovcharov, Dmitry (2023). "The ideas of genetic and cultural evolution in the Philosophy of the 20th Century: a Historical and philosophical analytical review". Bulletin of the Moscow State Pedagogical University. The "Philosophical Sciences" Series. 45 (1): 79–88. doi:10.25688/2078-9238.2023.45.1.6.
=== Journal articles ===
Boyd, Robert; Richerson, Peter J. (2007). "Culture, Adaptation, and Innateness". The Innate Mind: Volume 2: Culture and Cognition. pp. 23–38. doi:10.1093/acprof:oso/9780195310139.003.0002. ISBN 978-0-19-531013-9.
Richerson, Peter J.; Boyd, Robert (2001). "Built For Speed, Not For Comfort: Darwinian Theory and Human Culture". History and Philosophy of the Life Sciences. 23 (3–4): 425–465. JSTOR 23332522. PMID 12472064.
== External links ==
=== Current DIT researchers ===
Rob Boyd, Department of Anthropology, UCLA
Marcus Feldman Archived 2015-11-28 at the Wayback Machine, Department of Biological Sciences, Stanford
Joe Henrich, Departments of Psychology and Economics, University of British Columbia
Richard McElreath, Anthropology Department, UC Davis
Peter J. Richerson, Department of Environmental Science and Policy, UC Davis
=== Related researchers ===
Liane Gabora Archived 2018-10-02 at the Wayback Machine, Department of Psychology, University of British Columbia
Russell Gray Max Planck Institute for the Science of Human History, Jena, Germany
Herb Gintis Archived 2007-06-12 at the Wayback Machine, Emeritus Professor of Economics, University of Massachusetts & Santa Fe Institute
Kevin Laland Archived 2019-01-09 at the Wayback Machine, School of Biology, University of St. Andrews
Ruth Mace, Department of Anthropology, University College London
Alex Mesoudi Human Biological and Cultural Evolution Group, University of Exeter, UK
Michael Tomasello, Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology
Peter Turchin Department of Ecology and Evolutionary Biology, University of Connecticut
Mark Collard, Department of Archaeology, Simon Fraser University, and Department of Archaeology, University of Aberdeen | Wikipedia/Dual_inheritance_theory |
Type physicalism (also known as reductive materialism, type identity theory, mind–brain identity theory, and identity theory of mind) is a physicalist theory in the philosophy of mind. It asserts that mental events can be grouped into types, and can then be correlated with types of physical events in the brain. For example, one type of mental event, such as "mental pains" will, presumably, turn out to be describing one type of physical event (like C-fiber firings).
Type physicalism is contrasted with token identity physicalism, which argues that mental events are unlikely to have "steady" or categorical biological correlates. These positions make use of the philosophical type–token distinction (e.g., two persons having the same "type" of car need not mean that they share a "token", a single vehicle). Type physicalism can now be understood to argue that there is an identity between types (any mental type is identical with some physical type), whereas token identity physicalism says that every token mental state/event/property is identical to some brain state/event/property.
There are other ways a physicalist might criticize type physicalism; eliminative materialism and revisionary materialism question whether science is currently using the best categorisations. Proponents of these views argue that in the same way that talk of demonic possession was questioned with scientific advance, categorisations like "pain" may need to be revised.
== Background ==
According to U. T. Place, one of the popularizers of the idea of type-identity in the 1950s and 1960s, the idea of type-identity physicalism originated in the 1930s with the psychologist E. G. Boring and took nearly a quarter of a century to gain acceptance from the philosophical community. Boring, in a book entitled The Physical Dimensions of Consciousness (1933) wrote that:
To the author a perfect correlation is identity. Two events that always occur together at the same time in the same place, without any temporal or spatial differentiation at all, are not two events but the same event. The mind-body correlations as formulated at present, do not admit of spatial correlation, so they reduce to matters of simple correlation in time. The need for identification is no less urgent in this case (p. 16, quoted in Place [unpublished]).
The barrier to the acceptance of any such vision of the mind, according to Place, was that philosophers and logicians had not yet taken a substantial interest in questions of identity and referential identification in general. The dominant epistemology of the logical positivists at that time was phenomenalism, in the guise of the theory of sense-data. Indeed, Boring himself subscribed to the phenomenalist creed, attempting to reconcile it with an identity theory and this resulted in a reductio ad absurdum of the identity theory, since brain states would have turned out, on this analysis, to be identical to colors, shapes, tones and other sensory experiences.
The revival of interest in the work of Gottlob Frege and his ideas of sense and reference on the part of Herbert Feigl and J. J. C. Smart, along with the discrediting of phenomenalism through the influence of the later Wittgenstein and J. L. Austin, led to a more tolerant climate toward physicalistic and realist ideas. Logical behaviorism emerged as a serious contender to take the place of the Cartesian "ghost in the machine" and, although not lasting very long as a dominant position on the mind/body problem, its elimination of the whole realm of internal mental events was strongly influential in the formation and acceptance of the thesis of type identity.
== Versions of type identity theory ==
There were subtle but interesting differences between the three most widely credited formulations of the type-identity thesis, those of Place, Feigl and Smart, which were published in several articles in the late 1950s. However, all of the versions share the central idea that the mind is identical to something physical.
=== U. T. Place ===
U. T. Place's (1956) notion of the relation of identity was derived from Bertrand Russell's distinction among several types of is statements: the is of identity, the is of equality and the is of composition. Place's version of the relation of identity is more accurately described as a relation of composition. For Place, higher-level mental events are composed out of lower-level physical events and will eventually be analytically reduced to these. So, to the objection that "sensations" do not mean the same thing as "mental processes", Place could simply reply with the example that "lightning" does not mean the same thing as "electrical discharge" since we determine that something is lightning by looking and seeing it, whereas we determine that something is an electrical discharge through experimentation and testing. Nevertheless, "lightning is an electrical discharge" is true since the one is composed of the other.
=== Feigl and Smart ===
For Feigl (1957) and Smart (1959), on the other hand, the identity was to be interpreted as the identity between the referents of two descriptions (senses) which referred to the same thing, as in "the morning star" and "the evening star" both referring to Venus, an identity discovered empirically rather than by analysis of meanings. So to the objection about the lack of equality of meaning between "sensation" and "brain process", their response was to invoke this Fregean distinction: "sensations" and "brain processes" do indeed mean different things but they refer to the same physical phenomenon. Moreover, "sensations are brain processes" is a contingent, not a necessary, identity.
== Criticism and replies ==
=== Multiple realizability ===
One of the most influential and common objections to the type identity theory is the argument from multiple realizability. The multiple realizability thesis asserts that mental states can be realized in multiple kinds of systems, not just brains. Since the identity theory identifies mental events with certain brain states, it does not allow for mental states to be realized in organisms or computational systems that do not have a brain; in effect, the objection is that the identity theory is too narrow. However, token identity (where only particular tokens of mental states are identical with particular tokens of physical events) and functionalism both account for multiple realizability.
The response of type identity theorists, such as Smart, to this objection is that, while it may be true that mental events are multiply realizable, this does not demonstrate the falsity of type identity. As Smart states:
"The functionalist second order [causal] state is a state of having some first order state or other which causes or is caused by the behavior to which the functionalist alludes. In this way we have a second order type theory".
The fundamental point is that it is extremely difficult to determine where, on the continuum of first order processes, type identity ends and merely token identities begin. Take Quine's example of English country gardens. In such gardens, the tops of hedges are cut into various shapes, for example the shape of an elf. We can make generalizations over the type elf-shaped hedge only if we abstract away from the concrete details of the individual twigs and branches of each hedge. So, whether we say that two things are instances of the same type or, because of their subtle differences, merely tokens of distinct types is a matter of descriptive abstraction. The type-token distinction is not all or nothing.
Hilary Putnam essentially rejects functionalism because, he believes, it is indeed a second-order type identity theory. Putnam uses multiple realizability against functionalism itself, suggesting that mental events (or kinds, in Putnam's terminology) may be diversely implemented by diverse functional/computational kinds; there may be only a token identification between particular mental kinds and particular functional kinds. Putnam, and many others who have followed him, now tend to identify themselves as generically non-reductive physicalists. Putnam's invocation of multiple realizability does not, of course, directly answer the problem raised by Smart with respect to useful generalizations over types and the flexible nature of the type-token distinction in relation to causal taxonomies in science.
=== Qualia ===
Another frequent objection is that type identity theories fail to account for phenomenal mental states (or qualia), such as having a pain, feeling sad, experiencing nausea. (Qualia are merely the subjective qualities of conscious experience. An example is the way the pain of jarring one's elbow feels to the individual.) Arguments can be found in Saul Kripke and David Chalmers, for example, according to which the identity theorist cannot identify phenomenal mental states with brain states (or any other physical state for that matter) because one has a sort of direct awareness of the nature of such qualitative mental states, and their nature is qualitative in a way that brain states are not. A famous formulation of the qualia objection comes from Frank Jackson in the form of the Mary's room thought experiment. Let us suppose, Jackson suggests, that a particularly brilliant super-scientist named Mary has been locked away in a completely black-and-white room her entire life. Over the years in her colour-deprived world she has studied (via black-and-white books and television) the sciences of neurophysiology, vision and electromagnetics to their fullest extent; eventually Mary learns all the physical facts there are to know about experiencing colour. When Mary is released from her room and experiences colour for the first time, does she learn something new? If we answer "yes" (as Jackson suggests we do) to this question, then we have supposedly denied the truth of type physicalism, for if Mary has exhausted all the physical facts about experiencing colour prior to her release, then her subsequently acquiring some new piece of information about colour upon experiencing its quale reveals that there must be something about the experience of colour which is not captured by the physicalist picture.
The type identity theorist, such as Smart, attempts to explain away such phenomena by insisting that the experiential properties of mental events are topic-neutral. The concept of topic-neutral terms and expressions goes back to Gilbert Ryle, who identified such topic-neutral terms as "if", "or", "not", "because" and "and." If one were to hear these terms alone in the course of a conversation, it would be impossible to tell whether the topic under discussion concerned geology, physics, history, gardening, or selling pizza. For the identity theorist, sense-data and qualia are not real things in the brain (or the physical world in general) but are more like "the average electrician." The average electrician can be further analyzed and explained in terms of real electricians but is not itself a real electrician.
=== Other ===
Type physicalism has also been criticized from an illusionist perspective. Keith Frankish writes that it is "an unstable position, continually on the verge of collapsing into illusionism. The central problem, of course, is that phenomenal properties seem too weird to yield to physical explanation. They resist functional analysis and float free of whatever physical mechanisms are posited to explain them." He proposes instead that phenomenality is an illusion, arguing that it is therefore the illusion rather than phenomenal consciousness itself that requires explanation.
== See also ==
== Notes ==
== References and further reading ==
Chalmers, David (1996). The Conscious Mind, Oxford University Press, New York.
Feigl, Herbert (1958). "The 'Mental' and the 'Physical'" in Feigl, H., Scriven, M. and Maxwell, G. (eds.). Concepts, Theories and the Mind-Body Problem, Minneapolis, Minnesota Studies in the Philosophy of Science, Vol. 2, reprinted with a Postscript in Feigl 1967.
Feigl, Herbert (1967). The 'Mental' and the 'Physical', The Essay and a Postscript, Minneapolis, University of Minnesota Press.
Jackson, Frank (1982) "Epiphenomenal Qualia", Philosophical Quarterly 32, pp. 127–136.
Kripke, Saul (1972/1980). Naming and Necessity, Cambridge, Mass., Harvard University Press. (Originally published in 1972 in Davidson, D. and Harman, G. (eds.), Semantics of Natural Language, Reidel.)
Lewis, David (1966). "An Argument for the Identity Theory", Journal of Philosophy, 63, pp. 17–25.
Lewis, David (1980). "Mad Pain and Martian Pain" in Readings in the Philosophy of Psychology, Vol. I, N. Block (ed.), Harvard University Press, pp. 216–222. (Also in Lewis's Philosophical Papers, Vol. 1, Oxford University Press, 1983.)
Morris, Kevin (2019). Physicalism Deconstructed: Levels of Reality and the Mind–Body Problem, Cambridge University Press, Cambridge.
Place, U. T. (1956). "Is Consciousness a Brain Process?", British Journal of Psychology, 47, pp. 44–50.
Place, U. T. (unpublished). "Identity Theories", A Field Guide to the Philosophy of Mind. Società italiana per la filosofia analitica, Marco Nani (ed.).
Putnam, Hilary (1988). Representation and Reality. The MIT Press.
Smart, J. J. C. (1959). "Sensations and Brain Processes", Philosophical Review, 68, pp. 141–156.
Smart, J. J. C. (2004). "The Identity Theory of Mind", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), Edward N. Zalta (ed.).
== External links ==
Collection of links to online papers
Dictionary of the Philosophy of Mind
Internet Encyclopedia of Philosophy
Stanford Encyclopedia of Philosophy | Wikipedia/Identity_Theory |
The hologenome theory of evolution recasts the individual animal or plant (and other multicellular organisms) as a community or a "holobiont" – the host plus all of its symbiotic microbes. Consequently, the collective genomes of the holobiont form a "hologenome". Holobionts and hologenomes are structural entities that replace misnomers in the context of host-microbiota symbioses such as superorganism (i.e., an integrated social unit composed of conspecifics), organ, and metagenome. Variation in the hologenome may encode phenotypic plasticity of the holobiont and can be subject to evolutionary changes caused by selection and drift, if portions of the hologenome are transmitted between generations with reasonable fidelity. One of the important outcomes of recasting the individual as a holobiont subject to evolutionary forces is that genetic variation in the hologenome can be brought about by changes in the host genome and also by changes in the microbiome, including new acquisitions of microbes, horizontal gene transfers, and changes in microbial abundance within hosts. Although there is a rich literature on binary host–microbe symbioses, the hologenome concept distinguishes itself by including the vast symbiotic complexity inherent in many multicellular hosts.
== Origin ==
Lynn Margulis coined the term holobiont in her 1991 book Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis (MIT Press), though this was not in the context of diverse populations of microbes. The term holobiont is derived from the Ancient Greek ὅλος (hólos, "whole"), and the word biont for a unit of life.
In September 1994, Richard Jefferson coined the term hologenome when he introduced the hologenome theory of evolution at a presentation at Cold Spring Harbor Laboratory. At the CSH Symposium and earlier, the sheer number and diversity of microbes being discovered through the powerful tool of PCR-amplification of 16S ribosomal RNA genes was exciting, but was confounding the interpretation of diverse studies. A number of speakers referred to microbial contributions to mammalian or plant DNA samples as 'contamination'. In his lecture, Jefferson argued that these were likely not contamination, but rather essential components of the samples that reflected the actual genetic composition of the organism being studied, integral to the complex system in which it lives. This implied that the logic of the organism's performance and capabilities would be embedded only in the hologenome. Observations on the ubiquity of microbes in plant and soil samples, as well as laboratory work on the molecular genetics of vertebrate-associated microbial enzymes impacting hormone action, informed this hypothesis. Reference was made to work indicating that mating pheromones were only released after skin microbiota activated the precursors.
At the 14th South African Congress of Biochemistry and Molecular Biology in 1997, Jefferson described how the modulation of steroid and other hormone levels by microbial glucuronidases and arylsulfatases profoundly impacted the performance of the composite entity. Following on work done isolating numerous and diverse glucuronidases from microbial samples of African animal feces, and their differential cleavage of hormones, he hypothesized that this phenomenon, microbially-mediated hormone modulation, could underlie the evolution of disease and social behavior as well as holobiont fitness and system resilience. In his lectures, Jefferson coined and defined the term 'Ecotherapeutics', referring to the adjustment of the population structure of the microbial composition of plants and animals (the microbiome) and their support ecosystem to improve performance. In 2007, Jefferson followed with a series of posts on the logic of hologenome theory on Cambia's Science as Social Enterprise page.
In 2008, Eugene Rosenberg and Ilana Zilber-Rosenberg apparently independently used the term hologenome and developed the hologenome theory of evolution. This theory was originally based on their observations of Vibrio shiloi-mediated bleaching of the coral Oculina patagonica. Since its first introduction, the theory has been promoted as a fusion of Lamarckism and Darwinism and expanded to all of evolution, not just that of corals. The history of the development of the hologenome theory and the logic undergirding its development was the focus of a cover article by Carrie Arnold in New Scientist in January 2013. A comprehensive treatment of the theory, including updates by the Rosenbergs on neutrality, pathogenesis and multi-level selection, can be found in their 2013 book.
In 2013, Robert Brucker and Seth Bordenstein re-invigorated the hologenome concept by showing that the gut microbiomes of closely related Nasonia wasp species are distinguishable, and contribute to hybrid death. This set interactions between hosts and microbes in a conceptual continuum with interactions between genes in the same genome. In 2015, Bordenstein and Kevin R. Theis outlined a conceptual framework that aligns with pre-existing theories in biology.
=== Support from vertebrate biology ===
Multicellular life is made possible by the coordination of physically and temporally distinct processes, most prominently through hormones. Hormones mediate critical activities in vertebrates, including ontogeny, somatic and reproductive physiology, sexual development, performance and behaviour.
Many of these hormones – including most steroids and thyroxines – are secreted in inactive form through the endocrine and apocrine systems into epithelial corridors in which microbiota are widespread and diverse, including gut, urinary tract, lung and skin. There, the inactive hormones can be re-activated by cleavage of the glucuronide or sulfate residue, allowing them to be reabsorbed. Thus the concentration and bioavailability of many of the hormones are impacted by microbial cleavage of conjugated intermediaries, itself determined by a diverse population with redundant enzymatic capabilities. Aspects of enterohepatic circulation have been known for decades, but had been viewed as an ancillary effect of detoxification and excretion of metabolites and xenobiotics, including effects on the lifetimes of pharmaceuticals such as birth control formulations.
The basic premise of Jefferson's first exposition of the hologenome theory is that a spectrum of hormones can be re-activated and resorbed from epithelia, potentially modulating effective time and dose relationships of many vertebrate hormones. The ability to alter and modulate, amplify and suppress, disseminate and recruit new capabilities as microbially-encoded 'traits' means that sampling, sensing and responding to the environment become intrinsic features and emergent capabilities of the holobiont, with mechanisms that can provide rapid, sensitive, nuanced and persistent performance changes.
Studies by Froebe et al. in 1990 indicated that essential mating pheromones, including androstenols, required activation by skin-associated microbial glucuronidases and sulfatases. In the absence of microbial populations in the skin, no detectable aromatic pheromone was released, as the pro-pheromone remained water-soluble and non-volatile. This effectively meant that the microbes in the skin were essential to produce a mating signal.
=== Support from coral biology ===
The subsequent re-articulation of the hologenome theory by Rosenberg and Zilber-Rosenberg, published 13 years after Jefferson's definition of the theory, was based on their observations of corals and the coral probiotic hypothesis.
Coral reefs are the largest structures created by living organisms, and contain abundant and highly complex microbial communities. A coral "head" is a colony of genetically identical polyps, which secrete an exoskeleton near the base. Depending on the species, the exoskeleton may be hard, based on calcium carbonate, or soft and proteinaceous. Over many generations, the colony creates a large skeleton that is characteristic of the species. Diverse forms of life take up residence in a coral colony, including photosynthetic algae such as Symbiodinium, as well as a wide range of bacteria including nitrogen fixers, and chitin decomposers, all of which form an important part of coral nutrition. The association between coral and its microbiota is species dependent, and different bacterial populations are found in mucus, skeleton and tissue from the same coral fragment.
Over the past several decades, major declines in coral populations have occurred. Climate change, water pollution and overfishing are three stress factors that have been described as leading to disease susceptibility. Over twenty different coral diseases have been described, but of these, only a handful have had their causative agents isolated and characterized.
Coral bleaching is the most serious of these diseases. In the Mediterranean Sea, the bleaching of Oculina patagonica was first described in 1994 and, through a rigorous application of Koch's Postulates, determined to be due to infection by Vibrio shiloi. From 1994 to 2002, bacterial bleaching of O. patagonica occurred every summer in the eastern Mediterranean. Surprisingly, however, after 2003, O. patagonica in the eastern Mediterranean has been resistant to V. shiloi infection, although other diseases still cause bleaching.
The surprise stems from the knowledge that corals are long lived, with lifespans on the order of decades, and do not have adaptive immune systems. Their innate immune systems do not produce antibodies, and they should seemingly not be able to respond to new challenges except over evolutionary time scales. Yet multiple researchers have documented variations in bleaching susceptibility that may be termed 'experience-mediated tolerance'. The puzzle of how corals managed to acquire resistance to a specific pathogen led Eugene Rosenberg and Ilana Zilber-Rosenberg to propose the Coral Probiotic Hypothesis. This hypothesis proposes that a dynamic relationship exists between corals and their symbiotic microbial communities. Beneficial mutations can arise and spread among the symbiotic microbes much faster than in the host corals. By altering its microbial composition, the "holobiont" can adapt to changing environmental conditions far more rapidly than by genetic mutation and selection in the host species alone.
Extrapolating the coral probiotic hypothesis to other organisms, including higher plants and animals, led to the Rosenbergs' support for, and publications around, the hologenome theory of evolution.
== Theory ==
=== Definition ===
The framework of the hologenome theory of evolution is as follows (condensed from Rosenberg et al., 2007):
"All animals and plants establish symbiotic relationships with microorganisms."
"Different host species contain different symbiont populations and individuals of the same species can also contain different symbiont populations."
"The association between a host organism and its microbial community affect both the host and its microbiota."
"The genetic information encoded by microorganisms can change under environmental demands more rapidly, and by more processes, than the genetic information encoded by the host organism."
"... the genome of the host can act in consortium with the genomes of the associated symbiotic microorganisms to create a hologenome. This hologenome...can change more rapidly than the host genome alone, thereby conferring greater adaptive potential to the combined holobiont evolution."
"Each of these points taken together [led Rosenberg et al. to propose that] the holobiont with its hologenome should be considered as the unit of natural selection in evolution."
Some authors supplement the above principles with an additional one. If a given holobiont is to be considered a unit of natural selection:
The hologenome must be heritable from generation to generation.
Ten principles of holobionts and hologenomes were presented in PLOS Biology:
I. Holobionts and hologenomes are units of biological organization
II. Holobionts and hologenomes are not organ systems, superorganisms, or metagenomes
III. The hologenome is a comprehensive gene system
IV. The hologenome concept reboots elements of Lamarckian evolution
V. Hologenomic variation integrates all mechanisms of mutation
VI. Hologenomic evolution is most easily understood by equating a gene in the nuclear genome to a microbe in the microbiome
VII. The hologenome concept fits squarely into genetics and accommodates multilevel selection theory
VIII. The hologenome is shaped by selection and neutrality
IX. Hologenomic speciation blends genetics and symbiosis
X. Holobionts and their hologenomes do not change the rules of evolutionary biology
=== Horizontally versus vertically transmitted symbionts ===
Many case studies clearly demonstrate the importance of an organism's associated microbiota to its existence. (For example, see the numerous case studies in the Microbiome article.) However, horizontal versus vertical transmission of endosymbionts must be distinguished. Endosymbionts whose transmission is predominantly vertical may be considered as contributing to the heritable genetic variation present in a host species.
In the case of colonial organisms such as corals, the microbial associations of the colony persist even though individual members of the colony, reproducing asexually, live and die. Corals also have a sexual mode of reproduction, resulting in planktonic larvae; it is less clear whether microbial associations persist through this stage of growth. Also, the bacterial community of a colony may change with the seasons.
Many insects maintain heritable obligate symbiosis relationships with bacterial partners. For example, normal development of female wasps of the species Asobara tabida is dependent on Wolbachia infection. If "cured" of the infection, their ovaries degenerate. Transmission of the infection is vertical through the egg cytoplasm.
In contrast, many obligate symbiosis relationships have been described in the literature where transmission of the symbionts is via horizontal transfer. A well-studied example is the nocturnally feeding squid Euprymna scolopes, which camouflages its outline against the moonlit ocean surface by emitting light from its underside with the aid of the symbiotic bacterium Vibrio fischeri. The Rosenbergs cite this example within the context of the hologenome theory of evolution. Squid and bacterium maintain a highly co-evolved relationship. The newly hatched squid collects its bacteria from the sea water, and lateral transfer of symbionts between hosts permits faster spread of beneficial mutations within a host species than is possible through mutations in the host genome alone.
=== Primary versus secondary symbionts ===
Another traditional distinction between endosymbionts has been between primary and secondary symbionts. Primary endosymbionts reside in specialized host cells that may be organized into larger, organ-like structures (in insects, the bacteriome). Associations between hosts and primary endosymbionts are usually ancient, with an estimated age of tens to hundreds of millions of years. According to endosymbiotic theory, extreme cases of primary endosymbionts include mitochondria, plastids (including chloroplasts), and possibly other organelles of eukaryotic cells. Primary endosymbionts are usually transmitted exclusively vertically, and the relationship is always mutualistic and generally obligate for both partners. Primary endosymbiosis is surprisingly common. An estimated 15% of insect species, for example, harbor this type of endosymbiont. In contrast, secondary endosymbiosis is often facultative, at least from the host point of view, and the associations are less ancient. Secondary endosymbionts do not reside in specialized host tissues, but may dwell in the body cavity dispersed in fat, muscle, or nervous tissue, or may grow within the gut. Transmission may be via vertical, horizontal, or both vertical and horizontal transfer. The relationship between host and secondary endosymbiont is not necessarily beneficial to the host; indeed, the relationship may be parasitic.
The distinction between vertical and horizontal transfer, and between primary and secondary endosymbiosis, is not absolute, but follows a continuum, and may be subject to environmental influences. For example, in the stink bug Nezara viridula, the vertical transmission rate of symbionts, which females provide to offspring by smearing eggs with secretions from the gastric caeca, was 100% at 20 °C, but decreased to 8% at 30 °C. Likewise, in aphids, the vertical transmission of bacteriocytes containing the primary endosymbiont Buchnera is drastically reduced at high temperature. In like manner, the distinction between commensal, mutualistic, and parasitic relationships is also not absolute. An example is the relationship between legumes and rhizobial species: fixing atmospheric N2 is energetically more costly than taking up fixed nitrogen from the soil, so soil nitrogen is preferred when it is not limiting. During the early stages of nodule formation, the plant-rhizobial relationship actually resembles a pathogenesis more than it does a mutualistic association.
=== Neo-Lamarckism within a Darwinian context ===
Lamarckism, the concept that an organism can pass on characteristics that it acquired during its lifetime to its offspring (also known as inheritance of acquired characteristics or soft inheritance) incorporated two common ideas of its time:
Use and disuse – individuals lose characteristics they do not require (or use) and develop characteristics that are useful.
Inheritance of acquired traits – individuals inherit traits that their ancestors acquired during their lifetimes.
Although Lamarckian theory was rejected by the neo-Darwinism of the modern evolutionary synthesis in which evolution occurs through random variations being subject to natural selection, the hologenome theory has aspects that harken back to Lamarckian concepts. In addition to the traditionally recognized modes of variation (i.e. sexual recombination, chromosomal rearrangement, mutation), the holobiont allows for two additional mechanisms of variation that are specific to the hologenome theory: (1) changes in the relative population of existing microorganisms (i.e. amplification and reduction) and (2) acquisition of novel strains from the environment, which may be passed on to offspring.
Changes in the relative population of existing microorganisms corresponds to Lamarckian "use and disuse", while the ability to acquire novel strains from the environment, which may be passed on to offspring, corresponds to Lamarckian "inheritance of acquired traits". The hologenome theory, therefore, is said by its proponents to incorporate Lamarckian aspects within a Darwinian framework.
== Additional case studies ==
The pea aphid Acyrthosiphon pisum maintains an obligate symbiotic relationship with the bacterium Buchnera aphidicola, which is transmitted maternally to the embryos that develop within the mother's ovarioles. Pea aphids live on sap, which is rich in sugars but deficient in amino acids. They rely on their Buchnera endosymbiotic population for essential amino acids, supplying nutrients in exchange, as well as a protected intracellular environment that allows Buchnera to grow and reproduce. The relationship is actually more complicated than mutual nutrition; some strains of Buchnera increase host thermotolerance, while other strains do not. Both kinds of strains are present in field populations, suggesting that under some conditions, increased heat tolerance is advantageous to the host, while under other conditions, decreased heat tolerance but increased cold tolerance may be advantageous. One can consider the variant Buchnera genomes as alleles for the larger hologenome. The association between Buchnera and aphids began about 200 million years ago, with host and symbiont co-evolving since that time; in particular, it has been discovered that genome size in various Buchnera species has become extremely reduced, in some cases down to 450 kb, which is far smaller even than the 580 kb genome of Mycoplasma genitalium.
Development of mating preferences, i.e. sexual selection, is considered to be an early event in speciation. In 1989, Dodd reported mating preferences in Drosophila that were induced by diet. It has more recently been demonstrated that when otherwise identical populations of Drosophila were switched in diet between molasses medium and starch medium, the "molasses flies" preferred to mate with other molasses flies, while the "starch flies" preferred to mate with other starch flies. This mating preference appeared after only one generation and was maintained for at least 37 generations. The origin of these differences was a change in the flies' populations of a particular bacterial symbiont, Lactobacillus plantarum. Antibiotic treatment abolished the induced mating preferences. It has been suggested that the symbiotic bacteria changed the levels of cuticular hydrocarbon sex pheromones; however, several other research papers have been unable to replicate this effect.
Zilber-Rosenberg and Rosenberg (2008) have tabulated many of the ways in which symbionts are transmitted and their contributions to the fitness of the holobiont, beginning with mitochondria found in all eukaryotes, chloroplast in plants, and then various associations described in specific systems. The microbial contributions to host fitness included provision of specific amino acids, growth at high temperatures, provision of nutritional needs from cellulose, nitrogen metabolism, recognition signals, more efficient food utilization, protection of eggs and embryos against metabolism, camouflage against predators, photosynthesis, breakdown of complex polymers, stimulation of the immune system, angiogenesis, vitamin synthesis, fiber breakdown, fat storage, supply of minerals from the soil, supply of organics, acceleration of mineralization, carbon cycling, and salt tolerance.
== Criticism ==
The hologenome theory is debated. A major criticism, made by Ainsworth et al., is that V. shiloi was misidentified as the causative agent of coral bleaching, and that its presence in bleached O. patagonica was simply that of opportunistic colonization.
If this is true, the original observation that led to Rosenberg's later articulation of the theory would be invalid. On the other hand, Ainsworth et al. performed their samplings in 2005, two years after the Rosenberg group discovered that O. patagonica was no longer susceptible to V. shiloi infection; therefore their finding that bacteria are not the primary cause of present-day bleaching in the Mediterranean coral O. patagonica should not be considered surprising. The rigorous satisfaction of Koch's postulates, as employed in Kushmaro et al. (1997), is generally accepted as providing a definitive identification of infectious disease agents.
Baird et al. (2009) have questioned basic assumptions made by Reshef et al. (2006) in presuming that (1) coral generation times are too slow to adjust to novel stresses over the observed time scales, and that (2) the scale of dispersal of coral larvae is too large to allow for adaptation to local environments. Reshef et al. may simply have underestimated the potential rapidity of conventional means of natural selection; in cases of severe stress, multiple instances have been documented of ecologically significant evolutionary change occurring over a handful of generations. Novel adaptive mechanisms such as switching symbionts might not be necessary for corals to adjust to rapid climate change or novel stressors.
Organisms in symbiotic relationships evolve to accommodate each other, and the symbiotic relationship increases the overall fitness of the participant species. Although the hologenome theory is still being debated, it has gained a significant degree of popularity within the scientific community as a way of explaining rapid adaptive changes that are difficult to accommodate within a traditional Darwinian framework.
Definitions and uses of the words holobiont and hologenome also differ between proponents and skeptics, and the misuse of the terms has led to confusions over what comprises evidence related to the hologenome. Ongoing discourse is attempting to clear this confusion. Theis et al. clarify that "critiquing the hologenome concept is not synonymous with critiquing coevolution, and arguing that an entity is not a primary unit of selection dismisses the fact that the hologenome concept has always embraced multilevel selection."
For instance, Chandler and Turelli (2014) criticize the conclusions of Brucker and Bordenstein (2013), noting that their observations are also consistent with an alternative explanation. Brucker and Bordenstein (2014) responded to these criticisms, claiming they were unfounded because of factual inaccuracies and altered arguments and definitions that were not advanced by Brucker and Bordenstein (2013).
Recently, Forest L Rohwer and colleagues developed a novel statistical test to examine the potential for the hologenome theory of evolution in coral species. They found that coral species do not inherit microbial communities, and are instead colonized by a core group of microbes that associate with a diversity of species. The authors conclude: "Identification of these two symbiont communities supports the holobiont model and calls into question the hologenome theory of evolution." However, other studies in coral adhere to the original and pluralistic definitions of holobionts and hologenomes. David Bourne, Kathleen Morrow and Nicole Webster clarify that "The combined genomes of this coral holobiont form a coral hologenome, and genomic interactions within the hologenome ultimately define the coral phenotype."
== References ==
== Further reading ==
For recent literature on holobionts and hologenomes published in an open access platform, see the following reference: Bordenstein SR, Theis KR (August 2015). "Host Biology in Light of the Microbiome: Ten Principles of Holobionts and Hologenomes". PLOS Biology. 13 (8): e1002226. doi:10.1371/journal.pbio.1002226. PMC 4540581. PMID 26284777. | Wikipedia/Hologenome_theory_of_evolution |
In psychology, a dual process theory provides an account of how thought can arise in two different ways, or as a result of two different processes. Often, the two processes consist of an implicit (automatic), unconscious process and an explicit (controlled), conscious process. Verbalized explicit processes or attitudes and actions may change with persuasion or education, though implicit processes or attitudes usually take a long time to change with the forming of new habits. Dual process theories can be found in social, personality, cognitive, and clinical psychology. They have also been linked with economics via prospect theory and behavioral economics, and increasingly with sociology through cultural analysis.
== History ==
The foundations of dual process theory are probably ancient. Spinoza (1632-1677) distinguished between the passions and reason. William James (1842-1910) believed that there were two different kinds of thinking: associative and true reasoning. James theorized that empirical thought was used for things like art and design work. For James, images and thoughts of past experiences would come to mind, providing ideas of comparison or abstractions. He claimed that associative knowledge was only from past experiences, describing it as "only reproductive". James believed that true reasoning could enable overcoming “unprecedented situations” just as a map could enable navigating past obstacles.
There are various dual process theories that were produced after William James's work. Dual process models are very common in the study of social psychological variables, such as attitude change. Examples include Petty and Cacioppo's elaboration likelihood model (explained below) and Chaiken's heuristic systematic model. According to these models, persuasion may occur after either intense scrutiny or extremely superficial thinking. In cognitive psychology, attention and working memory have also been conceptualized as relying on two distinct processes. Whether the focus be on social psychology or cognitive psychology, there are many examples of dual process theories produced throughout the past. The following just show a glimpse into the variety that can be found.
Peter Wason and Jonathan St B. T. Evans suggested dual process theory in 1974. In Evans' later theory, there are two distinct types of processes: heuristic processes and analytic processes. He suggested that during heuristic processes, an individual chooses which information is relevant to the current situation. Relevant information is then processed further whereas irrelevant information is not. Following the heuristic processes come analytic processes. During analytic processes, the relevant information that is chosen during the heuristic processes is then used to make judgments about the situation.
Richard E. Petty and John Cacioppo proposed a dual process theory focused in the field of social psychology in 1986. Their theory is called the elaboration likelihood model of persuasion. In their theory, there are two different routes to persuasion in making decisions. The first route is known as the central route and this takes place when a person is thinking carefully about a situation, elaborating on the information they are given, and creating an argument. This route occurs when an individual's motivation and ability are high. The second route is known as the peripheral route and this takes place when a person is not thinking carefully about a situation and uses shortcuts to make judgments. This route occurs when an individual's motivation or ability are low.
Steven Sloman produced another interpretation of dual processing in 1996. He believed that associative reasoning takes stimuli and divides them into logical clusters of information based on statistical regularity. He proposed that the strength of an association is directly proportional to the similarity of past experiences, relying on temporal and similarity relations to determine reasoning rather than an underlying mechanical structure. The other reasoning process in Sloman's opinion was the rule-based system. This system functioned on logical structure and variables based upon rule systems to come to conclusions different from those of the associative system. He also believed that the rule-based system had control over the associative system, though it could only suppress it. This interpretation corresponds well to earlier work on computational models of dual processes of reasoning.
Daniel Kahneman provided further interpretation by differentiating the two styles of processing more, calling them intuition and reasoning in 2003. Intuition (or system 1), similar to associative reasoning, was determined to be fast and automatic, usually with strong emotional bonds included in the reasoning process. Kahneman said that this kind of reasoning was based on formed habits and very difficult to change or manipulate. Reasoning (or system 2) was slower and much more volatile, being subject to conscious judgments and attitudes.
Fritz Strack and Roland Deutsch proposed another dual process theory focused in the field of social psychology in 2004 called the Reflective-Impulsive Model (RIM). According to their model, there are two separate systems that control human behavior: the reflective system and the impulsive system. In the reflective system, symbolic representations and propositional reasoning are used to guide decision-making processes based on knowledge, values, and goals. On the other hand, in the impulsive system, decisions are made using associative processes that are automatic based on temporal and spatial proximity. Unlike other dual-process models, Strack and Deutsch emphasize that both systems operate in parallel and interact with each other.
== Theories ==
=== Dual process learning model ===
Ron Sun proposed a dual-process model of learning (both implicit learning and explicit learning). The model (named CLARION) re-interpreted voluminous behavioral data in psychological studies of implicit learning and skill acquisition in general. The resulting theory is two-level and interactive, based on the idea of the interaction of one-shot explicit rule learning (i.e., explicit learning) and gradual implicit tuning through reinforcement (i.e. implicit learning), and it accounts for many previously unexplained cognitive data and phenomena based on the interaction of implicit and explicit learning.
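The interaction of one-shot explicit rule learning with gradual implicit tuning through reinforcement can be sketched in a few lines of code. The toy example below is only a rough illustration of that general idea, not the actual CLARION architecture; the two-action task, the update rule, the parameters, and the rule-extraction threshold are all invented for the example.

```python
import random

# Toy sketch of two interacting learning processes, in the spirit of
# dual-process learning models such as CLARION. Not the real architecture:
# task, parameters, and thresholds are invented for illustration.
ACTIONS = ["left", "right"]
ALPHA = 0.2  # slow, gradual implicit tuning rate

implicit_value = {a: 0.0 for a in ACTIONS}  # tuned gradually by reinforcement
explicit_rules = set()                       # rules extracted in one shot

def choose_action():
    # Explicit rules, once extracted, take precedence; otherwise fall back
    # to the implicit values, with a little exploration.
    if explicit_rules:
        return next(iter(explicit_rules))
    if random.random() < 0.1:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: implicit_value[a])

def environment(action):
    # A trivial task: "right" is usually rewarded, "left" never is.
    return 1.0 if action == "right" and random.random() < 0.9 else 0.0

for trial in range(200):
    action = choose_action()
    reward = environment(action)
    # Implicit learning: small incremental update (gradual tuning).
    implicit_value[action] += ALPHA * (reward - implicit_value[action])
    # Explicit learning: extract a rule in one shot once an action has
    # proven clearly successful at the implicit level.
    if reward > 0 and implicit_value[action] > 0.8:
        explicit_rules.add(action)

print("implicit values:", implicit_value)
print("explicit rules:", explicit_rules)
```

In the sketch, the implicit values creep toward the rewarded action over many trials, while an explicit rule is added in a single step once the implicit level signals a clear success; from then on the rule dominates action selection, loosely mirroring the interaction of the two levels described above.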
The Dual Process Learning model can be applied to a group-learning environment. This is called The Dual Objective Model of Cooperative Learning and it requires a group practice that consists of both cognitive and affective skills among the team. It involves active participation by the teacher to monitor the group throughout its entirety until the product has been successfully completed. The teacher focuses on the effectiveness of cognitive and affective practices within the group's cooperative learning environment. The instructor acts as an aide to the group by encouraging their positive affective behavior and ideas. In addition, the teacher remains engaged throughout, continually watching for improvement in the group's development of the product and in interactions amongst the students. The teacher will interject to give feedback on ways the students can better contribute affectively or cognitively to the group as a whole. The goal is to foster a sense of community amongst the group while creating a proficient product that is a culmination of each student's unique ideas.
=== Dual coding ===
Using a somewhat different approach, Allan Paivio has developed a dual-coding theory of information processing. According to this model, cognition involves the coordinated activity of two independent, but connected systems, a nonverbal system and a verbal system that is specialized to deal with language. The nonverbal system is hypothesized to have developed earlier in evolution. Both systems rely on different areas of the brain. Paivio has reported evidence that nonverbal, visual images are processed more efficiently and are approximately twice as memorable. Additionally, the verbal and nonverbal systems are additive, so one can improve memory by using both types of information during learning. This additive dual coding claim is compatible with evidence that verbalized thinking does not necessarily overcome common faulty intuitions or heuristics, such as studies showing that thinking aloud during heuristics and biases tests did not necessarily improve performance on the test.
=== Dual-process accounts of reasoning ===
==== Background ====
Dual-process accounts of reasoning postulate that there are two systems or minds in one brain. A current theory is that there are two cognitive systems underlying thinking and reasoning and that these different systems were developed through evolution. These systems are often referred to as "implicit" and "explicit" or by the more neutral "System 1" and "System 2", as coined by Keith Stanovich and Richard West.
The systems have multiple names by which they can be called, as well as many different properties.
==== System 1 ====
John Bargh reconceptualized the notion of an automatic process by breaking down the term "automatic" into four components: awareness, intentionality, efficiency, and controllability. One way for a process to be labeled as automatic is for the person to be unaware of it. There are three ways in which a person may be unaware of a mental process: they can be unaware of the presence of the stimulus (subliminal), how the stimulus is categorized or interpreted (unaware of the activation of stereotype or trait constructs), or the effect the stimulus has on the person's judgments or actions (misattribution). Another way for a mental process to be labeled as automatic is for it to be unintentional. Intentionality refers to the conscious "start up" of a process. An automatic process may begin without the person consciously willing it to start. The third component of automaticity is efficiency. Efficiency refers to the amount of cognitive resources required for a process. An automatic process is efficient because it requires few resources. The fourth component is controllability, referring to the person's conscious ability to stop a process. An automatic process is uncontrollable, meaning that the process will run until completion and the person will not be able to stop it. Bargh conceptualized automaticity as a component view (any combination of awareness, intention, efficiency, and control) as opposed to the historical concept of automaticity as an all-or-none dichotomy.
One takeaway from the psychological research on dual process theory is that System 1 (intuition) tends to be more accurate in areas where a person has gathered a lot of data with reliable and fast feedback, such as social dynamics, or cognitive domains in which they have become expert or at least familiar.
==== System 2 in humans ====
System 2 is evolutionarily recent and speculated to be specific to humans. It is also known as the explicit system, the rule-based system, the rational system, or the analytic system. It performs slower, sequential thinking. It is domain-general, performed in the central working memory system. Because of this, it has a limited capacity and is slower than System 1, and its capacity is correlated with general intelligence. It is known as the rational system because it reasons according to logical standards. Some overall properties associated with System 2 are that it is rule-based, analytic, controlled, demanding of cognitive capacity, and slow.
== Social psychology ==
Dual process theory has shaped social psychology in such domains as stereotyping, categorization, and judgment. In particular, the study of automaticity and of implicit processes in dual process theories has had the greatest influence on research into person perception. People usually perceive other people's information and categorize them by age, gender, race, or role. According to Neuberg and Fiske (1987), a perceiver who receives a good amount of information about the target person will use formal mental categories (unconscious) as a basis for judging the person. When the perceiver is distracted, the perceiver has to pay more attention to target information (conscious). Categorization is the basic process of stereotyping, in which people are categorized into social groups that have specific stereotypes associated with them. Such judgments can be retrieved automatically, without subjective intention or effort. Attitudes can also be activated spontaneously by the object. John Bargh's study offered an alternative view, holding that essentially all attitudes, even weak ones, are capable of automatic activation. Whether the attitude is formed automatically or operates with effort and control, it can still bias further processing of information about the object and direct the perceivers' actions with regard to the target. According to Shelly Chaiken, heuristic processing is the activation and application of judgmental rules; heuristics are presumed to be learned and stored in memory. It is used when people make quick judgments from readily accessible rules such as "experts are always right" (System 1), whereas systematic processing involves effortful scrutiny of all the relevant information and requires deliberate cognitive work (System 2). Heuristic and systematic processing then influence the domain of attitude change and social influence.
Unconscious thought theory is the counterintuitive and contested view that the unconscious mind is adapted to highly complex decision making. Where most dual system models define complex reasoning as the domain of effortful conscious thought, UTT argues complex issues are best dealt with unconsciously.
=== Stereotyping ===
Dual process models of stereotyping propose that when we perceive an individual, salient stereotypes pertaining to them are activated automatically. These activated representations will then guide behavior if no other motivation or cognition intervenes. However, controlled cognitive processes can inhibit the use of stereotypes when there is motivation and there are cognitive resources to do so. Devine (1989) provided evidence for the dual process theory of stereotyping in a series of three studies. Study 1 found that prejudice (according to the Modern Racism Scale) was unrelated to knowledge of cultural stereotypes of African Americans. Study 2 showed that subjects used automatically activated stereotypes in judgments regardless of prejudice level (personal belief). Participants were primed with stereotype relevant or non-relevant words and then asked to give hostility ratings of a target with an unspecified race who was performing ambiguously hostile behaviors. Regardless of prejudice level, participants who were primed with more stereotype-relevant words gave higher hostility ratings to the ambiguous target. Study 3 investigated whether people can control stereotype use by activating personal beliefs. Low-prejudice participants asked to list their thoughts about African Americans listed more positive thoughts than did those high in prejudice.
=== Terror management theory and the dual process model ===
According to psychologists Pyszczynski, Greenberg, & Solomon, the dual process model, in relation to terror management theory, identifies two systems by which the brain manages fear of death: distal and proximal. Distal defenses fall under the System 1 category because they are unconscious, whereas proximal defenses fall under the System 2 category because they operate with conscious thought. However, recent work by the ManyLabs project has shown that the mortality salience effect (e.g., reflecting on one's own death encouraging a greater defense of one's own worldview) has failed to replicate. (ManyLabs projects attempt to replicate a seminal theoretical finding across multiple laboratories; in this case, some of the participating labs included input from the original terror management theorists.)
=== Dual process and habituation ===
Habituation can be described as decreased response to a repeated stimulus. According to Groves and Thompson, the process of habituation also mimics a dual process. The dual process theory of behavioral habituation relies on two underlying (non-behavioral) processes, depression and facilitation, with the relative strength of one over the other determining whether habituation or sensitization is seen in the behavior. Habituation weakens the intensity of a repeated stimulus over time subconsciously. As a result, a person will give the stimulus less conscious attention over time. Conversely, sensitization subconsciously strengthens a stimulus over time, giving the stimulus more conscious attention. Though these two systems are not both conscious, they interact to help people understand their surroundings by strengthening some stimuli and diminishing others.
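The interplay of the two underlying processes can be illustrated with a toy simulation. The equations, parameters, and numbers below are invented for illustration and are not taken from Groves and Thompson; the sketch only shows how a decrementing process (depression) and an incrementing one (facilitation) can jointly yield either habituation or sensitization depending on stimulus intensity.

```python
# Toy sketch of a dual-process account of habituation. All constants and
# update rules are invented for illustration, not taken from the literature.
BASE = 1.0                 # response to a single, novel presentation
DEPRESSION_STEP = 0.15     # each presentation depresses the stimulus-response pathway
FACILITATION_STEP = 0.30   # each presentation adds transient facilitation
FACILITATION_DECAY = 0.5   # facilitation fades between presentations

def simulate(n_presentations, intensity=1.0):
    depression, facilitation = 0.0, 0.0
    responses = []
    for _ in range(n_presentations):
        facilitation = facilitation * FACILITATION_DECAY + FACILITATION_STEP * intensity
        depression += DEPRESSION_STEP
        # Observed behavior reflects whichever process currently dominates.
        responses.append(max(0.0, BASE * intensity + facilitation - depression))
    return responses

# Weak stimulus: depression soon outweighs facilitation -> habituation.
print([round(r, 2) for r in simulate(10, intensity=0.5)])
# Intense stimulus: facilitation dominates the early trials -> sensitization.
print([round(r, 2) for r in simulate(10, intensity=2.0)])
```

With the weak stimulus the simulated responses decline across presentations, while with the intense stimulus they rise over the first few trials before eventually declining, which is the qualitative pattern the dual-process account is meant to capture.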
=== Dual process and steering cognition ===
According to Walker, System 1 functions as a serial cognitive steering processor for System 2, rather than a parallel system. In large-scale repeated studies with school students, Walker tested how students adjusted their imagined self-operation in different curriculum subjects of maths, science and English. He showed that students consistently adjust the biases of their heuristic self-representation to specific states for the different curriculum subjects. The model of cognitive steering proposes that, in order to process epistemically varied environmental data, a heuristic orientation system is required to align varied, incoming environmental data with existing neural algorithmic processes. The brain's associative simulation capacity, centered around the imagination, plays an integrator role to perform this function. Evidence for early-stage concept formation and future self-operation within the hippocampus supports the model. In the cognitive steering model, a conscious state emerges from effortful associative simulation, required to align novel data accurately with remote memory, via later algorithmic processes. By contrast, fast unconscious automaticity is constituted by unregulated simulatory biases, which induce errors in subsequent algorithmic processes. The phrase "rubbish in, rubbish out" is used to explain errorful heuristic processing: errors will always occur if the accuracy of initial retrieval and location of data is poorly self-regulated.
=== Application in economic behavior ===
According to Alos-Ferrer and Strack the dual-process theory has relevance in economic decision-making through the multiple-selves model, in which one person's self-concept is composed of multiple selves depending on the context. An example of this is someone who as a student is hard working and intelligent, but as a sibling is caring and supportive. Decision-making involves the use of both automatic and controlled processes, but also depends on the person and situation, and given a person's experiences and current situation the decision process may differ. Given that there are two decision processes with differing goals one is more likely to be more useful in particular situations. For example, a person is presented with a decision involving a selfish but rational motive and a social motive. Depending on the individual one of the motives will be more appealing than the other, but depending on the situation the preference for one motive or the other may change. Using the dual-process theory it is important to consider whether one motive is more automatic than the other, and in this particular case the automaticity would depend on the individual and their experiences. A selfish person may choose the selfish motive with more automaticity than a non-selfish person, and yet a controlled process may still outweigh this based on external factors such as the situation, monetary gains, or societal pressure. Although there is likely to be a stable preference for which motive one will select based on the individual it is important to remember that external factors will influence the decision. Dual process theory also provides a different source of behavioral heterogeneity in economics. It is mostly assumed within economics that this heterogeneity comes from differences in taste and rationality, while dual process theory indicates necessary considerations of which processes are automated and how these different processes may interact within decision making.
=== Moral psychology ===
Moral judgments are said to be explained in part by dual process theory. Moral dilemmas present us with two morally unpalatable options. For example, should we sacrifice one life in order to save many lives or just let many lives be lost? Consider a historical example: should we authorize the use of force against other nations in order to prevent "any future acts of international terrorism" or should we take a more pacifist approach to foreign lives and risk the possibility of terrorist attack? Dual process theorists have argued that sacrificing something of moral value in order to prevent a worse outcome (often called the "utilitarian" option) involves more reflective reasoning than the more pacifist option (also known as the "deontological" option). However, some evidence suggests that this is not always the case, that reflection can sometimes increase harm-rejection responses, and that reflection correlates with both the sacrificial and pacifist (but not more anti-social) responses. So some have proposed that tendencies toward sacrificing for the greater good or toward pacifism are better explained by factors besides the two processes proposed by dual process theorists.
=== Religiosity ===
Various studies have found that performance on tests designed to require System 2 thinking (a.k.a., reflection tests) can predict differences in philosophical tendencies, including religiosity (i.e., the degree to which one reports being involved in organized religion). This "analytic atheist" effect has even been found among samples of people that include academic philosophers. Nonetheless, some studies detect this correlation between atheism and reflective, System 2 thinking in only some of the countries that they study, suggesting that it is not just intuitive and reflective thinking that predict variance in religiosity, but also cultural differences.
== Evidence ==
=== Belief bias effect ===
A belief bias is the tendency to judge the strength of arguments based on the plausibility of their conclusion rather than how strongly they support that conclusion. Some evidence suggests that this bias results from competition between logical (System 2) and belief-based (System 1) processes during evaluation of arguments.
Studies on the belief-bias effect were first designed by Jonathan Evans to create a conflict between logical reasoning and prior knowledge about the truth of conclusions. Participants are asked to evaluate syllogisms that are: valid arguments with believable conclusions, valid arguments with unbelievable conclusions, invalid arguments with believable conclusions, and invalid arguments with unbelievable conclusions. Participants are told to only agree with conclusions that logically follow from the premises given. The results suggest that when the conclusion is believable, people erroneously accept invalid arguments as valid more often than they do when the conclusion is unbelievable. This is taken to suggest that System 1 beliefs are interfering with the logic of System 2.
=== Tests with working memory ===
De Neys conducted a study that manipulated working memory capacity while participants answered syllogistic problems. This was done by burdening executive processes with secondary tasks. Results showed that when System 1 triggered the correct response, the distractor task had no effect on the production of a correct answer, which supports the view that System 1 is automatic and works independently of working memory. But when belief bias was present (the System 1 belief-based response differed from the logically correct System 2 response), the participants' performance was impeded by the decreased availability of working memory. This accords with the dual-process account of reasoning: System 1 was shown to work independently of working memory, and System 2 was impeded by the lack of working memory space, so System 1 took over, which resulted in a belief bias.
=== fMRI studies ===
Vinod Goel and others produced neuropsychological evidence for dual-process accounts of reasoning using fMRI studies. They provided evidence that anatomically distinct parts of the brain were responsible for the two different kinds of reasoning. They found that content-based reasoning caused left temporal hemisphere activation whereas abstract formal problem reasoning activated the parietal system. They concluded that different kinds of reasoning, depending on the semantic content, activated one of two different systems in the brain.
A similar study incorporated fMRI during a belief-bias test. They found that different mental processes were competing for control of the response to the problems given in the belief-bias test. The prefrontal cortex was critical in detecting and resolving conflicts, which is characteristic of System 2, and had already been associated with System 2. The ventral medial prefrontal cortex, known to be associated with the more intuitive or heuristic responses of System 1, was the area in competition with the prefrontal cortex.
=== Near-infrared spectroscopy ===
Tsujii and Watanabe did a follow-up study to Goel and Dolan's fMRI experiment. They examined the neural correlates on the inferior frontal cortex (IFC) activity in belief-bias reasoning using near-infrared spectroscopy (NIRS). Subjects performed a syllogistic reasoning task, using congruent and incongruent syllogisms, while attending to an attention-demanding secondary task. The interest of the researchers was in how the secondary-tasks changed the activity of the IFC during congruent and incongruent reasoning processes. The results showed that the participants performed better in the congruent test than in the incongruent test (evidence for belief bias); the high demand secondary test impaired the incongruent reasoning more than it impaired the congruent reasoning. NIRS results showed that the right IFC was activated more during incongruent trials. Participants with enhanced right IFC activity performed better on the incongruent reasoning than those with decreased right IFC activity. This study provided some evidence to enhance the fMRI results that the right IFC, specifically, is critical in resolving conflicting reasoning, but that it is also attention-demanding; its effectiveness decreases with loss of attention. The loss of effectiveness in System 2 following loss of attention makes the automatic heuristic System 1 take over, which results in belief bias.
=== Matching bias ===
Matching bias is a non-logical heuristic: a tendency to treat as relevant the information whose lexical content matches the statement about which one is reasoning, and to ignore relevant information that does not match. It mostly affects problems with abstract content. It doesn't involve prior knowledge and beliefs but it is still seen as a System 1 heuristic that competes with the logical System 2.
The Wason selection task provides evidence for the matching bias. The test is designed as a measure of a person's logical thinking ability. Performance on the Wason selection task is sensitive to the content and context with which it is presented. If you introduce a negative component into the conditional statement of the Wason selection task, e.g. 'If there is an A on one side of the card then there is not a 3 on the other side', there is a strong tendency to choose cards that match the items in the negative condition to test, regardless of their logical status. Changing the test to be a test of following rules rather than truth and falsity is another condition where the participants will ignore the logic because they will simply follow the rule, e.g. changing the test to be a test of a police officer looking for underage drinkers. The original task is more difficult because it requires explicit and abstract logical thought from System 2, and the police officer test is cued by relevant prior knowledge from System 1.
Studies have shown that you can train people to inhibit matching bias, which provides neuropsychological evidence for the dual-process theory of reasoning. When trials before and after the training are compared, there is evidence for a forward shift in the activated brain areas. Pre-test results showed activation in locations along the ventral pathway and post-test results showed activation around the ventro-medial prefrontal cortex and anterior cingulate. Matching bias has also been shown to generalise to syllogistic reasoning.
=== Evolution ===
Dual-process theorists claim that System 2, a general purpose reasoning system, evolved late and worked alongside the older autonomous sub-systems of System 1. The success of Homo sapiens lends evidence to their higher cognitive abilities relative to other hominids. Mithen theorizes that the increase in cognitive ability occurred 50,000 years ago, when representational art, imagery, and the design of tools and artefacts are first documented, and hypothesizes that this change was due to the adaptation of System 2.
Most evolutionary psychologists do not agree with dual-process theorists. They claim that the mind is modular, and domain-specific, thus they disagree with the theory of the general reasoning ability of System 2. They have difficulty agreeing that there are two distinct ways of reasoning and that one is evolutionarily old, and the other is new. To ease this discomfort, the theory is that once System 2 evolved, it became a 'long leash' system without much genetic control which allowed humans to pursue their individual goals.
=== Issues with the dual-process account of reasoning ===
The dual-process account of reasoning is an old theory, as noted above. But according to Evans it has adapted itself from the old, logicist paradigm to new theories that apply to other kinds of reasoning as well. The theory seems more influential now than in the past, which is questionable. Evans outlined five "fallacies":
All dual-process theories are essentially the same. There is a tendency to assume all theories that propose two modes or styles of thinking are related and so they end up all lumped under the umbrella term of "dual-process theories".
There are just two systems underlying System 1 and System 2 processing. There are clearly more than just two cognitive systems underlying people's performance on dual-processing tasks. Hence the change to theorizing that processing is done in two minds that have different evolutionary histories and that each have multiple sub-systems.
System 1 processes are responsible for cognitive biases; System 2 processes are responsible for normatively correct responding. Both System 1 and System 2 processing can lead to normative answers and both can involve cognitive biases.
System 1 processing is contextualised while System 2 processing is abstract. Recent research has found that beliefs and context can influence System 2 processing as well as System 1.
Fast processing indicates the use of System 1 rather than System 2 processes. Just because processing is fast does not mean it is done by System 1. Experience and different heuristics can influence System 2 processing to go faster.
Another argument against dual-process accounts for reasoning which was outlined by Osman is that the proposed dichotomy of System 1 and System 2 does not adequately accommodate the range of processes accomplished. Moshman proposed that there should be four possible types of processing as opposed to two. They would be implicit heuristic processing, implicit rule-based processing, explicit heuristic processing, and explicit rule-based processing. Another fine-grained division is as follows: implicit action-centered processes, implicit non-action-centered processes, explicit action-centered processes, and explicit non-action-centered processes (that is, a four-way division reflecting both the implicit-explicit distinction and the procedural-declarative distinction).
In response to the question as to whether there are dichotomous processing types, many have instead proposed a single-system framework which incorporates a continuum between implicit and explicit processes.
== Alternative model ==
The dynamic graded continuum (DGC), originally proposed by Cleeremans and Jiménez is an alternative single system framework to the dual-process account of reasoning. It has not been accepted as better than the dual-process theory; it is instead usually used as a comparison with which one can evaluate the dual-process model. The DGC proposes that differences in representation generate variation in forms of reasoning without assuming a multiple system framework. It describes how graded properties of the representations that are generated while reasoning result in the different types of reasoning. It separates terms like implicit and automatic processing where the dual-process model uses the terms interchangeably to refer to the whole of System 1. Instead the DGC uses a continuum of reasoning that moves from implicit, to explicit, to automatic.
== Fuzzy-trace theory ==
According to Charles Brainerd and Valerie Reyna's fuzzy-trace theory of memory and reasoning, people have two memory representations: verbatim and gist. Verbatim is memory for surface information (e.g. the words in this sentence) whereas gist is memory for semantic information (e.g. the meaning of this sentence).
This dual process theory posits that we encode, store, retrieve, and forget the information in these two traces of memory separately and completely independently of each other. Furthermore, the two memory traces decay at different rates: verbatim decays quickly, while gist lasts longer.
In terms of reasoning, fuzzy-trace theory posits that as we mature, we increasingly rely more on gist information than on verbatim information. Evidence for this lies in framing experiments where framing effects become stronger when verbatim information (percentages) is replaced with gist descriptions. Other experiments rule out predictions of prospect theory (extended and original) as well as other current theories of judgment and decision making.
== See also ==
Automatic and controlled processes – Categories of cognitive processing
Cognitive inhibition – The mind's ability to tune out irrelevant stimuli
Dual process model of coping
Dual process theory (moral psychology) – Theory of human moral judgment
Opponent-process theory – Psychological and neurological model
== References ==
=== Further reading ===
Kahneman, Daniel (2013) [2011]. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. ISBN 978-0374533557.
== External links ==
Laboratory for Rational Decision Making, Cornell University | Wikipedia/Dual_process_theory |
Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence the study of naïve physics is a part of the effort to formalize the common knowledge of human beings.
Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism.
Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate.
== Examples ==
Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature:
What goes up must come down
A dropped object falls straight down
A solid object cannot pass through another solid object
A vacuum sucks things towards it
An object is either at rest or moving, in an absolute sense
Two events are either simultaneous or they are not
Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it.
== Psychological research ==
The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, because infants cannot use words to explain things (such as their reactions) the way most adults or older children can.
Research in naïve physics relies on technology to measure eye gaze and reaction time in particular. Through observation, researchers know that infants get bored looking at the same stimulus after a certain amount of time. That boredom is called habituation. When an infant is sufficiently habituated to a stimulus, he or she will typically look away, alerting the experimenter to his or her boredom. At this point, the experimenter will introduce another stimulus. The infant will then dishabituate by attending to the new stimulus. In each case, the experimenter measures the time it takes for the infant to habituate to each stimulus.
As an example of the use of this method, research by Susan Hespos and colleagues studied five-month-old infants' responses to the physics of liquids and solids. Infants in this research were shown liquid being poured from one glass to another until they were habituated to the event. That is, they spent less time looking at this event. Then, the infants were shown an event in which the liquid turned to a solid, which tumbled from the glass rather than flowed. The infants looked longer at the new event; that is, they dishabituated.
Researchers infer that the longer the infant takes to habituate to a new stimulus, the more it violates his or her expectations of physical phenomena. When an adult observes an optical illusion that seems physically impossible, they will attend to it until it makes sense.
It is commonly believed that our understanding of physical laws emerges strictly from experience. But research shows that infants, who do not yet have such expansive knowledge of the world, have the same extended reaction to events that appear physically impossible. Such studies support the hypothesis that all people are born with an innate ability to understand the physical world.
Smith and Casati (1994) have reviewed the early history of naïve physics, and especially the role of the Italian psychologist Paolo Bozzi.
=== Types of experiments ===
The basic experimental procedure of a study on naïve physics involves three steps: prediction of the infant's expectation, violation of that expectation, and measurement of the results. As mentioned above, the physically impossible event holds the infant's attention longer, indicating surprise when expectations are violated.
==== Solidity ====
An experiment that tests an infant's knowledge of solidity involves the impossible event of one solid object passing through another. First, the infant is shown a flat, solid screen rotating from 0° to 180° in an arc. Next, a solid block is placed in the path of the screen, preventing it from completing its full range of motion. The infant habituates to this event, as it is what anyone would expect. Then, the experimenter creates the impossible event, and the solid screen passes through the solid block. The infant is confused by the event and attends longer than in the probable-event trial.
==== Occlusion ====
An occlusion event tests the knowledge that an object exists even if it is not immediately visible. Jean Piaget originally called this concept object permanence. When Piaget formed his developmental theory in the 1950s, he claimed that object permanence is learned, not innate. The children's game peek-a-boo is a classic example of this phenomenon, and one which obscures the true grasp infants have on permanence. To disprove this notion, an experimenter designs an impossible occlusion event. The infant is shown a block and a transparent screen. The infant habituates, then a solid panel is placed in front of the objects to block them from view. When the panel is removed, the block is gone, but the screen remains. The infant is confused because the block has disappeared, indicating that infants understand that objects maintain their location in space and do not simply disappear.
==== Containment ====
A containment event tests the infant's recognition that an object that is bigger than a container cannot fit completely into that container. Elizabeth Spelke, one of the psychologists who founded the naïve physics movement, identified the continuity principle, which conveys an understanding that objects exist continuously in time and space. Both occlusion and containment experiments hinge on the continuity principle. In the experiment, the infant is shown a tall cylinder and a tall cylindrical container. The experimenter demonstrates that the tall cylinder fits into the tall container, and the infant is bored by the expected physical outcome. The experimenter then places the tall cylinder completely into a much shorter cylindrical container, and the impossible event confuses the infant. Extended attention demonstrates the infant's understanding that containers cannot hold objects that exceed them in height.
=== Baillargeon's research ===
The published findings of Renee Baillargeon brought innate knowledge to the forefront in psychological research. Her research method centered on the visual preference technique. Baillargeon and her followers studied how infants show preference to one stimulus over another. Experimenters judge preference by the length of time an infant will stare at a stimulus before habituating. Researchers believe that preference indicates the infant's ability to discriminate between the two events.
== See also ==
Cartoon physics
Common sense
Elizabeth Spelke
Folk psychology
Renee Baillargeon
Weak ontology
== References == | Wikipedia/Naïve_physics |
In mathematical logic, an ω-consistent (or omega-consistent, also called numerically segregative) theory is a theory (collection of sentences) that is not only (syntactically) consistent (that is, does not prove a contradiction), but also avoids proving certain infinite combinations of sentences that are intuitively contradictory. The name is due to Kurt Gödel, who introduced the concept in the course of proving the incompleteness theorem.
== Definition ==
A theory T is said to interpret the language of arithmetic if there is a translation of formulas of arithmetic into the language of T so that T is able to prove the basic axioms of the natural numbers under this translation.
A T that interprets arithmetic is ω-inconsistent if, for some property P of natural numbers (defined by a formula in the language of T), T proves P(0), P(1), P(2), and so on (that is, for every standard natural number n, T proves that P(n) holds), but T also proves that there is some natural number n such that P(n) fails. This may not generate a contradiction within T because T may not be able to prove for any specific value of n that P(n) fails, only that there is such an n. In particular, such n is necessarily a nonstandard integer in any model for T (Quine has thus called such theories "numerically insegregative").
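Schematically, the definition amounts to the following pattern of provability (a compact restatement of the paragraph above, not an additional assumption):
{\displaystyle T\vdash P(0),\quad T\vdash P(1),\quad T\vdash P(2),\;\ldots \qquad {\text{and yet}}\qquad T\vdash \exists n\,\neg P(n).}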
T is ω-consistent if it is not ω-inconsistent.
There is a weaker but closely related property of Σ1-soundness. A theory T is Σ1-sound (or 1-consistent, in another terminology) if every Σ01-sentence provable in T is true in the standard model of arithmetic N (i.e., the structure of the usual natural numbers with addition and multiplication).
If T is strong enough to formalize a reasonable model of computation, Σ1-soundness is equivalent to demanding that whenever T proves that a Turing machine C halts, then C actually halts. Every ω-consistent theory is Σ1-sound, but not vice versa.
More generally, we can define an analogous concept for higher levels of the arithmetical hierarchy. If Γ is a set of arithmetical sentences (typically Σ0n for some n), a theory T is Γ-sound if every Γ-sentence provable in T is true in the standard model. When Γ is the set of all arithmetical formulas, Γ-soundness is called just (arithmetical) soundness.
If the language of T consists only of the language of arithmetic (as opposed to, for example, set theory), then a sound system is one whose model can be thought of as the set ω, the usual set of mathematical natural numbers. The case of general T is different, see ω-logic below.
Σn-soundness has the following computational interpretation: if the theory proves that a program C using a Σn−1-oracle halts, then C actually halts.
== Examples ==
=== Consistent, ω-inconsistent theories ===
Write PA for the theory Peano arithmetic, and Con(PA) for the statement of arithmetic that formalizes the claim "PA is consistent". Con(PA) could be of the form "No natural number n is the Gödel number of a proof in PA that 0=1". Now, the consistency of PA implies the consistency of PA + ¬Con(PA). Indeed, if PA + ¬Con(PA) was inconsistent, then PA alone would prove ¬Con(PA)→0=1, and a reductio ad absurdum in PA would produce a proof of Con(PA). By Gödel's second incompleteness theorem, PA would be inconsistent.
Therefore, assuming that PA is consistent, PA + ¬Con(PA) is consistent too. However, it would not be ω-consistent. This is because, for any particular n, PA, and hence PA + ¬Con(PA), proves that n is not the Gödel number of a proof that 0=1. However, PA + ¬Con(PA) proves that, for some natural number n, n is the Gödel number of such a proof (this is just a direct restatement of the claim ¬Con(PA)).
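In symbols, writing ProofPA(n) as an illustrative abbreviation for the formula "n is the Gödel number of a proof in PA that 0=1" (this name is not fixed by the article), the example exhibits exactly the ω-inconsistency pattern:
{\displaystyle {\text{PA}}+\neg \operatorname {Con} ({\text{PA}})\;\vdash \;\neg \mathrm {Proof} _{\text{PA}}(n)\quad {\text{for each fixed }}n,\qquad \qquad {\text{PA}}+\neg \operatorname {Con} ({\text{PA}})\;\vdash \;\exists n\,\mathrm {Proof} _{\text{PA}}(n).}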
In this example, the axiom ¬Con(PA) is Σ1, hence the system PA + ¬Con(PA) is in fact Σ1-unsound, not just ω-inconsistent.
=== Arithmetically sound, ω-inconsistent theories ===
Let T be PA together with the axioms c ≠ n for each natural number n, where c is a new constant added to the language. Then T is arithmetically sound (as any nonstandard model of PA can be expanded to a model of T), but ω-inconsistent (as it proves ∃x c = x, and c ≠ n for every number n).
Σ1-sound ω-inconsistent theories using only the language of arithmetic can be constructed as follows. Let IΣn be the subtheory of PA with the induction schema restricted to Σn-formulas, for any n > 0. The theory IΣn + 1 is finitely axiomatizable; let A be its single axiom, and consider the theory T = IΣn + ¬A. We can assume that A is an instance of the induction schema, which has the form
{\displaystyle \forall w\,[B(0,w)\land \forall x\,(B(x,w)\to B(x+1,w))\to \forall x\,B(x,w)].}
If we denote the formula
{\displaystyle \forall w\,[B(0,w)\land \forall x\,(B(x,w)\to B(x+1,w))\to B(n,w)]}
by P(n), then for every natural number n, the theory T (actually, even the pure predicate calculus) proves P(n). On the other hand, T proves the formula ∃x ¬P(x), because it is logically equivalent to the axiom ¬A. Therefore, T is ω-inconsistent.
It is possible to show that T is Πn + 3-sound. In fact, it is Πn + 3-conservative over the (obviously sound) theory IΣn. The argument is more complicated (it relies on the provability of the Σn + 2-reflection principle for IΣn in IΣn + 1).
=== Arithmetically unsound, ω-consistent theories ===
Let ω-Con(PA) be the arithmetical sentence formalizing the statement "PA is ω-consistent". Then the theory PA + ¬ω-Con(PA) is unsound (Σ3-unsound, to be precise), but ω-consistent. The argument is similar to the first example: a suitable version of the Hilbert–Bernays–Löb derivability conditions holds for the "provability predicate" ω-Prov(A) = ¬ω-Con(PA + ¬A), hence it satisfies an analogue of Gödel's second incompleteness theorem.
== ω-logic ==
The concept of theories of arithmetic whose integers are the true mathematical integers is captured by ω-logic. Let T be a theory in a countable language that includes a unary predicate symbol N intended to hold just of the natural numbers, as well as specified names 0, 1, 2, ..., one for each (standard) natural number (which may be separate constants, or constant terms such as 0, 1, 1+1, 1+1+1, ..., etc.). Note that T itself could be referring to more general objects, such as real numbers or sets; thus in a model of T the objects satisfying N(x) are those that T interprets as natural numbers, not all of which need be named by one of the specified names.
The system of ω-logic includes all axioms and rules of the usual first-order predicate logic, together with, for each T-formula P(x) with a specified free variable x, an infinitary ω-rule of the form:
From P(0), P(1), P(2), … infer ∀x (N(x) → P(x)).
That is, if the theory asserts (i.e. proves) P(n) separately for each natural number n given by its specified name, then it also asserts P collectively for all natural numbers at once via the evident finite universally quantified counterpart of the infinitely many antecedents of the rule. For a theory of arithmetic, meaning one with intended domain the natural numbers such as Peano arithmetic, the predicate N is redundant and may be omitted from the language, with the consequent of the rule for each P simplifying to
∀x P(x).
An ω-model of T is a model of T whose domain includes the natural numbers and whose specified names and symbol N are standardly interpreted, respectively as those numbers and the predicate having just those numbers as its domain (whence there are no nonstandard numbers). If N is absent from the language then what would have been the domain of N is required to be that of the model, i.e. the model contains only the natural numbers. (Other models of T may interpret these symbols nonstandardly; the domain of N need not even be countable, for example.) These requirements make the ω-rule sound in every ω-model. As a corollary to the omitting types theorem, the converse also holds: the theory T has an ω-model if and only if it is consistent in ω-logic.
There is a close connection of ω-logic to ω-consistency. A theory consistent in ω-logic is also ω-consistent (and arithmetically sound). The converse is false, as consistency in ω-logic is a much stronger notion than ω-consistency. However, the following characterization holds: a theory is ω-consistent if and only if its closure under unnested applications of the ω-rule is consistent.
== Relation to other consistency principles ==
If the theory T is recursively axiomatizable, ω-consistency has the following characterization, due to Craig Smoryński:
T is ω-consistent if and only if
{\displaystyle T+\mathrm {RFN} _{T}+\mathrm {Th} _{\Pi _{2}^{0}}(\mathbb {N} )}
is consistent.
Here,
{\displaystyle \mathrm {Th} _{\Pi _{2}^{0}}(\mathbb {N} )}
is the set of all Π02-sentences valid in the standard model of arithmetic, and
{\displaystyle \mathrm {RFN} _{T}}
is the uniform reflection principle for T, which consists of the axioms
{\displaystyle \forall x\,(\mathrm {Prov} _{T}(\ulcorner \varphi ({\dot {x}})\urcorner )\to \varphi (x))}
for every formula φ with one free variable. In particular, a finitely axiomatizable theory T in the language of arithmetic is ω-consistent if and only if T + PA is Σ02-sound.
== Notes ==
== Bibliography ==
Kurt Gödel (1931). 'Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I'. In Monatshefte für Mathematik. Translated into English as On Formally Undecidable Propositions of Principia Mathematica and Related Systems. | Wikipedia/Ω-consistent_theory |
In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning.
== History ==
Natural deduction grew out of a context of dissatisfaction with the axiomatizations of deductive reasoning common to the systems of Hilbert, Frege, and Russell (see, e.g., Hilbert system). Such axiomatizations were most famously used by Russell and Whitehead in their mathematical treatise Principia Mathematica. Spurred on by a series of seminars in Poland in 1926 by Łukasiewicz that advocated a more natural treatment of logic, Jaśkowski made the earliest attempts at defining a more natural deduction, first in 1929 using a diagrammatic notation, and later updating his proposal in a sequence of papers in 1934 and 1935. His proposals led to different notations such as Fitch notation or Suppes' method, for which Lemmon gave a variant now known as Suppes–Lemmon notation.
Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1933, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent natürliches Schließen) was coined in that paper:
Gentzen was motivated by a desire to establish the consistency of number theory. He was unable to prove the main result required for the consistency result, the cut elimination theorem—the Hauptsatz—directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus, for which he proved the Hauptsatz both for classical and intuitionistic logic. In a series of seminars in 1961 and 1962 Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study was to become a reference work on natural deduction, and included applications for modal and second-order logic.
In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives.
=== History of notation styles ===
Natural deduction has had a large variety of notation styles, which can make it difficult to recognize a proof if you're not familiar with one of them. To help with this situation, this article has a § Notation section explaining how to read all the notation that it will actually use. This section just explains the historical evolution of notation styles, most of which cannot be shown because there are no illustrations available under a public copyright license – the reader is pointed to the SEP and IEP for pictures.
Gentzen invented natural deduction using tree-shaped proofs – see § Gentzen's tree notation for details.
Jaśkowski changed this to a notation that used various nested boxes.
Fitch changed Jaśkowski's method of drawing the boxes, creating Fitch notation.
1940: In a textbook, Quine indicated antecedent dependencies by line numbers in square brackets, anticipating Suppes' 1957 line-number notation.
1950: In a textbook, Quine (1982, pp. 241–255) demonstrated a method of using one or more asterisks to the left of each line of proof to indicate dependencies. This is equivalent to Kleene's vertical bars. (It is not totally clear if Quine's asterisk notation appeared in the original 1950 edition or was added in a later edition.)
1957: An introduction to practical logic theorem proving in a textbook by Suppes (1999, pp. 25–150). This indicated dependencies (i.e. antecedent propositions) by line numbers at the left of each line.
1963: Stoll (1979, pp. 183–190, 215–219) uses sets of line numbers to indicate antecedent dependencies of the lines of sequential logical arguments based on natural deduction inference rules.
1965: The entire textbook by Lemmon (1978) is an introduction to logic proofs using a method based on that of Suppes, what is now known as Suppes–Lemmon notation.
1967: In a textbook, Kleene (2002, pp. 50–58, 128–130) briefly demonstrated two kinds of practical logic proofs, one system using explicit quotations of antecedent propositions on the left of each line, the other system using vertical bar-lines on the left to indicate dependencies.
== Notation ==
Here is a table with the most common notational variants for logical connectives.
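For example, commonly encountered variants (given here as a rough illustration; the article's original table is not reproduced) include:
negation: ¬A, ~A, −A
conjunction: A ∧ B, A & B, A · B
disjunction: A ∨ B, A + B
implication: A → B, A ⊃ B, A ⇒ B
equivalence: A ↔ B, A ≡ B, A ⇔ B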
=== Gentzen's tree notation ===
Gentzen, who invented natural deduction, had his own notation style for arguments. This will be exemplified by a simple argument below. Let's say we have a simple example argument in propositional logic, such as, "if it's raining then it's cloudy; it is raining; therefore it's cloudy". (This is an instance of modus ponens.) Representing this as a list of propositions, as is common, we would have:
1) P → Q
2) P
∴ Q
In Gentzen's notation, this would be written like this:
{\displaystyle {\frac {P\to Q,P}{Q}}}
The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as
P → Q, P ⊢ Q. (The turnstile, for syntactic consequence, is of lower precedence than the comma, which represents premise combination, which in turn is of lower precedence than the arrow, used for material implication; so no parentheses are needed to interpret this formula.)
Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because natural deduction is a syntactic proof system, which assumes inference rules as primitives.
Gentzen's style will be used in much of this article. Gentzen's discharging annotations used to internalise hypothetical judgments can be avoided by representing proofs as a tree of sequents Γ ⊢A instead of a tree of judgments that A (is true).
=== Suppes–Lemmon notation ===
Many textbooks use Suppes–Lemmon notation, so this article will also give that – although as of now, this is only included for propositional logic, and the rest of the coverage is given only in Gentzen style. A proof, laid out in accordance with the Suppes–Lemmon notation style, is a sequence of lines containing sentences, where each sentence is either an assumption, or the result of applying a rule of proof to earlier sentences in the sequence. Each line of proof is made up of a sentence of proof, together with its annotation, its assumption set, and the current line number. The assumption set lists the assumptions on which the given sentence of proof depends, which are referenced by the line numbers. The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence. Here's an example proof:
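A minimal illustrative proof in this format, deriving Q from the premises P → Q and P (MPP names the modus ponens rule; this is a sketch following the description above, not the article's original example):
1     (1)  P → Q     A
2     (2)  P         A
1,2   (3)  Q         1,2 MPP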
This proof will become clearer when the inference rules and their appropriate annotations are specified – see § Propositional inference rules (Suppes–Lemmon style).
== Propositional language syntax ==
This section defines the formal syntax for a propositional logic language, contrasting the common ways of doing so with a Gentzen-style way of doing so.
=== Common definition styles ===
In classical propositional calculus the formal language ℒ is usually defined (here: by recursion) as follows:
Each propositional variable is a formula.
"
⊥
{\displaystyle \bot }
" is a formula.
If φ and ψ are formulae, so are (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), (φ ↔ ψ).
Nothing else is a formula.
Negation (¬) is defined as implication to falsity: ¬φ ≝ φ → ⊥, where ⊥ (falsum) represents a contradiction or absolute falsehood.
Older publications, and publications that do not focus on logical systems like minimal, intuitionistic or Hilbert systems, take negation as a primitive logical connective, meaning it is assumed as a basic operation and not defined in terms of other connectives. Some authors, such as Bostock, use ⊥ and ⊤, and also define ¬ as primitives.
=== Gentzen-style definition ===
A syntax definition can also be given using § Gentzen's tree notation, by writing well-formed formulas below the inference line and any schematic variables used by those formulas above it. For instance, the equivalent of rules 3 and 4, from Bostock's definition above, is written as follows:
{\displaystyle {\frac {\varphi }{(\neg \varphi )}}\quad {\frac {\varphi \quad \psi }{(\varphi \lor \psi )}}\quad {\frac {\varphi \quad \psi }{(\varphi \land \psi )}}\quad {\frac {\varphi \quad \psi }{(\varphi \rightarrow \psi )}}\quad {\frac {\varphi \quad \psi }{(\varphi \leftrightarrow \psi )}}}
A different notational convention sees the language's syntax as a categorial grammar with the single category "formula", denoted by the symbol ℱ. So any elements of the syntax are introduced by categorizations, for which the notation is φ : ℱ, meaning "φ is an expression for an object in the category ℱ." The sentence-letters, then, are introduced by categorizations such as P : ℱ, Q : ℱ, R : ℱ, and so on; the connectives, in turn, are defined by statements similar to the above, but using categorization notation, as seen below:
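For instance (a sketch in the categorization notation just described; the article's own table is not reproduced), the binary connectives and negation can be introduced by statements of the form:
{\displaystyle {\frac {\varphi :{\mathcal {F}}\qquad \psi :{\mathcal {F}}}{(\varphi \land \psi ):{\mathcal {F}}}}\qquad {\frac {\varphi :{\mathcal {F}}\qquad \psi :{\mathcal {F}}}{(\varphi \lor \psi ):{\mathcal {F}}}}\qquad {\frac {\varphi :{\mathcal {F}}\qquad \psi :{\mathcal {F}}}{(\varphi \to \psi ):{\mathcal {F}}}}\qquad {\frac {\varphi :{\mathcal {F}}}{(\neg \varphi ):{\mathcal {F}}}}}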
In the rest of this article, the φ : ℱ categorization notation will be used for any Gentzen-notation statements defining the language's grammar; any other statements in Gentzen notation will be inferences, asserting that a sequent follows rather than that an expression is a well-formed formula.
== Gentzen-style propositional logic ==
=== Gentzen-style inference rules ===
Let the propositional language ℒ be inductively defined as Φ ::= p1, p2, … ∣ ⊥ ∣ (Φ → Φ) ∣ (Φ ∧ Φ) ∣ (Φ ∨ Φ).
Define negation as ¬Φ ≝ (Φ → ⊥).
The following is a list of primitive inference rules for natural deduction in propositional logic:
In this table the Greek letters φ, ψ, χ are schemata, which range over formulas rather than only over atomic propositions. The name of a rule is given to the right of its formula tree. For instance, the first introduction rule is named ∧I, which is short for "conjunction introduction".
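The article's original rule table is not reproduced here; in a standard formulation (given as a sketch, with discharge brackets written as in the examples below), the rules named in this section read:
{\displaystyle {\frac {\varphi \qquad \psi }{\varphi \land \psi }}\;\land _{I}\qquad {\frac {\varphi \land \psi }{\varphi }}\;\land _{E1}\qquad {\frac {\varphi \land \psi }{\psi }}\;\land _{E2}\qquad {\frac {\varphi }{\varphi \lor \psi }}\;\lor _{I1}\qquad {\frac {\psi }{\varphi \lor \psi }}\;\lor _{I2}\qquad {\frac {\varphi \to \psi \qquad \varphi }{\psi }}\;\to _{E}}
{\displaystyle {\cfrac {{\begin{matrix}[\varphi ]^{u}\\\vdots \\\psi \end{matrix}}}{\varphi \to \psi }}\;\to _{I^{u}}\qquad {\cfrac {\varphi \lor \psi \qquad {\begin{matrix}[\varphi ]^{u}\\\vdots \\\chi \end{matrix}}\qquad {\begin{matrix}[\psi ]^{v}\\\vdots \\\chi \end{matrix}}}{\chi }}\;\lor _{E^{u,v}}\qquad {\frac {\bot }{\varphi }}\;\bot _{E}\qquad {\frac {\neg \neg \varphi }{\varphi }}\;\neg \neg _{E}}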
Minimal logic: the natural deduction rules are NDMPC = {∧I, ∧E, ∨I, ∨E, →I, →E}.
Without the rules ⊥E and ¬¬E the system defines minimal logic (as discussed by Johansson).
Intuitionistic logic: the natural deduction rules are NDIPC = NDMPC ∪ {⊥E}.
When the rule ⊥E (principle of explosion) is added to the rules for minimal logic, the system defines intuitionistic logic.
The statement P → ¬¬P is valid (already in minimal logic, see example 1 below), unlike the reverse implication, which would entail the law of excluded middle.
Classical logic: the natural deduction rules are NDCPC = NDIPC ∪ {¬¬E}.
When all listed natural deduction rules are admitted, the system defines classical logic.
=== Gentzen-style example proofs ===
Example 1: Proof, within minimal logic, of P → ¬¬P.
Goal: P → ((P → ⊥) → ⊥)
Proof:
{\displaystyle {\cfrac {{\cfrac {[P]^{v}\qquad [P\to \bot ]^{u}}{\bot }}\to _{E}}{{\cfrac {((P\to \bot )\to \bot )}{P\to ((P\to \bot )\to \bot )}}\to _{I^{v}}}}\to _{I^{u}}}
Example 2: Proof, within minimal logic, of A → (B → (A ∧ B)):
{\displaystyle {\cfrac {{\cfrac {[A]^{u}\quad [B]^{w}}{A\land B}}\ \land _{I}}{{\cfrac {B\to \left(A\land B\right)}{A\to \left(B\to \left(A\land B\right)\right)}}\ \to _{I^{u}}}}\ \to _{I^{w}}}
== Fitch-style propositional logic ==
Fitch developed a system of natural deduction which is characterized by
linear presentation of the proof, instead of presentation as a tree;
subordinate proofs, where assumptions could be opened within a subderivation and discharged later.
Later logicians and educators such as Patrick Suppes and E. J. Lemmon rebranded Fitch's system. While they introduced graphical changes—such as replacing indentation with vertical bars—the underlying structure of Fitch-style natural deduction remained intact. These variations are often referred to as the Suppes–Lemmon format, though they are fundamentally based on Fitch's original notation.
== Suppes–Lemmon-style propositional logic ==
=== Suppes–Lemmon-style inference rules ===
The linear presentation used in Fitch- and Suppes–Lemmon-style proofs — with line numbers and vertical alignment/assumption sets — makes subproofs clearly visible. Fitch (sparingly and cautiously) used derived rules. Suppes–Lemmon went further and added derived rules to the toolbox of natural deduction rules.
Suppes introduced natural deduction using Gentzen-style rules.
He defined negation in terms of contradiction: ¬P ≡ (P → ⊥).
He discussed derived rules explicitly, though not always distinguishing them clearly from primitive ones in layout.
His system is close to minimal, but allows derived steps for brevity.
Lemmon formalized more derived rules. He likewise defined negation as implication to falsity: ¬P ≡ P → ⊥. This is not stated as a formal definition in Beginning Logic, but it is implicitly assumed throughout the system. Here's how we know:
Use of RAA (Reductio ad Absurdum): Lemmon regularly used RAA in the form: assume P, derive ⊥, then conclude ¬P. This only works if ¬P is understood as P → ⊥.
Proofs involving contradiction: Lemmon used the fact that from ¬P ∧ P one can derive ⊥. This requires treating ¬P as P → ⊥, so that modus ponens yields a contradiction.
Absence of a primitive “¬” rule: Lemmon did not include a standalone rule for introducing or eliminating ¬. Instead, he derived negation using implication and contradiction.
In the table below, based on Lemmon (1978) and Allen & Hand (2022), Lemmon's derived rules are highlighted. They can be derived from the (non-highlighted) Gentzen rules.
There are nine primitive rules of proof, which are the rule assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rules of double negation and reductio ad absurdum, of which only one is needed. Disjunctive Syllogism can be used as an easier alternative to the proper ∨-elimination, and MTT is a commonly given rule, although it is not primitive.
=== Suppes–Lemmon-style example proofs ===
Recall that an example proof was already given when introducing § Suppes–Lemmon notation. This is a second example.
==== Example 2 ====
==== Example 3 ====
The next derivation proves two theorems:
lines 1 - 8 prove, within minimal logic: ⊢MPC ¬¬(P ∨ ¬P).
lines 1 - 9 prove, within classical logic: ⊢CPC P ∨ ¬P.
Goals:
lines 1 - 8: ⊢MPC ((P ∨ (P → ⊥)) → ⊥) → ⊥.
lines 1 - 9: ⊢CPC P ∨ (P → ⊥).
Remark: Valery Glivenko proved the following theorem:
If φ is a propositional formula, then φ is a classical tautology if and only if ¬¬φ is an intuitionistic tautology.
This implies that all classical propositional theorems φ can be proved as in this example:
Prove ¬¬φ within intuitionistic logic (i.e. without ¬¬E).
Apply ¬¬E to get φ from ¬¬φ.
== Consistency, completeness, and normal forms ==
A theory is said to be consistent if falsehood is not provable (from no assumptions) and is complete if every theorem or its negation is provable using the inference rules of the logic. These are statements about the entire logic, and are usually tied to some notion of a model. However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models. The first of these is local consistency, also known as local reducibility, which says that any derivation containing an introduction of a connective followed immediately by its elimination can be turned into an equivalent derivation without this detour. It is a check on the strength of elimination rules: they must not be so strong that they include knowledge not already contained in their premises. As an example, consider conjunctions.
{\displaystyle {\begin{aligned}{\cfrac {{\cfrac {{\cfrac {}{A\ }}u\qquad {\cfrac {}{B\ }}w}{A\wedge B\ }}\wedge _{I}}{A\ }}\wedge _{E1}\end{aligned}}\quad \Rightarrow \quad {\cfrac {}{A\ }}u}
Dually, local completeness says that the elimination rules are strong enough to decompose a connective into the forms suitable for its introduction rule. Again for conjunctions:
{\displaystyle {\cfrac {}{A\wedge B\ }}u\quad \Rightarrow \quad {\begin{aligned}{\cfrac {{\cfrac {{\cfrac {}{A\wedge B\ }}u}{A\ }}\wedge _{E1}\qquad {\cfrac {{\cfrac {}{A\wedge B\ }}u}{B\ }}\wedge _{E2}}{A\wedge B\ }}\wedge _{I}\end{aligned}}}
These notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus, using the Curry–Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961. It is much easier to show this indirectly by means of a cut-free sequent calculus presentation.
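Under that correspondence, the local reduction for conjunction shown above is β-reduction on pairs, and local expansion is η-expansion; in the pairing and projection notation introduced in § Proofs and type theory below, this reads roughly:
{\displaystyle \mathrm {fst} \,(\pi _{1},\pi _{2})\;\rightsquigarrow _{\beta }\;\pi _{1}\qquad \qquad \pi \;\rightsquigarrow _{\eta }\;(\mathrm {fst} \,\pi ,\mathrm {snd} \,\pi )\quad {\text{for }}\pi :A\wedge B.}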
== First and higher-order extensions ==
The logic of the earlier section is an example of a single-sorted logic, i.e., a logic with a single kind of object: propositions. Many extensions of this simple framework have been proposed; in this section we will extend it with a second sort of individuals or terms. More precisely, we will add a new category, "term", denoted 𝒯. We shall fix a countable set V of variables, another countable set F of function symbols, and construct terms with the following formation rules:
{\displaystyle {\frac {v\in V}{v:{\mathcal {T}}}}{\hbox{ var}}_{F}}
and
{\displaystyle {\frac {f\in F\qquad t_{1}:{\mathcal {T}}\qquad t_{2}:{\mathcal {T}}\qquad \cdots \qquad t_{n}:{\mathcal {T}}}{f(t_{1},t_{2},\cdots ,t_{n}):{\mathcal {T}}}}{\hbox{ app}}_{F}}
For propositions, we consider a third countable set P of predicates, and define atomic predicates over terms with the following formation rule:
{\displaystyle {\frac {\phi \in P\qquad t_{1}:{\mathcal {T}}\qquad t_{2}:{\mathcal {T}}\qquad \cdots \qquad t_{n}:{\mathcal {T}}}{\phi (t_{1},t_{2},\cdots ,t_{n}):{\mathcal {F}}}}{\hbox{ pred}}_{F}}
The first two rules of formation provide a definition of a term that is effectively the same as that defined in term algebra and model theory, although the focus of those fields of study is quite different from natural deduction. The third rule of formation effectively defines an atomic formula, as in first-order logic, and again in model theory.
To these are added a pair of formation rules, defining the notation for quantified propositions; one for universal (∀) and existential (∃) quantification:
{\displaystyle {\frac {x\in V\qquad A:{\mathcal {F}}}{\forall x.A:{\mathcal {F}}}}\;\forall _{F}\qquad \qquad {\frac {x\in V\qquad A:{\mathcal {F}}}{\exists x.A:{\mathcal {F}}}}\;\exists _{F}}
The universal quantifier has the introduction and elimination rules:
{\displaystyle {\cfrac {\begin{array}{c}{\cfrac {}{a:{\mathcal {T}}}}{\text{ u}}\\\vdots \\{}[a/x]A\end{array}}{\forall x.A}}\;\forall _{I^{u,a}}\qquad \qquad {\frac {\forall x.A\qquad t:{\mathcal {T}}}{[t/x]A}}\;\forall _{E}}
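For instance, a single application of the elimination rule ∀E with the term f(a) (where a and f are an illustrative constant and function symbol, not fixed by the article) instantiates the bound variable:
{\displaystyle {\frac {\forall x.\,(P(x)\to Q(x))\qquad f(a):{\mathcal {T}}}{P(f(a))\to Q(f(a))}}\;\forall _{E}}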
The existential quantifier has the introduction and elimination rules:
{\displaystyle {\frac {[t/x]A}{\exists x.A}}\;\exists _{I}\qquad \qquad {\cfrac {\begin{array}{cc}&\underbrace {\,{\cfrac {}{a:{\mathcal {T}}}}{\hbox{ u}}\quad {\cfrac {}{[a/x]A}}{\hbox{ v}}\,} \\&\vdots \\\exists x.A\quad &C\\\end{array}}{C}}\exists _{E^{a,u,v}}}
In these rules, the notation [t/x] A stands for the substitution of t for every (visible) instance of x in A, avoiding capture. As before the superscripts on the name stand for the components that are discharged: the term a cannot occur in the conclusion of ∀I (such terms are known as eigenvariables or parameters), and the hypotheses named u and v in ∃E are localised to the second premise in a hypothetical derivation. Although the propositional logic of earlier sections was decidable, adding the quantifiers makes the logic undecidable.
So far, the quantified extensions are first-order: they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules:
{\displaystyle {\cfrac {\begin{matrix}{\cfrac {}{p:{\mathcal {F}}}}{\hbox{ u}}\\\vdots \\A:{\mathcal {F}}\\\end{matrix}}{\forall p.A:{\mathcal {F}}}}\;\forall _{F^{u}}\qquad \qquad {\cfrac {\begin{matrix}{\cfrac {}{p:{\mathcal {F}}}}{\hbox{ u}}\\\vdots \\A:{\mathcal {F}}\\\end{matrix}}{\exists p.A:{\mathcal {F}}}}\;\exists _{F^{u}}}
A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. It is possible to be in-between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind.
== Proofs and type theory ==
The presentation of natural deduction so far has concentrated on the nature of propositions without giving a formal definition of a proof. To formalise the notion of proof, we alter the presentation of hypothetical derivations slightly. We label the antecedents with proof variables (from some countable set V of variables), and decorate the succedent with the actual proof. The antecedents or hypotheses are separated from the succedent by means of a turnstile (⊢). This modification sometimes goes under the name of localised hypotheses. The following diagram summarises the change.
The collection of hypotheses will be written as Γ when their exact composition is not relevant.
To make proofs explicit, we move from the proof-less judgment "A" to a judgment: "π is a proof of (A)", which is written symbolically as "π : A". Following the standard approach, proofs are specified with their own formation rules for the judgment "π proof". The simplest possible proof is the use of a labelled hypothesis; in this case the evidence is the label itself.
Let us re-examine some of the connectives with explicit proofs. For conjunction, we look at the introduction rule ∧I to discover the form of proofs of conjunction: they must be a pair of proofs of the two conjuncts. Thus:
The elimination rules ∧E1 and ∧E2 select either the left or the right conjunct; thus the proofs are a pair of projections—first (fst) and second (snd).
For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label. In the rule, "Γ, u:A" stands for the collection of hypotheses Γ, together with the additional hypothesis u.
With proofs available explicitly, one can manipulate and reason about proofs. The key operation on proofs is the substitution of one proof for an assumption used in another proof. This is commonly known as a substitution theorem, and can be proved by induction on the depth (or structure) of the second judgment.
=== Substitution theorem ===
If Γ ⊢ π1 : A and Γ, u:A ⊢ π2 : B, then Γ ⊢ [π1/u] π2 : B.
So far the judgment "Γ ⊢ π : A" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types, and proofs as programs in the lambda calculus. Thus the interpretation of "π : A" is "the program π has type A". The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections.
The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising). If the canonical form is unique, then the theory is said to be strongly normalising. Normalisability is a rare feature of most non-trivial type theories, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥". For this reason, the propositions as types; proofs as programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.
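As a loose illustration of this computational reading (a sketch, not part of the formal presentation above), the following Haskell fragment treats propositions as types: conjunction becomes the pair type, implication becomes the function type, and the projections play the role of the elimination rules. All identifiers are invented for the example.
-- Conjunction introduction: a proof of A ∧ B is a pair of proofs.
conjIntro :: a -> b -> (a, b)
conjIntro x y = (x, y)

-- The elimination rules are the projections (fst and snd).
conjElim1 :: (a, b) -> a
conjElim1 = fst

conjElim2 :: (a, b) -> b
conjElim2 = snd

-- Implication elimination (modus ponens) is function application.
implElim :: (a -> b) -> a -> b
implElim f x = f x

-- Implication introduction is lambda abstraction; for example, a proof
-- of A → (B → A) discharges the hypothesis for B without using it.
weaken :: a -> (b -> a)
weaken x = \_ -> x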
=== Example: Dependent Type Theory ===
Like logic, type theory has many extensions and variants, including first-order and higher-order versions. One branch, known as dependent type theory, is used in a number of computer-assisted proof systems. Dependent type theory allows quantifiers to range over programs themselves. These quantified types are written as Π and Σ instead of ∀ and ∃, and have the following formation rules:
These types are generalisations of the arrow and product types, respectively, as witnessed by their introduction and elimination rules.
Dependent type theory in full generality is very powerful: it is able to express almost any conceivable property of programs directly in the types of the program. This generality comes at a steep price — either typechecking is undecidable (extensional type theory), or extensional reasoning is more difficult (intensional type theory). For this reason, some dependent type theories do not allow quantification over arbitrary programs, but rather restrict to programs of a given decidable index domain, for example integers, strings, or linear programs.
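As a rough illustration of restricting the index domain (here natural-number lengths rather than arbitrary programs), the following Haskell sketch uses GADTs to index a vector type by its length. This only approximates the dependent types discussed above; the language extensions and identifiers are assumptions of the example, not part of any particular dependent type theory.
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Natural numbers, promoted to the type level by DataKinds.
data Nat = Z | S Nat

-- A vector whose type records its length: a restricted, decidable
-- form of type-level indexing.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Safe head: only defined for non-empty vectors, so "head of an
-- empty list" is ruled out by the typechecker.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x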
Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism. Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt.
The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.
== Classical and modal logics ==
For simplicity, the logics presented so far have been intuitionistic. Classical logic extends intuitionistic logic with an additional axiom or principle of excluded middle:
For any proposition p, the proposition p ∨ ¬p is true.
This statement is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting:
(XM3 is merely XM2 expressed in terms of E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.
A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgment A with a more classical notion, reminiscent of the sequent calculus: in localised form, instead of Γ ⊢ A, he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculi, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants. (See also: first class control.)
Another important extension was for modal and other logics that need more than just the basic judgment of truth. These were first described, for the alethic modal logics S4 and S5, in a natural deduction style by Prawitz in 1965, and have since accumulated a large body of related work. To give a simple example, the modal logic S4 requires one new judgment, "A valid", that is categorical with respect to truth:
If "A" (is true) under no assumption that "B" (is true), then "A valid".
This categorical judgment is internalised as a unary connective ◻A (read "necessarily A") with the following introduction and elimination rules:
Note that the premise "A valid" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A" where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgment "A"; validity is not needed here since "Ω ⊢ A valid" is by definition the same as "Ω;⋅ ⊢ A". The introduction and elimination forms are then:
The modal hypotheses have their own version of the hypothesis rule and substitution theorem.
=== Modal substitution theorem ===
If Ω;⋅ ⊢ π1 : A and Ω, u: (A valid) ; Γ ⊢ π2 : C, then Ω;Γ ⊢ [π1/u] π2 : C.
This framework of separating judgments into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied for many different modal logics, and also for linear and other substructural logics, to give a few examples. However, relatively few systems of modal logic can be formalised directly in natural deduction. To give proof-theoretic characterisations of these systems, extensions such as labelling or systems of deep inference are necessary.
The addition of labels to formulae permits much finer control of the conditions under which rules apply, allowing the more flexible techniques of analytic tableaux to be applied, as has been done in the case of labelled deduction. Labels also allow the naming of worlds in Kripke semantics; Simpson (1994) presents an influential technique for converting frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic. Stouppa (2004) surveys the application of many proof theories, such as Avron and Pottinger's hypersequents and Belnap's display logic to such modal logics as S5 and B.
== Comparison with sequent calculus ==
The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic. In natural deduction the flow of information is bi-directional: elimination rules flow information downwards by deconstruction, and introduction rules flow information upwards by assembly. Thus, a natural deduction proof does not have a purely bottom-up or top-down reading, making it unsuitable for automation in proof search. To address this fact, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic. Kleene, in his seminal 1952 book Introduction to Metamathematics, gave the first formulation of the sequent calculus in the modern style.
In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules on the other hand turn into left rules in the sequent calculus. To give an example, consider disjunction; the right rules are familiar:
On the left:
Recall the ∨E rule of natural deduction in localised form:
The proposition A ∨ B, which is the succedent of a premise in ∨E, turns into a hypothesis of the conclusion in the left rule ∨L. Thus, left rules can be seen as a sort of inverted elimination rule. This observation can be illustrated as follows:
In the sequent calculus, the left and right rules are performed in lock-step until one reaches the initial sequent, which corresponds to the meeting point of elimination and introduction rules in natural deduction. These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a transposition or a handshake of a left and a right proposition:
The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, which are both provable by means of an inductive argument.
Soundness of ⇒ wrt. ⊢
If Γ ⇒ A, then Γ ⊢ A.
Completeness of ⇒ wrt. ⊢
If Γ ⊢ A, then Γ ⇒ A.
It is clear by these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remain true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider the conjunctions. The right rule is virtually identical to the introduction rule
The left rule, however, performs some additional substitutions that are not performed in the corresponding elimination rules.
The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes), which can be used to formally define the notion of a normal form for natural deduction.
The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus.
=== Cut (substitution) ===
If Γ ⇒ π1 : A and Γ, u:A ⇒ π2 : C, then Γ ⇒ [π1/u] π2 : C.
In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ⊥E which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion. A simple instance of this is the global consistency theorem: "⋅ ⊢ ⊥" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion. Proof theorists often prefer to work on cut-free sequent calculus formulations because of such properties.
== See also ==
== Notes ==
== References ==
=== General references ===
Allen, Colin; Hand, Michael (2022). Logic Primer (3rd ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-54364-4.
Arthur, Richard T. W. (2017). An Introduction to Logic: Using Natural Deduction, Real Arguments, a Little History, and Some Humour (2nd ed.). Peterborough, Ontario: Broadview Press. ISBN 978-1-55481-332-2. OCLC 962129086.
Ayala-Rincón, Mauricio; de Moura, Flávio L. C. (2017). Applied Logic for Computer Scientists. Undergraduate Topics in Computer Science. Springer. doi:10.1007/978-3-319-51653-0. ISBN 978-3-319-51651-6.
Barker-Plummer, Dave; Barwise, Jon; Etchemendy, John (2011). Language Proof and Logic (2nd ed.). CSLI Publications. ISBN 978-1575866321.
Bostock, David (1997). Intermediate Logic. Oxford ; New York: Clarendon Press ; Oxford University Press. ISBN 978-0-19-875141-0.
Gallier, Jean (2005). "Constructive Logics. Part I: A Tutorial on Proof Systems and Typed λ-Calculi". Archived from the original on 5 July 2017. Retrieved 12 June 2014.
Gentzen, Gerhard Karl Erich (1935a). "Untersuchungen über das logische Schließen. I". Mathematische Zeitschrift. 39 (2): 176–210. doi:10.1007/bf01201353. S2CID 121546341. Archived from the original on 24 December 2015.
— (1964) [1935]. "Investigations into logical deduction". American Philosophical Quarterly. 1 (4): 249–287.
Gentzen, Gerhard Karl Erich (1935b). "Untersuchungen über das logische Schließen. II". Mathematische Zeitschrift. 39 (3): 405–431. doi:10.1007/bf01201363. S2CID 186239837. Archived from the original on 9 July 2012.
— (1965) [1935]. "Investigations into logical deduction". American Philosophical Quarterly. 2 (3): 204–218.
Girard, Jean-Yves (1990). Proofs and Types. Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, Cambridge, England. Archived from the original on 4 July 2016. Retrieved 20 April 2006. Translated and with appendices by Paul Taylor and Yves Lafont.
Hansson, Sven Ove; Hendricks, Vincent F. (2018). Introduction to Formal Philosophy. Springer Undergraduate Texts in Philosophy. Cham: Springer. ISBN 978-3-030-08454-7.
Jaśkowski, Stanisław (1934). On the rules of suppositions in formal logic. Reprinted in Polish logic 1920–39, ed. Storrs McCall.
Johansson, Ingebrigt (1937). "Der Minimalkalkül, ein reduzierter intuitionistischer Formalismus". Compositio Mathematica (in German). 4: 119–136.
Kleene, Stephen Cole (1980) [1952]. Introduction to metamathematics (Eleventh ed.). North-Holland. ISBN 978-0-7204-2103-3.
Kleene, Stephen Cole (2009) [1952]. Introduction to metamathematics. Ishi Press International. ISBN 978-0-923891-57-2.
Kleene, Stephen Cole (2002) [1967]. Mathematical logic. Mineola, New York: Dover Publications. ISBN 978-0-486-42533-7.
Lemmon, Edward John (1978) [1965]. Beginning Logic (Fifth printing, 1985 ed.). Boca Raton, FL: Hackett Publishing Company. ISBN 0915144-50-6.
Magnus, P.D.; Button, Tim; Trueman, Robert; Zach, Richard (2023). forall x: An Introduction to Formal Logic (Fall 2023 ed.). Open Logic Project. Retrieved 4 May 2025.
Martin-Löf, Per (1996). "On the meanings of the logical constants and the justifications of the logical laws" (PDF). Nordic Journal of Philosophical Logic. 1 (1): 11–60. Lecture notes to a short course at Università degli Studi di Siena, April 1983.
Paseau, A. C.; Leek, Robert. "The Compactness Theorem". Internet Encyclopedia of Philosophy. Retrieved 22 March 2024.
Paseau, Alexander; Pregel, Fabian (2023), "Deductivism in the Philosophy of Mathematics", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Fall 2023 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
Pelletier, Francis Jeffry; Hazen, Allen (2024), "Natural Deduction Systems in Logic", in Zalta, Edward N.; Nodelman, Uri (eds.), The Stanford Encyclopedia of Philosophy (Spring 2024 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
Pfenning, Frank; Davies, Rowan (2001). "A judgmental reconstruction of modal logic" (PDF). Mathematical Structures in Computer Science. 11 (4): 511–540. CiteSeerX 10.1.1.43.1611. doi:10.1017/S0960129501003322 (inactive 25 February 2025). S2CID 16467268.{{cite journal}}: CS1 maint: DOI inactive as of February 2025 (link)
Prawitz, Dag (1965). Natural Deduction: A Proof-Theoretic Study. Acta Universitatis Stockholmiensis; Stockholm Studies in Philosophy, 3. Stockholm, Göteborg, Uppsala: Almqvist & Wiksell. OCLC 912927896.
Prawitz, Dag (2006). Natural Deduction: A Proof-Theoretic Study. Mineola, New York: Dover Publications. ISBN 9780486446554. OCLC 61296001.
Quine, Willard Van Orman (1981) [1940]. Mathematical logic (Revised ed.). Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-55451-1.
Quine, Willard Van Orman (1982) [1950]. Methods of logic (Fourth ed.). Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-57176-1.
Restall, Greg (2018), "Substructural Logics", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Spring 2018 ed.), Metaphysics Research Lab, Stanford University, retrieved 22 March 2024
Simpson, Alex K. (1994). The Proof Theory and Semantics of Intuitionistic Modal Logic (PDF) (Thesis). Edinburgh Research Archive (ERA). hdl:1842/407.
Stoll, Robert Roth (1979) [1963]. Set Theory and Logic. Mineola, New York: Dover Publications. ISBN 978-0-486-63829-4.
Stouppa, Phiniki (2004). The Design of Modal Proof Theories: The Case of S5. University of Dresden. CiteSeerX 10.1.1.140.1858. MSc thesis.
Suppes, Patrick Colonel (1999) [1957]. Introduction to logic. Mineola, New York: Dover Publications. ISBN 978-0-486-40687-9.
Sutcliffe, Geoff. "Propositional Logic". www.cs.miami.edu. University of Miami. Retrieved 4 May 2025.
Tennant, Neil (1990) [1978]. Natural Logic (1st, repr. with corrections ed.). Edinburgh University Press. ISBN 0852245793.
Van Dalen, Dirk (2013) [1980]. Logic and Structure. Universitext (5 ed.). London, Heidelberg, New York, Dordrecht: Springer. doi:10.1007/978-1-4471-4558-5. ISBN 978-1-4471-4558-5.
von Plato, Jan (2013). Elements of Logical Reasoning (1. publ ed.). Cambridge: Cambridge University Press. ISBN 978-1-107-03659-8.
Weisstein, Eric W. "Connective". mathworld.wolfram.com. Retrieved 22 March 2024.
== External links ==
Indrzejczak, Andrzej. "Natural Deduction". Internet Encyclopedia of Philosophy. Retrieved 4 May 2025.
Laboreo, Daniel Clemente (August 2004). "Introduction to natural deduction" (PDF).
"Domino On Acid". Retrieved 10 December 2023. Natural deduction visualized as a game of dominoes
Pelletier, Francis Jeffry. "A History of Natural Deduction and Elementary Logic Textbooks" (PDF).
"Natural Deduction Systems in Logic" entry by Pelletier, Francis Jeffry; Hazen, Allen in the Stanford Encyclopedia of Philosophy, 29 October 2021
Levy, Michel. "A Propositional Prover". | Wikipedia/Natural_deduction_calculus |
In computer science, a list or sequence is a collection of items that are finite in number and in a particular order. An instance of a list is a computer representation of the mathematical concept of a tuple or finite sequence.
A list may contain the same value more than once, and each occurrence is considered a distinct item.
The term list is also used for several concrete data structures that can be used to implement abstract lists, especially linked lists and arrays. In some contexts, such as in Lisp programming, the term list may refer specifically to a linked list rather than an array. In class-based programming, lists are usually provided as instances of subclasses of a generic "list" class, and traversed via separate iterators.
Many programming languages provide support for list data types, and have special syntax and semantics for lists and list operations. A list can often be constructed by writing the items in sequence, separated by commas, semicolons, and/or spaces, within a pair of delimiters such as parentheses '()', brackets '[]', braces '{}', or angle brackets '<>'. Some languages may allow list types to be indexed or sliced like array types, in which case the data type is more accurately described as an array.
In type theory and functional programming, abstract lists are usually defined inductively by two operations: nil that yields the empty list, and cons, which adds an item at the beginning of a list.
A stream is the potentially infinite analog of a list.: §3.5
== Operations ==
Implementation of the list data structure may provide some of the following operations (a small illustrative sketch follows the list):
create
test for empty
add item to beginning or end
access the first or last item
access an item by index
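A minimal sketch of these operations, using Haskell's built-in list type purely for illustration (the function names are not standardized and are chosen here for readability):
-- create: the empty list
create :: [a]
create = []

-- test for empty
isEmpty :: [a] -> Bool
isEmpty = null

-- add an item to the beginning or the end
addFront :: a -> [a] -> [a]
addFront = (:)

addBack :: a -> [a] -> [a]
addBack x xs = xs ++ [x]

-- access the first or last item (partial: undefined on empty lists)
firstItem :: [a] -> a
firstItem = head

lastItem :: [a] -> a
lastItem = last

-- access an item by zero-based index (partial: errors if out of range)
itemAt :: [a] -> Int -> a
itemAt = (!!)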
== Implementations ==
Lists are typically implemented either as linked lists (either singly or doubly linked) or as arrays, usually variable length or dynamic arrays.
The standard way of implementing lists, originating with the programming language Lisp, is to have each element of the list contain both its value and a pointer indicating the location of the next element in the list. This results in either a linked list or a tree, depending on whether the list has nested sublists. Some older Lisp implementations (such as the Lisp implementation of the Symbolics 3600) also supported "compressed lists" (using CDR coding) which had a special internal representation (invisible to the user). Lists can be manipulated using iteration or recursion. The former is often preferred in imperative programming languages, while the latter is the norm in functional languages.
Lists can also be implemented as self-balancing binary search trees holding index-value pairs, with all elements residing in the fringe (the leaves) and each internal node storing the index of its right-most descendant to guide the search. Accessing any element then takes time logarithmic in the list's size, and as long as the list does not change much this provides the illusion of random access and supports swap, prefix and append operations in logarithmic time as well.
== Programming language support ==
Some languages do not offer a list data structure, but offer the use of associative arrays or some kind of table to emulate lists. For example, Lua provides tables. Although Lua stores lists that have numerical indices as arrays internally, they still appear as dictionaries.
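A rough Haskell analogue of this emulation (not Lua), using an integer-keyed map from Data.Map to stand in for a list; the helper names and the 1-based indexing are assumptions of the example:
import qualified Data.Map as Map

-- A "list" emulated by an associative array with 1-based integer keys,
-- in the spirit of a Lua table.
type FakeList a = Map.Map Int a

fromItems :: [a] -> FakeList a
fromItems xs = Map.fromList (zip [1 ..] xs)

-- Append at the next free index (assumes the keys are exactly 1..n).
push :: a -> FakeList a -> FakeList a
push x m = Map.insert (Map.size m + 1) x m

-- Look up by index; Nothing if the index is absent.
at :: Int -> FakeList a -> Maybe a
at = Map.lookup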
In Lisp, lists are the fundamental data type and can represent both program code and data. In most dialects, the list of the first three prime numbers could be written as (list 2 3 5). In several dialects of Lisp, including Scheme, a list is a collection of pairs, consisting of a value and a pointer to the next pair (or null value), making a singly linked list.
== Applications ==
Unlike in an array, a list can expand and shrink.
In computing, lists are easier to implement than sets. A finite set in the mathematical sense can be realized as a list with additional restrictions; that is, duplicate elements are disallowed and order is irrelevant. Sorting the list speeds up determining whether a given item is already in the set, but maintaining the sorted order makes adding a new entry to the list slower. In efficient implementations, however, sets are implemented using self-balancing binary search trees or hash tables, rather than a list.
Lists also form the basis for other abstract data types including the queue, the stack, and their variations.
== Abstract definition ==
The abstract list type L with elements of some type E (a monomorphic list) is defined by the following functions:
nil: () → L
cons: E × L → L
first: L → E
rest: L → L
with the axioms
first (cons (e, l)) = e
rest (cons (e, l)) = l
for any element e and any list l. It is implicit that
cons (e, l) ≠ l
cons (e, l) ≠ e
cons (e1, l1) = cons (e2, l2) if e1 = e2 and l1 = l2
Note that first (nil ()) and rest (nil ()) are not defined.
These axioms are equivalent to those of the abstract stack data type.
In type theory, the above definition is more simply regarded as an inductive type defined in terms of constructors: nil and cons. In algebraic terms, this can be represented as the transformation 1 + E × L → L. first and rest are then obtained by pattern matching on the cons constructor and separately handling the nil case.
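A direct Haskell rendering of this inductive definition (a sketch, with constructor and function names mirroring the abstract ones); the partial behaviour on the empty list matches the unspecified cases above:
-- The inductive type 1 + E × L: a list is either Nil or Cons of an
-- element and a list.
data List e = Nil | Cons e (List e)

-- first and rest are obtained by pattern matching on Cons,
-- handling the Nil case separately.
first :: List e -> e
first (Cons e _) = e
first Nil        = error "first (nil) is not defined"

rest :: List e -> List e
rest (Cons _ l) = l
rest Nil        = error "rest (nil) is not defined"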
=== The list monad ===
The list type forms a monad with the following functions (using E* rather than L to represent monomorphic lists with elements of type E):
{\displaystyle {\text{return}}\colon A\to A^{*}=a\mapsto {\text{cons}}\,a\,{\text{nil}}}
{\displaystyle {\text{bind}}\colon A^{*}\to (A\to B^{*})\to B^{*}=l\mapsto f\mapsto {\begin{cases}{\text{nil}}&{\text{if}}\ l={\text{nil}}\\{\text{append}}\,(f\,a)\,({\text{bind}}\,l'\,f)&{\text{if}}\ l={\text{cons}}\,a\,l'\end{cases}}}
where append is defined as:
{\displaystyle {\text{append}}\colon A^{*}\to A^{*}\to A^{*}=l_{1}\mapsto l_{2}\mapsto {\begin{cases}l_{2}&{\text{if}}\ l_{1}={\text{nil}}\\{\text{cons}}\,a\,({\text{append}}\,l_{1}'\,l_{2})&{\text{if}}\ l_{1}={\text{cons}}\,a\,l_{1}'\end{cases}}}
Alternatively, the monad may be defined in terms of operations return, fmap and join, with:
{\displaystyle {\text{fmap}}\colon (A\to B)\to (A^{*}\to B^{*})=f\mapsto l\mapsto {\begin{cases}{\text{nil}}&{\text{if}}\ l={\text{nil}}\\{\text{cons}}\,(f\,a)({\text{fmap}}f\,l')&{\text{if}}\ l={\text{cons}}\,a\,l'\end{cases}}}
{\displaystyle {\text{join}}\colon {A^{*}}^{*}\to A^{*}=l\mapsto {\begin{cases}{\text{nil}}&{\text{if}}\ l={\text{nil}}\\{\text{append}}\,a\,({\text{join}}\,l')&{\text{if}}\ l={\text{cons}}\,a\,l'\end{cases}}}
Note that fmap, join, append and bind are well-defined, since they're applied to progressively deeper arguments at each recursive call.
The list type is an additive monad, with nil as the monadic zero and append as monadic sum.
Lists form a monoid under the append operation. The identity element of the monoid is the empty list, nil. In fact, this is the free monoid over the set of list elements.
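These definitions correspond closely to Haskell's built-in list monad and monoid. The following standalone sketch spells them out for an explicit list type; ret is used instead of return only to avoid clashing with the Prelude, and the remaining names are chosen to match the text:
data List a = Nil | Cons a (List a)

-- append is the monoid operation, with Nil as the identity element.
append :: List a -> List a -> List a
append Nil         l2 = l2
append (Cons a l1) l2 = Cons a (append l1 l2)

-- return wraps a value in a one-element list.
ret :: a -> List a
ret a = Cons a Nil

-- bind maps f over the list and concatenates the results.
bind :: List a -> (a -> List b) -> List b
bind Nil        _ = Nil
bind (Cons a l) f = append (f a) (bind l f)

-- The equivalent formulation via fmap and join.
fmap' :: (a -> b) -> List a -> List b
fmap' _ Nil        = Nil
fmap' f (Cons a l) = Cons (f a) (fmap' f l)

join' :: List (List a) -> List a
join' Nil        = Nil
join' (Cons a l) = append a (join' l)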
== See also ==
Array data type – Data type that represents an ordered collection of elements (values or variables)
Queue – Abstract data type
Set – Abstract data type for storing unique values
Stack – Abstract data type
Stream – Sequence of data items available over time
== References == | Wikipedia/List_(computer_science) |
In mathematical logic, the diagonal lemma (also known as diagonalization lemma, self-reference lemma or fixed point theorem) establishes the existence of self-referential sentences in certain formal theories.
A particular instance of the diagonal lemma was used by Kurt Gödel in 1931 to construct his proof of the incompleteness theorems as well as in 1933 by Tarski to prove his undefinability theorem. In 1934, Carnap was the first to publish the diagonal lemma at some level of generality. The diagonal lemma is named in reference to Cantor's diagonal argument in set and number theory.
The diagonal lemma applies to any sufficiently strong theories capable of representing the diagonal function. Such theories include first-order Peano arithmetic PA, the weaker Robinson arithmetic Q, as well as any theory containing Q (i.e. which interprets it). A common statement of the lemma (as given below) makes the stronger assumption that the theory can represent all recursive functions, but all the theories mentioned have that capacity, as well.
== Background ==
=== Gödel Numbering ===
The diagonal lemma also requires a Gödel numbering α. We write α(φ) for the code assigned to φ by the numbering. For n̄, the standard numeral of n (i.e. 0̄ is defined as 0, and the numeral of n+1 is defined as S(n̄)), let ⌜φ⌝ be the standard numeral of the code of φ (i.e. ⌜φ⌝ is the numeral of α(φ)). We assume a standard Gödel numbering.
=== Representation Theorem ===
Let ℕ be the set of natural numbers. A first-order theory T in the language of arithmetic containing Q represents the k-ary recursive function f : ℕᵏ → ℕ if there is a formula φf(x1, …, xk, y) in the language of T such that for all m1, …, mk ∈ ℕ, if f(m1, …, mk) = n then T ⊢ ∀y(φf(m̄1, …, m̄k, y) ↔ y = n̄).
The representation theorem is provable, i.e. every recursive function is representable in T.
== The Diagonal Lemma and its proof ==
Diagonal Lemma: Let T be a first-order theory containing Q (Robinson arithmetic) and let ψ(x) be any formula in the language of T with only x as free variable. Then there is a sentence φ in the language of T such that T ⊢ φ ↔ ψ(⌜φ⌝).
Intuitively, φ is a self-referential sentence which "says of itself that it has the property ψ."
Proof: Let diagT : ℕ → ℕ be the recursive function which associates the code of each formula φ(x) with only one free variable x in the language of T with the code of the closed formula φ(⌜φ⌝) (i.e. the result of substituting ⌜φ⌝ for x in φ), and which takes the value 0 on other arguments. (The fact that diagT is recursive depends on the choice of the Gödel numbering, here the standard one.)
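As a loose computational analogue of diagonalization (not the formal construction above, which operates on Gödel codes rather than on strings), one can diagonalize on formula strings directly. The quoting convention and the use of the single character x as the free variable are assumptions of this Haskell sketch:
-- "Quoting" wraps a formula in corner marks, standing in for the
-- numeral of its Gödel code.
quote :: String -> String
quote phi = "⌜" ++ phi ++ "⌝"

-- Naive substitution of a term for every occurrence of the character 'x'.
substX :: String -> String -> String
substX t = go
  where
    go ('x' : cs) = t ++ go cs
    go (c   : cs) = c : go cs
    go []         = []

-- diag maps phi(x) to phi(⌜phi⌝).
diag :: String -> String
diag phi = substX (quote phi) phi

-- For example, diag "ψ(x)" evaluates to "ψ(⌜ψ(x)⌝)".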
By the representation theorem, T represents every recursive function. Thus there is a formula δ(x, y) representing diagT; in particular, for each φ(x), T ⊢ δ(⌜φ⌝, y) ↔ y = ⌜φ(⌜φ⌝)⌝.
Let ψ(x) be an arbitrary formula with only x as free variable. We now define χ(x) as ∃y(δ(x, y) ∧ ψ(y)), and let φ be χ(⌜χ⌝). Then the following equivalences are provable in T:
φ ↔ χ(⌜χ⌝) ↔ ∃y(δ(⌜χ⌝, y) ∧ ψ(y)) ↔ ∃y(y = ⌜χ(⌜χ⌝)⌝ ∧ ψ(y)) ↔ ∃y(y = ⌜φ⌝ ∧ ψ(y)) ↔ ψ(⌜φ⌝).
== Some Generalizations ==
There are various generalizations of the Diagonal Lemma. We present only three of them; in particular, combinations of the below generalizations yield new generalizations. Let T be a first-order theory containing Q (Robinson arithmetic).
=== Diagonal Lemma with Parameters ===
Let ψ(x, y1, …, yn) be any formula with free variables x, y1, …, yn. Then there is a formula φ(y1, …, yn) with free variables y1, …, yn such that T ⊢ φ(y1, …, yn) ↔ ψ(⌜φ(y1, …, yn)⌝, y1, …, yn).
=== Uniform Diagonal Lemma ===
Let ψ(x, y1, …, yn) be any formula with free variables x, y1, …, yn. Then there is a formula φ(y1, …, yn) with free variables y1, …, yn such that for all m1, …, mn ∈ ℕ, T ⊢ φ(m̄1, …, m̄n) ↔ ψ(⌜φ(m̄1, …, m̄n)⌝, m̄1, …, m̄n).
=== Simultaneous Diagonal Lemma ===
Let ψ1(x1, x2) and ψ2(x1, x2) be formulae with free variables x1 and x2. Then there are sentences φ1 and φ2 such that T ⊢ φ1 ↔ ψ1(⌜φ1⌝, ⌜φ2⌝) and T ⊢ φ2 ↔ ψ2(⌜φ1⌝, ⌜φ2⌝).
The case with n many formulae is similar.
== History ==
The lemma is called "diagonal" because it bears some resemblance to Cantor's diagonal argument. The terms "diagonal lemma" or "fixed point" do not appear in Kurt Gödel's 1931 article or in Alfred Tarski's 1936 article.
In 1934, Rudolf Carnap was the first to publish the diagonal lemma at some level of generality. His version says that for any formula ψ(x) with x as free variable (in a sufficiently expressive language), there exists a sentence φ such that φ ↔ ψ(⌜φ⌝) is true (in some standard model). Carnap's work was phrased in terms of truth rather than provability (i.e. semantically rather than syntactically). Note also that the concept of recursive functions was not yet developed in 1934.
The diagonal lemma is closely related to Kleene's recursion theorem in computability theory, and their respective proofs are similar. In 1952, Léon Henkin asked whether sentences that state their own provability are provable. His question led to more general analyses of the diagonal lemma, especially with Löb's theorem and provability logic.
== See also ==
Indirect self-reference
List of fixed point theorems
Self-reference
Self-referential paradoxes
== Notes ==
== References ==
George Boolos and Richard Jeffrey, 1989. Computability and Logic, 3rd ed. Cambridge University Press. ISBN 0-521-38026-X ISBN 0-521-38923-2
Rudolf Carnap, 1934. Logische Syntax der Sprache. (English translation: 2003. The Logical Syntax of Language. Open Court Publishing.)
Haim Gaifman, 2006. 'Naming and Diagonalization: From Cantor to Gödel to Kleene'. Logic Journal of the IGPL, 14: 709–728.
Petr Hájek & Pavel Pudlák, 2016 (first edition 1998). Metamathematics of First-Order Arithmetic. Springer Verlag.
Peter Hinman, 2005. Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0
Mendelson, Elliott, 1997. Introduction to Mathematical Logic, 4th ed. Chapman & Hall.
Panu Raatikainen, 2015a. The Diagonalization Lemma. In Stanford Encyclopedia of Philosophy, ed. Zalta.
Panu Raatikainen, 2015b. Gödel's Incompleteness Theorems. In Stanford Encyclopedia of Philosophy, ed. Zalta.
Raymond Smullyan, 1991. Gödel's Incompleteness Theorems. Oxford Univ. Press.
Raymond Smullyan, 1994. Diagonalization and Self-Reference. Oxford Univ. Press.
Craig Smoryński, 2023. 'The early history of formal diagonalization'. Logic Journal of the IGPL, 31.6: 1203–1224.
Alfred Tarski (1936). "Der Wahrheitsbegriff in den formalisierten Sprachen" (PDF). Studia Philosophica. 1: 261–405. Archived from the original (PDF) on 9 January 2014. Retrieved 26 June 2013.
Alfred Tarski, tr. J. H. Woodger, 1983. 'The Concept of Truth in Formalized Languages'. English translation of Tarski's 1936 article. In A. Tarski, ed. J. Corcoran, 1983, Logic, Semantics, Metamathematics, Hackett. | Wikipedia/Diagonal_lemma |
In set theory, Zermelo–Fraenkel set theory, named after mathematicians Ernst Zermelo and Abraham Fraenkel, is an axiomatic system that was proposed in the early twentieth century in order to formulate a theory of sets free of paradoxes such as Russell's paradox. Today, Zermelo–Fraenkel set theory, with the historically controversial axiom of choice (AC) included, is the standard form of axiomatic set theory and as such is the most common foundation of mathematics. Zermelo–Fraenkel set theory with the axiom of choice included is abbreviated ZFC, where C stands for "choice", and ZF refers to the axioms of Zermelo–Fraenkel set theory with the axiom of choice excluded.
Informally, Zermelo–Fraenkel set theory is intended to formalize a single primitive notion, that of a hereditary well-founded set, so that all entities in the universe of discourse are such sets. Thus the axioms of Zermelo–Fraenkel set theory refer only to pure sets and prevent its models from containing urelements (elements that are not themselves sets). Furthermore, proper classes (collections of mathematical objects defined by a property shared by their members where the collections are too big to be sets) can only be treated indirectly. Specifically, Zermelo–Fraenkel set theory does not allow for the existence of a universal set (a set containing all sets) nor for unrestricted comprehension, thereby avoiding Russell's paradox. Von Neumann–Bernays–Gödel set theory (NBG) is a commonly used conservative extension of Zermelo–Fraenkel set theory that does allow explicit treatment of proper classes.
There are many equivalent formulations of the axioms of Zermelo–Fraenkel set theory. Most of the axioms state the existence of particular sets defined from other sets. For example, the axiom of pairing says that given any two sets a and b there is a new set {a, b} containing exactly a and b. Other axioms describe properties of set membership. A goal of the axioms is that each axiom should be true if interpreted as a statement about the collection of all sets in the von Neumann universe (also known as the cumulative hierarchy).
The metamathematics of Zermelo–Fraenkel set theory has been extensively studied. Landmark results in this area established the logical independence of the axiom of choice from the remaining Zermelo-Fraenkel axioms and of the continuum hypothesis from ZFC. The consistency of a theory such as ZFC cannot be proved within the theory itself, as shown by Gödel's second incompleteness theorem.
== History ==
The modern study of set theory was initiated by Georg Cantor and Richard Dedekind in the 1870s. However, the discovery of paradoxes in naive set theory, such as Russell's paradox, led to the desire for a more rigorous form of set theory that was free of these paradoxes.
In 1908, Ernst Zermelo proposed the first axiomatic set theory, Zermelo set theory. However, as first pointed out by Abraham Fraenkel in a 1921 letter to Zermelo, this theory was incapable of proving the existence of certain sets and cardinal numbers whose existence was taken for granted by most set theorists of the time, notably the cardinal number ℵω and the set {Z0, 𝒫(Z0), 𝒫(𝒫(Z0)), 𝒫(𝒫(𝒫(Z0))), ...}, where Z0 is any infinite set and 𝒫 is the power set operation. Moreover, one of Zermelo's axioms invoked a concept, that of a "definite" property, whose operational meaning was not clear. In 1922, Fraenkel and Thoralf Skolem independently proposed operationalizing a "definite" property as one that could be formulated as a well-formed formula in a first-order logic whose atomic formulas were limited to set membership and identity. They also independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Appending this schema, as well as the axiom of regularity (first proposed by John von Neumann), to Zermelo set theory yields the theory denoted by ZF. Adding to ZF either the axiom of choice (AC) or a statement that is equivalent to it yields ZFC.
== Formal language ==
Formally, ZFC is a one-sorted theory in first-order logic. The equality symbol can be treated as either a primitive logical symbol or a high-level abbreviation for having exactly the same elements. The former approach is the most common. The signature has a single predicate symbol, usually denoted ∈, which is a predicate symbol of arity 2 (a binary relation symbol). This symbol symbolizes a set membership relation. For example, the formula a ∈ b means that a is an element of the set b (also read as "a is a member of b").
There are different ways to formulate the formal language. Some authors may choose a different set of connectives or quantifiers. For example, the logical connective NAND alone can encode the other connectives, a property known as functional completeness. This section attempts to strike a balance between simplicity and intuitiveness.
The language's alphabet consists of:
A countably infinite number of variables used for representing sets
The logical connectives ¬, ∧, ∨
The quantifier symbols ∀, ∃
The equality symbol =
The set membership symbol ∈
Brackets ( )
With this alphabet, the recursive rules for forming well-formed formulae (wff) are as follows:
Let x and y be metavariables for any variables. These are the two ways to build atomic formulae (the simplest wffs):
x = y
x ∈ y
Let ϕ and ψ be metavariables for any wff, and x be a metavariable for any variable. These are valid wff constructions:
¬ϕ
(ϕ ∧ ψ)
(ϕ ∨ ψ)
∀x ϕ
∃x ϕ
A well-formed formula can be thought of as a syntax tree. The leaf nodes are always atomic formulae. The nodes ∧ and ∨ have exactly two child nodes, while the nodes ¬, ∀x and ∃x have exactly one. There are countably infinitely many wffs; however, each wff has a finite number of nodes.
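A small Haskell sketch of this grammar as an algebraic data type, together with a node count showing that every wff is a finite tree; the type and function names are illustrative and not part of ZFC itself:
type Var = String

-- Atomic formulae and the five composite constructions.
data Formula
  = Eq Var Var           -- x = y
  | In Var Var           -- x ∈ y
  | Not Formula          -- ¬ϕ
  | And Formula Formula  -- (ϕ ∧ ψ)
  | Or  Formula Formula  -- (ϕ ∨ ψ)
  | ForAll Var Formula   -- ∀x ϕ
  | Exists Var Formula   -- ∃x ϕ

-- Every well-formed formula is a finite syntax tree.
size :: Formula -> Int
size (Eq _ _)     = 1
size (In _ _)     = 1
size (Not p)      = 1 + size p
size (And p q)    = 1 + size p + size q
size (Or p q)     = 1 + size p + size q
size (ForAll _ p) = 1 + size p
size (Exists _ p) = 1 + size p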
== Axioms ==
There are many equivalent formulations of the ZFC axioms. The following particular axiom set is from Kunen (1980). The axioms in order below are expressed in a mixture of first-order logic and high-level abbreviations.
Axioms 1–8 form ZF, while the axiom 9 turns ZF into ZFC. Following Kunen (1980), we use the equivalent well-ordering theorem in place of the axiom of choice for axiom 9.
All formulations of ZFC imply that at least one set exists. Kunen includes an axiom that directly asserts the existence of a set, although he notes that he does so only "for emphasis". Its omission here can be justified in two ways. First, in the standard semantics of first-order logic in which ZFC is typically formalized, the domain of discourse must be nonempty. Hence, it is a logical theorem of first-order logic that something exists – usually expressed as the assertion that something is identical to itself, ∃x(x = x). Consequently, it is a theorem of every first-order theory that something exists. However, as noted above, because in the intended semantics of ZFC there are only sets, the interpretation of this logical theorem in the context of ZFC is that some set exists. Hence, there is no need for a separate axiom asserting that a set exists. Second, however, even if ZFC is formulated in so-called free logic, in which it is not provable from logic alone that something exists, the axiom of infinity asserts that an infinite set exists. This implies that a set exists, and so, once again, it is superfluous to include an axiom asserting as much.
=== Axiom of extensionality ===
Two sets are equal (are the same set) if they have the same elements.
The converse of this axiom follows from the substitution property of equality. ZFC is constructed in first-order logic. Some formulations of first-order logic include identity; others do not. If the variety of first-order logic in which one is constructing set theory does not include equality "=", x = y may be defined as an abbreviation for the following formula:
∀z[z ∈ x ⇔ z ∈ y] ∧ ∀w[x ∈ w ⇔ y ∈ w].
In this case, the axiom of extensionality can be reformulated to say that if x and y have the same elements, then they belong to the same sets.
=== Axiom of regularity (also called the axiom of foundation) ===
Every non-empty set x contains a member y such that x and y are disjoint sets,
or in modern notation:
∀x (x ≠ ∅ ⇒ ∃y (y ∈ x ∧ y ∩ x = ∅)).
This (along with the axioms of pairing and union) implies, for example, that no set is an element of itself and that every set has an ordinal rank.
=== Axiom schema of specification (or of separation, or of restricted comprehension) ===
Subsets are commonly constructed using set builder notation. For example, the even integers can be constructed as the subset of the integers ℤ satisfying the congruence modulo predicate x ≡ 0 (mod 2):
{x ∈ ℤ : x ≡ 0 (mod 2)}.
In general, the subset of a set z obeying a formula φ(x) with one free variable x may be written as:
{x ∈ z : φ(x)}.
The axiom schema of specification states that this subset always exists (it is an axiom schema because there is one axiom for each φ). Formally, let φ be any formula in the language of ZFC with all free variables among x, z, w1, …, wn (y is not free in φ). Then:
∀z ∀w1 … ∀wn ∃y ∀x [x ∈ y ⇔ (x ∈ z ∧ φ)].
Note that the axiom schema of specification can only construct subsets and does not allow the construction of entities of the more general form:
{x : φ(x)}.
This restriction is necessary to avoid Russell's paradox (let y = {x : x ∉ x}; then y ∈ y ⇔ y ∉ y) and its variants that accompany naive set theory with unrestricted comprehension (since under this restriction y only refers to sets within z that don't belong to themselves, and y ∈ z has not been established, even though y ⊆ z is the case, so y stands in a separate position from which it can't refer to or comprehend itself; therefore, in a certain sense, this axiom schema is saying that in order to build a y on the basis of a formula φ(x), we need to previously restrict the sets y will regard within a set z that leaves y outside so y can't refer to itself; or, in other words, sets shouldn't refer to themselves).
In some other axiomatizations of ZF, this axiom is redundant in that it follows from the axiom schema of replacement and the axiom of the empty set.
On the other hand, the axiom schema of specification can be used to prove the existence of the empty set, denoted ∅, once at least one set is known to exist. One way to do this is to use a property φ which no set has. For example, if w is any existing set, the empty set can be constructed as
∅ = {u ∈ w : (u ∈ u) ∧ ¬(u ∈ u)}.
Thus, the axiom of the empty set is implied by the nine axioms presented here. The axiom of extensionality implies the empty set is unique (does not depend on w). It is common to make a definitional extension that adds the symbol "∅" to the language of ZFC.
=== Axiom of pairing ===
If x and y are sets, then there exists a set which contains x and y as elements. For example, if x = {1, 2} and y = {2, 3}, then z will be {{1, 2}, {2, 3}}.
The axiom schema of specification must be used to reduce this to a set with exactly these two elements. The axiom of pairing is part of Z, but is redundant in ZF because it follows from the axiom schema of replacement if we are given a set with at least two elements. The existence of a set with at least two elements is assured by either the axiom of infinity, or by the axiom schema of specification and the axiom of the power set applied twice to any set.
=== Axiom of union ===
The union over the elements of a set exists. For example, the union over the elements of the set {{1, 2}, {2, 3}} is {1, 2, 3}.
The axiom of union states that for any set of sets ℱ, there is a set A containing every element that is a member of some member of ℱ:
∀ℱ ∃A ∀Y ∀x [(x ∈ Y ∧ Y ∈ ℱ) ⇒ x ∈ A].
Although this formula doesn't directly assert the existence of ∪ℱ, the set ∪ℱ can be constructed from A in the above using the axiom schema of specification:
∪ℱ = {x ∈ A : ∃Y (x ∈ Y ∧ Y ∈ ℱ)}.
=== Axiom schema of replacement ===
The axiom schema of replacement asserts that the image of a set under any definable function will also fall inside a set.
Formally, let φ be any formula in the language of ZFC whose free variables are among x, y, A, w1, …, wn, so that in particular B is not free in φ. Then:
∀A ∀w1 … ∀wn [∀x (x ∈ A ⇒ ∃!y φ) ⇒ ∃B ∀x (x ∈ A ⇒ ∃y (y ∈ B ∧ φ))].
(The unique existential quantifier ∃! denotes the existence of exactly one element satisfying a given statement.)
In other words, if the relation φ represents a definable function f, A represents its domain, and f(x) is a set for every x ∈ A, then the range of f is a subset of some set B. The form stated here, in which B may be larger than strictly necessary, is sometimes called the axiom schema of collection.
=== Axiom of infinity ===
Let S(w) abbreviate w ∪ {w}, where w is some set. (We can see that {w} is a valid set by applying the axiom of pairing with x = y = w, so that the set z is {w}.) Then there exists a set X such that the empty set ∅, defined axiomatically, is a member of X and, whenever a set y is a member of X, then S(y) is also a member of X,
or in modern notation:
∃X [∅ ∈ X ∧ ∀y (y ∈ X ⇒ S(y) ∈ X)].
More colloquially, there exists a set X having infinitely many members. (It must be established, however, that these members are all different, because if two elements are the same, the sequence will loop around in a finite cycle of sets. The axiom of regularity prevents this from happening.) The minimal set X satisfying the axiom of infinity is the von Neumann ordinal ω, which can also be thought of as the set of natural numbers ℕ.
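The finite stages of this construction can be mimicked with hereditarily finite sets. The following Haskell sketch (illustrative only, and necessarily limited to finite sets) implements the successor operation S(w) = w ∪ {w} and the finite von Neumann ordinals; the type V and the helper names are assumptions of the example:
import qualified Data.Set as Set

-- A hereditarily finite pure set: a finite set whose elements are such sets.
newtype V = V (Set.Set V) deriving (Eq, Ord)

empty :: V
empty = V Set.empty

-- S(w) = w ∪ {w}
suc :: V -> V
suc w@(V ws) = V (Set.insert w ws)

-- The von Neumann numeral for n: 0 is ∅ and n+1 is S(n).
ordinal :: Int -> V
ordinal n = iterate suc empty !! n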
=== Axiom of power set ===
By definition, a set z is a subset of a set x if and only if every element of z is also an element of x:
(z ⊆ x) ⇔ ∀q (q ∈ z ⇒ q ∈ x).
The Axiom of power set states that for any set {\displaystyle x}, there is a set {\displaystyle y} that contains every subset of {\displaystyle x}:
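In symbols, one standard formulation is:
{\displaystyle \forall x\,\exists y\,\forall z\,[z\subseteq x\Rightarrow z\in y].}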
The axiom schema of specification is then used to define the power set {\displaystyle {\mathcal {P}}(x)} as the subset of such a {\displaystyle y} containing the subsets of {\displaystyle x} exactly:
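That is:
{\displaystyle {\mathcal {P}}(x)=\{z\in y:z\subseteq x\}.}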
Axioms 1–8 define ZF. Alternative forms of these axioms are often encountered, some of which are listed in Jech (2003). Some ZF axiomatizations include an axiom asserting that the empty set exists. The axioms of pairing, union, replacement, and power set are often stated so that the members of the set {\displaystyle x} whose existence is being asserted are just those sets which the axiom asserts {\displaystyle x} must contain.
The following axiom is added to turn ZF into ZFC:
=== Axiom of well-ordering (choice) ===
The last axiom, commonly known as the axiom of choice, is presented here as a property about well-orders, as in Kunen (1980).
For any set {\displaystyle X}, there exists a binary relation {\displaystyle R} which well-orders {\displaystyle X}. This means {\displaystyle R} is a linear order on {\displaystyle X} such that every nonempty subset of {\displaystyle X} has a least element under the order {\displaystyle R}.
Given axioms 1–8, many statements are provably equivalent to axiom 9. The most common of these goes as follows. Let {\displaystyle X} be a set whose members are all nonempty. Then there exists a function {\displaystyle f} from {\displaystyle X} to the union of the members of {\displaystyle X}, called a "choice function", such that for all {\displaystyle Y\in X} one has {\displaystyle f(Y)\in Y}. A third version of the axiom, also equivalent, is Zorn's lemma.
Since the existence of a choice function when {\displaystyle X} is a finite set is easily proved from axioms 1–8, AC only matters for certain infinite sets. AC is characterized as nonconstructive because it asserts the existence of a choice function but says nothing about how this choice function is to be "constructed".
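For finite families the choice-function formulation can be realized directly. The Python sketch below is an illustration only, assuming the family is a finite collection of nonempty frozensets (the names choice_function and X are ours):

def choice_function(family):
    # Map each nonempty member of the family to one of its elements.
    # For finite families this needs no appeal to the axiom of choice.
    return {member: next(iter(member)) for member in family}

if __name__ == "__main__":
    X = {frozenset({1, 2}), frozenset({3}), frozenset({4, 5, 6})}
    f = choice_function(X)
    assert all(f[Y] in Y for Y in X)  # f(Y) ∈ Y for every Y in X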
== Motivation via the cumulative hierarchy ==
One motivation for the ZFC axioms is the cumulative hierarchy of sets introduced by John von Neumann. In this viewpoint, the universe of set theory is built up in stages, with one stage for each ordinal number. At stage 0, there are no sets yet. At each following stage, a set is added to the universe if all of its elements have been added at previous stages. Thus the empty set is added at stage 1, and the set containing the empty set is added at stage 2. The collection of all sets that are obtained in this way, over all the stages, is known as V. The sets in V can be arranged into a hierarchy by assigning to each set the first stage at which that set was added to V.
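The first few finite stages of this hierarchy can be computed explicitly. The sketch below is illustrative only; it identifies each stage V_{k+1} with the power set of V_k (the helper names are ours) and prints the stage sizes 0, 1, 2, 4, 16 for V_0 through V_4:

from itertools import chain, combinations

def powerset(s):
    # All subsets of a finite set s, each returned as a frozenset.
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

def cumulative_stages(n):
    # Finite stages V_0, ..., V_n: V_0 is empty, and each later stage
    # consists of all sets whose elements were added earlier,
    # i.e. V_{k+1} is the power set of V_k.
    stages = [frozenset()]
    for _ in range(n):
        stages.append(frozenset(powerset(stages[-1])))
    return stages

if __name__ == "__main__":
    for k, stage in enumerate(cumulative_stages(4)):
        print("|V_%d| = %d" % (k, len(stage)))  # 0, 1, 2, 4, 16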
It is provable that a set is in V if and only if the set is pure and well-founded. And V satisfies all the axioms of ZFC if the class of ordinals has appropriate reflection properties. For example, suppose that a set x is added at stage α, which means that every element of x was added at a stage earlier than α. Then, every subset of x is also added at (or before) stage α, because all elements of any subset of x were also added before stage α. This means that any subset of x which the axiom of separation can construct is added at (or before) stage α, and that the powerset of x will be added at the next stage after α.
The picture of the universe of sets stratified into the cumulative hierarchy is characteristic of ZFC and related axiomatic set theories such as Von Neumann–Bernays–Gödel set theory (often called NBG) and Morse–Kelley set theory. The cumulative hierarchy is not compatible with other set theories such as New Foundations.
It is possible to change the definition of V so that at each stage, instead of adding all the subsets of the union of the previous stages, subsets are only added if they are definable in a certain sense. This results in a more "narrow" hierarchy, which gives the constructible universe L, which also satisfies all the axioms of ZFC, including the axiom of choice. It is independent from the ZFC axioms whether V = L. Although the structure of L is more regular and well behaved than that of V, few mathematicians argue that V = L should be added to ZFC as an additional "axiom of constructibility".
== Metamathematics ==
=== Virtual classes ===
Proper classes (collections of mathematical objects defined by a property shared by their members which are too big to be sets) can only be treated indirectly in ZF (and thus ZFC).
An alternative to proper classes while staying within ZF and ZFC is the virtual class notational construct introduced by Quine (1969), where the entire construct y ∈ { x | Fx } is simply defined as Fy. This provides a simple notation for classes that can contain sets but need not themselves be sets, while not committing to the ontology of classes (because the notation can be syntactically converted to one that only uses sets). Quine's approach built on the earlier approach of Bernays & Fraenkel (1958). Virtual classes are also used in Levy (2002), Takeuti & Zaring (1982), and in the Metamath implementation of ZFC.
=== Finite axiomatization ===
The axiom schemata of replacement and separation each contain infinitely many instances. Montague (1961) included a result first proved in his 1957 Ph.D. thesis: if ZFC is consistent, it is impossible to axiomatize ZFC using only finitely many axioms. On the other hand, von Neumann–Bernays–Gödel set theory (NBG) can be finitely axiomatized. The ontology of NBG includes proper classes as well as sets; a set is any class that can be a member of another class. NBG and ZFC are equivalent set theories in the sense that any theorem not mentioning classes and provable in one theory can be proved in the other.
=== Consistency ===
Gödel's second incompleteness theorem says that a recursively axiomatizable system that can interpret Robinson arithmetic can prove its own consistency only if it is inconsistent. Moreover, Robinson arithmetic can be interpreted in general set theory, a small fragment of ZFC. Hence the consistency of ZFC cannot be proved within ZFC itself (unless it is actually inconsistent). Thus, to the extent that ZFC is identified with ordinary mathematics, the consistency of ZFC cannot be demonstrated in ordinary mathematics. The consistency of ZFC does follow from the existence of a weakly inaccessible cardinal, which is unprovable in ZFC if ZFC is consistent. Nevertheless, it is deemed unlikely that ZFC harbors an unsuspected contradiction; it is widely believed that if ZFC were inconsistent, that fact would have been uncovered by now. This much is certain – ZFC is immune to the classic paradoxes of naive set theory: Russell's paradox, the Burali-Forti paradox, and Cantor's paradox.
Abian & LaMacchia (1978) studied a subtheory of ZFC consisting of the axioms of extensionality, union, powerset, replacement, and choice. Using models, they proved this subtheory consistent, and proved that each of the axioms of extensionality, replacement, and power set is independent of the four remaining axioms of this subtheory. If this subtheory is augmented with the axiom of infinity, each of the axioms of union, choice, and infinity is independent of the five remaining axioms. Because there are non-well-founded models that satisfy each axiom of ZFC except the axiom of regularity, that axiom is independent of the other ZFC axioms.
If consistent, ZFC cannot prove the existence of the inaccessible cardinals that category theory requires. Huge sets of this nature are possible if ZF is augmented with Tarski's axiom. Assuming that axiom turns the axioms of infinity, power set, and choice (7 – 9 above) into theorems.
=== Independence ===
Many important statements are independent of ZFC. The independence is usually proved by forcing, whereby it is shown that every countable transitive model of ZFC (sometimes augmented with large cardinal axioms) can be expanded to satisfy the statement in question. A different expansion is then shown to satisfy the negation of the statement. An independence proof by forcing automatically proves independence from arithmetical statements, other concrete statements, and large cardinal axioms. Some statements independent of ZFC can be proven to hold in particular inner models, such as in the constructible universe. However, some statements that are true about constructible sets are not consistent with hypothesized large cardinal axioms.
Forcing proves that the following statements are independent of ZFC:
Axiom of constructibility (V=L) (which is also not a ZFC axiom)
Continuum hypothesis
Diamond principle
Martin's axiom (which is not a ZFC axiom)
Suslin hypothesis
Remarks:
The consistency of V=L is provable by inner models but not forcing: every model of ZF can be trimmed to become a model of ZFC + V=L.
The diamond principle implies the continuum hypothesis and the negation of the Suslin hypothesis.
Martin's axiom plus the negation of the continuum hypothesis implies the Suslin hypothesis.
The constructible universe satisfies the generalized continuum hypothesis, the diamond principle, Martin's axiom and the Kurepa hypothesis.
The failure of the Kurepa hypothesis is equiconsistent with the existence of a strongly inaccessible cardinal.
A variation on the method of forcing can also be used to demonstrate the consistency and unprovability of the axiom of choice, i.e., that the axiom of choice is independent of ZF. The consistency of choice can be (relatively) easily verified by proving that the inner model L satisfies choice. (Thus every model of ZF contains a submodel of ZFC, so that Con(ZF) implies Con(ZFC).) Since forcing preserves choice, we cannot directly produce a model contradicting choice from a model satisfying choice. However, we can use forcing to create a model which contains a suitable submodel, namely one satisfying ZF but not C.
Another method of proving independence results, one owing nothing to forcing, is based on Gödel's second incompleteness theorem. This approach employs the statement whose independence is being examined, to prove the existence of a set model of ZFC, in which case Con(ZFC) is true. Since ZFC satisfies the conditions of Gödel's second theorem, the consistency of ZFC is unprovable in ZFC (provided that ZFC is, in fact, consistent). Hence no statement allowing such a proof can be proved in ZFC. This method can prove that the existence of large cardinals is not provable in ZFC, but cannot prove that assuming such cardinals, given ZFC, is free of contradiction.
=== Proposed additions ===
The project to unify set theorists behind additional axioms to resolve the continuum hypothesis or other meta-mathematical ambiguities is sometimes known as "Gödel's program". Mathematicians currently debate which axioms are the most plausible or "self-evident", which axioms are the most useful in various domains, and about the degree to which usefulness should be traded off against plausibility; some "multiverse" set theorists argue that usefulness should be the sole ultimate criterion for which axioms to customarily adopt. One school of thought leans on expanding the "iterative" concept of a set to produce a set-theoretic universe with an interesting and complex but reasonably tractable structure by adopting forcing axioms; another school advocates for a tidier, less cluttered universe, perhaps focused on a "core" inner model.
== Criticisms ==
ZFC has been criticized both for being excessively strong and for being excessively weak, as well as for its failure to capture objects such as proper classes and the universal set.
Many mathematical theorems can be proven in much weaker systems than ZFC, such as Peano arithmetic and second-order arithmetic (as explored by the program of reverse mathematics). Saunders Mac Lane and Solomon Feferman have both made this point. Some of "mainstream mathematics" (mathematics not directly connected with axiomatic set theory) is beyond Peano arithmetic and second-order arithmetic, but still, all such mathematics can be carried out in ZC (Zermelo set theory with choice), another theory weaker than ZFC. Much of the power of ZFC, including the axiom of regularity and the axiom schema of replacement, is included primarily to facilitate the study of the set theory itself.
On the other hand, among axiomatic set theories, ZFC is comparatively weak. Unlike New Foundations, ZFC does not admit the existence of a universal set. Hence the universe of sets under ZFC is not closed under the elementary operations of the algebra of sets. Unlike von Neumann–Bernays–Gödel set theory (NBG) and Morse–Kelley set theory (MK), ZFC does not admit the existence of proper classes. A further comparative weakness of ZFC is that the axiom of choice included in ZFC is weaker than the axiom of global choice included in NBG and MK.
There are numerous mathematical statements independent of ZFC. These include the continuum hypothesis, the Whitehead problem, and the normal Moore space conjecture. Some of these conjectures are provable with the addition of axioms such as Martin's axiom or large cardinal axioms to ZFC. Some others are decided in ZF+AD where AD is the axiom of determinacy, a strong supposition incompatible with choice. One attraction of large cardinal axioms is that they enable many results from ZF+AD to be established in ZFC adjoined by some large cardinal axiom. The Mizar system and metamath have adopted Tarski–Grothendieck set theory, an extension of ZFC, so that proofs involving Grothendieck universes (encountered in category theory and algebraic geometry) can be formalized.
== See also ==
Foundations of mathematics
Inner model
Large cardinal axiom
Related axiomatic set theories:
Morse–Kelley set theory
Von Neumann–Bernays–Gödel set theory
Tarski–Grothendieck set theory
Constructive set theory
Internal set theory
== Notes ==
== Bibliography ==
== External links ==
Axioms of set Theory - Lec 02 - Frederic Schuller on YouTube
"ZFC", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Stanford Encyclopedia of Philosophy articles by Joan Bagaria:
Bagaria, Joan (31 January 2023). "Set Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Bagaria, Joan (31 January 2023). "Axioms of Zermelo–Fraenkel Set Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Metamath version of the ZFC axioms — A concise and nonredundant axiomatization. The background first order logic is defined especially to facilitate machine verification of proofs.
A derivation in Metamath of a version of the separation schema from a version of the replacement schema.
Weisstein, Eric W. "Zermelo-Fraenkel Set Theory". MathWorld.
In mathematical logic, structural proof theory is the subdiscipline of proof theory that studies proof calculi that support a notion of analytic proof, a kind of proof whose semantic properties are exposed. When all the theorems of a logic formalised in a structural proof theory have analytic proofs, then the proof theory can be used to demonstrate such things as consistency, provide decision procedures, and allow mathematical or computational witnesses to be extracted as counterparts to theorems, the kind of task that is more often given to model theory.
== Analytic proof ==
The notion of analytic proof was introduced into proof theory by Gerhard Gentzen for the sequent calculus; the analytic proofs are those that are cut-free. His natural deduction calculus also supports a notion of analytic proof, as was shown by Dag Prawitz; the definition is slightly more complex—the analytic proofs are the normal forms, which are related to the notion of normal form in term rewriting.
== Structures and connectives ==
The term structure in structural proof theory comes from a technical notion introduced in the sequent calculus: the sequent calculus represents the judgement made at any stage of an inference using special, extra-logical operators called structural operators: in
{\displaystyle A_{1},\dots ,A_{m}\vdash B_{1},\dots ,B_{n}}
, the commas to the left of the turnstile are operators normally interpreted as conjunctions, those to the right as disjunctions, whilst the turnstile symbol itself is interpreted as an implication. However, it is important to note that there is a fundamental difference in behaviour between these operators and the logical connectives they are interpreted by in the sequent calculus: the structural operators are used in every rule of the calculus, and are not considered when asking whether the subformula property applies. Furthermore, the logical rules go one way only: logical structure is introduced by logical rules, and cannot be eliminated once created, while structural operators can be introduced and eliminated in the course of a derivation.
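Under the interpretation just described, the sequent {\displaystyle A_{1},\dots ,A_{m}\vdash B_{1},\dots ,B_{n}} corresponds to the single formula
{\displaystyle (A_{1}\land \dots \land A_{m})\rightarrow (B_{1}\lor \dots \lor B_{n}),}
a standard gloss rather than part of the calculus itself.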
The idea of looking at the syntactic features of sequents as special, non-logical operators is not old, and was forced by innovations in proof theory: when the structural operators are as simple as in Gentzen's original sequent calculus there is little need to analyse them, but proof calculi of deep inference such as display logic (introduced by Nuel Belnap in 1982) support structural operators as complex as the logical connectives, and demand sophisticated treatment.
== Cut-elimination in the sequent calculus ==
== Natural deduction and the formulae-as-types correspondence ==
== Logical duality and harmony ==
== Hypersequents ==
The hypersequent framework extends the ordinary sequent structure to a multiset of sequents, using an additional structural connective | (called the hypersequent bar) to separate different sequents. It has been used to provide analytic calculi for, e.g., modal, intermediate and substructural logics. A hypersequent is a structure
{\displaystyle \Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}}
where each {\displaystyle \Gamma _{i}\vdash \Delta _{i}} is an ordinary sequent, called a component of the hypersequent. As for sequents, hypersequents can be based on sets, multisets, or sequences, and the components can be single-conclusion or multi-conclusion sequents. The formula interpretation of the hypersequents depends on the logic under consideration, but is nearly always some form of disjunction. The most common interpretations are as a simple disjunction
{\displaystyle (\bigwedge \Gamma _{1}\rightarrow \bigvee \Delta _{1})\lor \dots \lor (\bigwedge \Gamma _{n}\rightarrow \bigvee \Delta _{n})}
for intermediate logics, or as a disjunction of boxes
{\displaystyle \Box (\bigwedge \Gamma _{1}\rightarrow \bigvee \Delta _{1})\lor \dots \lor \Box (\bigwedge \Gamma _{n}\rightarrow \bigvee \Delta _{n})}
for modal logics.
In line with the disjunctive interpretation of the hypersequent bar, essentially all hypersequent calculi include the external structural rules, in particular the external weakening rule
{\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Sigma \vdash \Pi }}}
and the external contraction rule
{\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Gamma _{n}\vdash \Delta _{n}}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}}}}
The additional expressivity of the hypersequent framework is provided by rules manipulating the hypersequent structure. An important example is provided by the modalised splitting rule
{\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Box \Sigma ,\Omega \vdash \Box \Pi ,\Theta }{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Box \Sigma \vdash \Box \Pi \mid \Omega \vdash \Theta }}}
for modal logic S5, where {\displaystyle \Box \Sigma } means that every formula in {\displaystyle \Box \Sigma } is of the form {\displaystyle \Box A}.
Another example is given by the communication rule for the intermediate logic LC
{\displaystyle {\frac {\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Omega \vdash A\qquad \Sigma _{1}\vdash \Pi _{1}\mid \dots \mid \Sigma _{m}\vdash \Pi _{m}\mid \Theta \vdash B}{\Gamma _{1}\vdash \Delta _{1}\mid \dots \mid \Gamma _{n}\vdash \Delta _{n}\mid \Sigma _{1}\vdash \Pi _{1}\mid \dots \mid \Sigma _{m}\vdash \Pi _{m}\mid \Omega \vdash B\mid \Theta \vdash A}}}
Note that in the communication rule the components are single-conclusion sequents.
== Calculus of structures ==
== Nested sequent calculus ==
The nested sequent calculus is a formalisation that resembles a 2-sided calculus of structures.
== Notes ==
== References ==
Sara Negri; Jan Von Plato (2001). Structural proof theory. Cambridge University Press. ISBN 978-0-521-79307-0.
Anne Sjerp Troelstra; Helmut Schwichtenberg (2000). Basic proof theory (2nd ed.). Cambridge University Press. ISBN 978-0-521-77911-1.
Grounding is a topic in metaphysics. Consider an ordinary physical object, such as a table, and the atoms it is made of. Without the atoms, the table would not exist; thus, the table's existence depends on the existence of the atoms. This kind of dependence is called "grounding" to distinguish it from other kinds of dependence, such as the dependence of an effect on its cause. It is sometimes called metaphysical or ontological dependence.
Grounding can be characterized as a relation between a ground and a grounded entity. The ground exists on a more fundamental level than the grounded entity, in the sense that the grounded entity depends for its existence or its properties on its ground. According to the neo-Aristotelian approach to ontology, the goal of ontology is to determine which entities are fundamental and how the non-fundamental entities depend on them.
== Overview ==
A distinction is typically made between grounding relations and other dependence relations, such as causation or realization. Grounding is often considered to be a form of non-causal determination or priority.
According to some in favor of the idea, things which are less fundamental are grounded in things that are more fundamental.
In chess, for example, if the king is in checkmate, this situation holds because the king is in check and has no legal moves. The fact that the king is in checkmate depends on the fact that the king is in check and has no legal moves. In other words, the first fact is grounded in the second fact.
As another example, consider the property of being either even or prime. The number 4 has this property because it is even. Here "because" does not express a causal relation (where the cause precedes the effect in time). It expresses a grounding relation. The fact that the number 4 is even or prime is grounded in the fact that 4 is even. In other words, the first fact obtains in virtue of the second fact.
== Role in neo-Aristotelian ontology ==
According to the neo-Aristotelian approach to ontology, the goal of ontology is to determine which entities are fundamental and how the non-fundamental entities depend on them. Fundamentality can be expressed in terms of grounding. For example, according to Aristotle, substances have the highest degree of fundamentality because they exist in themselves. Properties, on the other hand, are less fundamental because they depend on substances for their existence. In this example, properties are grounded in substances.
== Role in truthmaker theory ==
The notion of grounding has been used to analyze the relation between truthmakers and truthbearers. The basic idea is that truthbearers (like beliefs, sentences or propositions) are not intrinsically true or false but that their truth depends on something else. For example, the belief that water freezes at 0 °C is true in virtue of the fact that water freezes at 0 °C. In this example, the freezing-fact is the truthmaker of the freezing-belief. Expressed in terms of grounding: the truth of the freezing-belief is grounded in the existence of the freezing-fact.
== References ==
== External links ==
Bliss, Ricki; Trogdon, Kelly. "Metaphysical Grounding". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
Schaffer, Jonathan. "On What Grounds What." In David Manley, David J. Chalmers & Ryan Wasserman (eds.), Metametaphysics: New Essays on the Foundations of Ontology. Oxford University Press. pp. 347–383 (2009).
Grounding. Bibliography edited by Kelly Trogdon.
Metaphysical Grounding: Annotated Bibliography edited by Raul Corazzon.
Abstract object theory (AOT) is a branch of metaphysics regarding abstract objects. Originally devised by metaphysician Edward Zalta in 1981, the theory was an expansion of mathematical Platonism.
== Overview ==
Abstract Objects: An Introduction to Axiomatic Metaphysics (1983) is the title of a publication by Edward Zalta that outlines abstract object theory.
AOT is a dual predication approach (also known as "dual copula strategy") to abstract objects influenced by the contributions of Alexius Meinong and his student Ernst Mally. On Zalta's account, there are two modes of predication: some objects (the ordinary concrete ones around us, like tables and chairs) exemplify properties, while others (abstract objects like numbers, and what others would call "nonexistent objects", like the round square and the mountain made entirely of gold) merely encode them. While the objects that exemplify properties are discovered through traditional empirical means, a simple set of axioms allows us to know about objects that encode properties. For every set of properties, there is exactly one object that encodes exactly that set of properties and no others. This allows for a formalized ontology.
A notable feature of AOT is that several notable paradoxes in naive predication theory (namely Romane Clark's paradox undermining the earliest version of Héctor-Neri Castañeda's guise theory, Alan McMichael's paradox, and Daniel Kirchner's paradox) do not arise within it. AOT employs restricted abstraction schemata to avoid such paradoxes.
In 2007, Zalta and Branden Fitelson introduced the term computational metaphysics to describe the implementation and investigation of formal, axiomatic metaphysics in an automated reasoning environment.
== See also ==
== Notes ==
== References ==
== Further reading ==
A disease is a particular abnormal condition that adversely affects the structure or function of all or part of an organism and is not immediately due to any external injury. Diseases are often known to be medical conditions that are associated with specific signs and symptoms. A disease may be caused by external factors such as pathogens or by internal dysfunctions. For example, internal dysfunctions of the immune system can produce a variety of different diseases, including various forms of immunodeficiency, hypersensitivity, allergies, and autoimmune disorders.
In humans, disease is often used more broadly to refer to any condition that causes pain, dysfunction, distress, social problems, or death to the person affected, or similar problems for those in contact with the person. In this broader sense, it sometimes includes injuries, disabilities, disorders, syndromes, infections, isolated symptoms, deviant behaviors, and atypical variations of structure and function, while in other contexts and for other purposes these may be considered distinguishable categories. Diseases can affect people not only physically but also mentally, as contracting and living with a disease can alter the affected person's perspective on life.
Death due to disease is called death by natural causes. There are four main types of disease: infectious diseases, deficiency diseases, hereditary diseases (including both genetic and non-genetic hereditary diseases), and physiological diseases. Diseases can also be classified in other ways, such as communicable versus non-communicable diseases. The deadliest diseases in humans are coronary artery disease (blood flow obstruction), followed by cerebrovascular disease and lower respiratory infections. In developed countries, the diseases that cause the most sickness overall are neuropsychiatric conditions, such as depression and anxiety.
Pathology, the study of disease, includes etiology, or the study of cause.
== Terminology ==
=== Concepts ===
In many cases, terms such as disease, disorder, morbidity, sickness and illness are used interchangeably; however, there are situations when specific terms are considered preferable.
Disease
The term disease broadly refers to any condition that impairs the normal functioning of the body. For this reason, diseases are associated with the dysfunction of the body's normal homeostatic processes. Commonly, the term is used to refer specifically to infectious diseases, which are clinically evident diseases that result from the presence of pathogenic microbial agents, including viruses, bacteria, fungi, protozoa, multicellular organisms, and aberrant proteins known as prions. An infection or colonization that does not and will not produce clinically evident impairment of normal functioning, such as the presence of the normal bacteria and yeasts in the gut, or of a passenger virus, is not considered a disease. By contrast, an infection that is asymptomatic during its incubation period, but expected to produce symptoms later, is usually considered a disease. Non-infectious diseases are all other diseases, including most forms of cancer, heart disease, and genetic disease.
Acquired disease
An acquired disease is one that began at some point during one's lifetime, as opposed to disease that was already present at birth, which is congenital disease. Acquired sounds like it could mean "caught via contagion", but it simply means acquired sometime after birth. It also sounds like it could imply secondary disease, but acquired disease can be primary disease.
Acute disease
An acute disease is one of a short-term nature (acute); the term sometimes also connotes a fulminant nature.
Chronic condition or chronic disease
A chronic disease is one that persists over time, often for at least six months, but may also include illnesses that are expected to last for the entirety of one's natural life.
Congenital disorder or congenital disease
A congenital disorder is one that is present at birth. It is often a genetic disease or disorder and can be inherited. It can also be the result of a vertically transmitted infection from the mother, such as HIV/AIDS.
Genetic disease
A genetic disorder or disease is caused by one or more genetic mutations. It is often inherited, but some mutations are random and de novo.
Hereditary or inherited disease
A hereditary disease is a type of genetic disease caused by genetic mutations that are hereditary (and can run in families).
Iatrogenic disease
An iatrogenic disease or condition is one that is caused by medical intervention, whether as a side effect of a treatment or as an inadvertent outcome.
Idiopathic disease
An idiopathic disease has an unknown cause or source. As medical science has advanced, many diseases with entirely unknown causes have had some aspects of their sources explained and therefore shed their idiopathic status. For example, when germs were discovered, it became known that they were a cause of infection, but particular germs and diseases had not been linked. In another example, it is known that autoimmunity is the cause of some forms of diabetes mellitus type 1, even though the particular molecular pathways by which it works are not yet understood. It is also common to know certain factors are associated with certain diseases; however, association does not necessarily imply causality. For example, a third factor might be causing both the disease, and the associated phenomenon.
Incurable disease
A disease that cannot be cured. Incurable diseases are not necessarily terminal diseases, and sometimes a disease's symptoms can be treated sufficiently for the disease to have little or no impact on quality of life.
Primary disease
A primary disease is a disease that is due to a root cause of illness, as opposed to secondary disease, which is a sequela, or complication that is caused by the primary disease. For example, a common cold is a primary disease, where rhinitis is a possible secondary disease, or sequela. A doctor must determine what primary disease, a cold or bacterial infection, is causing a patient's secondary rhinitis when deciding whether or not to prescribe antibiotics.
Secondary disease
A secondary disease is a disease that is a sequela or complication of a prior, causal disease, which is referred to as the primary disease or simply the underlying cause (root cause). For example, a bacterial infection can be primary, wherein a healthy person is exposed to bacteria and becomes infected, or it can be secondary to a primary cause, that predisposes the body to infection. For example, a primary viral infection that weakens the immune system could lead to a secondary bacterial infection. Similarly, a primary burn that creates an open wound could provide an entry point for bacteria, and lead to a secondary bacterial infection.
Terminal disease
A terminal disease is one that is expected to have the inevitable result of death. Previously, AIDS was a terminal disease; it is now incurable, but can be managed indefinitely using medications.
Illness
The terms illness and sickness are both generally used as synonyms for disease; however, the term illness is occasionally used to refer specifically to the patient's personal experience of their disease. In this model, it is possible for a person to have a disease without being ill (to have an objectively definable, but asymptomatic, medical condition, such as a subclinical infection, or to have a clinically apparent physical impairment but not feel sick or distressed by it), and to be ill without being diseased (such as when a person perceives a normal experience as a medical condition, or medicalizes a non-disease situation in their life – for example, a person who feels unwell as a result of embarrassment, and who interprets those feelings as sickness rather than normal emotions). Symptoms of illness are often not directly the result of infection, but a collection of evolved responses – sickness behavior by the body – that helps clear infection and promote recovery. Such aspects of illness can include lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and inability to concentrate.
Disorder
A disorder is a functional abnormality or disturbance that may or may not show specific signs and symptoms. Medical disorders can be categorized into mental disorders, physical disorders, genetic disorders, emotional and behavioral disorders, and functional disorders. The term disorder is often considered more value-neutral and less stigmatizing than the terms disease or illness, and therefore is preferred terminology in some circumstances. In mental health, the term mental disorder is used as a way of acknowledging the complex interaction of biological, social, and psychological factors in psychiatric conditions; however, the term disorder is also used in many other areas of medicine, primarily to identify physical disorders that are not caused by infectious organisms, such as metabolic disorders.
Medical condition or health condition
A medical condition or health condition is a broad concept that includes all diseases, lesions, and disorders, as well as any nonpathologic condition that normally receives medical treatment, such as pregnancy or childbirth. While the term medical condition generally includes mental illnesses, in some contexts the term is used specifically to denote any illness, injury, or disease except for mental illnesses. The Diagnostic and Statistical Manual of Mental Disorders (DSM), the widely used psychiatric manual that defines all mental disorders, uses the term general medical condition to refer to all diseases, illnesses, and injuries except for mental disorders. This usage is also commonly seen in the psychiatric literature. Some health insurance policies also define a medical condition as any illness, injury, or disease except for psychiatric illnesses.
As it is more value-neutral than terms like disease, the term medical condition is sometimes preferred by people with health issues that they do not consider deleterious. However, by emphasizing the medical nature of the condition, this term is sometimes rejected, such as by proponents of the autism rights movement.
The term medical condition is also a synonym for medical state, in which case it describes an individual patient's current state from a medical standpoint. This usage appears in statements that describe a patient as being in critical condition, for example.
Morbidity
Morbidity (from Latin morbidus 'sick, unhealthy') is a diseased state, disability, or poor health due to any cause. The term may refer to the existence of any form of disease, or to the degree that the health condition affects the patient. Among severely ill patients, the level of morbidity is often measured by ICU scoring systems. Comorbidity, or co-existing disease, is the simultaneous presence of two or more medical conditions, such as schizophrenia and substance abuse.
In epidemiology and actuarial science, the term morbidity (also morbidity rate or morbidity frequency) can refer to either the incidence rate, the prevalence of a disease or medical condition, or the percentage of people who experience a given condition within a given timeframe (e.g., 20% of people will get influenza in a year). This measure of sickness is contrasted with the mortality rate of a condition, which is the proportion of people dying during a given time interval. Morbidity rates are used in actuarial professions, such as health insurance, life insurance, and long-term care insurance, to determine the premiums charged to customers. Morbidity rates help insurers predict the likelihood that an insured will contract or develop any number of specified diseases.
Pathosis or pathology
Pathosis (plural pathoses) is synonymous with disease. The word pathology also has this sense, in which it is commonly used by physicians in the medical literature, although some editors prefer to reserve pathology to its other senses. Sometimes a slight connotative shade causes preference for pathology or pathosis implying "some [as yet poorly analyzed] pathophysiologic process" rather than disease implying "a specific disease entity as defined by diagnostic criteria being already met". This is hard to quantify denotatively, but it explains why cognitive synonymy is not invariable.
Syndrome
A syndrome is the association of several signs and symptoms, or other characteristics that often occur together, regardless of whether the cause is known. Some syndromes such as Down syndrome are known to have only one cause (an extra chromosome at birth). Others such as Parkinsonian syndrome are known to have multiple possible causes. Acute coronary syndrome, for example, is not a single disease itself but is rather the manifestation of any of several diseases including myocardial infarction secondary to coronary artery disease. In yet other syndromes, however, the cause is unknown. A familiar syndrome name often remains in use even after an underlying cause has been found or when there are a number of different possible primary causes. Examples of the first-mentioned type are that Turner syndrome and DiGeorge syndrome are still often called by the "syndrome" name even though they can also be viewed as disease entities and not solely as sets of signs and symptoms.
Predisease
Predisease is a subclinical or prodromal vanguard of a disease. Prediabetes and prehypertension are common examples. The nosology or epistemology of predisease is contentious, though, because there is seldom a bright line differentiating a legitimate concern for subclinical or premonitory status and the conflict of interest–driven over-medicalization (e.g., by pharmaceutical manufacturers) or de-medicalization (e.g., by medical and disability insurers). Identifying legitimate predisease can result in useful preventive measures, such as motivating the person to get a healthy amount of physical exercise, but labeling a healthy person with an unfounded notion of predisease can result in overtreatment, such as taking drugs that only help people with severe disease or paying for treatments with a poor benefit–cost ratio.
One review proposed three criteria for predisease:
a high risk for progression to disease making one "far more likely to develop" it than others are – for example, a pre-cancer will almost certainly turn into cancer over time
actionability for risk reduction – for example, removal of the precancerous tissue prevents it from turning into a potentially deadly cancer
benefit that outweighs the harm of any interventions taken – removing the precancerous tissue prevents cancer, and thus prevents a potential death from cancer.
=== Types by body system ===
Mental
Mental illness is a broad, generic label for a category of illnesses that may include affective or emotional instability, behavioral dysregulation, cognitive dysfunction or impairment. Specific illnesses known as mental illnesses include major depression, generalized anxiety disorders, schizophrenia, and attention deficit hyperactivity disorder, to name a few. Mental illness can be of biological (e.g., anatomical, chemical, or genetic) or psychological (e.g., trauma or conflict) origin. It can impair the affected person's ability to work or study and can harm interpersonal relationships.
Organic
An organic disease is one caused by a physical or physiological change to some tissue or organ of the body. The term sometimes excludes infections. It is commonly used in contrast with mental disorders. It includes emotional and behavioral disorders if they are due to changes to the physical structures or functioning of the body, such as after a stroke or a traumatic brain injury, but not if they are due to psychosocial issues.
=== Stages ===
In an infectious disease, the incubation period is the time between infection and the appearance of symptoms. The latency period is the time between infection and the ability of the disease to spread to another person, which may precede, follow, or be simultaneous with the appearance of symptoms. Some viruses also exhibit a dormant phase, called viral latency, in which the virus hides in the body in an inactive state. For example, varicella zoster virus causes chickenpox in the acute phase; after recovery from chickenpox, the virus may remain dormant in nerve cells for many years, and later cause herpes zoster (shingles).
Acute disease
An acute disease is a short-lived disease, like the common cold.
Chronic disease
A chronic disease is one that lasts for a long time, usually at least six months. During that time, it may be constantly present, or it may go into remission and periodically relapse. A chronic disease may be stable (does not get any worse) or it may be progressive (gets worse over time). Some chronic diseases can be permanently cured. Most chronic diseases can be beneficially treated, even if they cannot be permanently cured.
Clinical disease
One that has clinical consequences; in other words, the stage of the disease that produces the characteristic signs and symptoms of that disease. AIDS is the clinical disease stage of HIV infection.
Cure
A cure is the end of a medical condition or a treatment that is very likely to end it, while remission refers to the disappearance, possibly temporarily, of symptoms. Complete remission is the best possible outcome for incurable diseases.
Flare-up
A flare-up can refer to either the recurrence of symptoms or an onset of more severe symptoms.
Progressive disease
Progressive disease is a disease whose typical natural course is the worsening of the disease until death, serious debility, or organ failure occurs. Slowly progressive diseases are also chronic diseases; many are also degenerative diseases. The opposite of progressive disease is stable disease or static disease: a medical condition that exists, but does not get better or worse.
Refractory disease
A refractory disease is a disease that resists treatment, especially an individual case that resists treatment more than is normal for the specific disease in question.
Subclinical disease
Also called silent disease, silent stage, or asymptomatic disease. This is a stage in some diseases before the symptoms are first noted.
Terminal phase
If a person will die soon from a disease, regardless of whether that disease typically causes death, then the stage between the earlier disease process and active dying is the terminal phase.
Recovery
Recovery can refer to the repairing of physical processes (tissues, organs etc.) and the resumption of healthy functioning after damage causing processes have been cured.
=== Extent ===
Localized disease
A localized disease is one that affects only one part of the body, such as athlete's foot or an eye infection.
Disseminated disease
A disseminated disease has spread to other parts; with cancer, this is usually called metastatic disease.
Systemic disease
A systemic disease is a disease that affects the entire body, such as influenza or high blood pressure.
== Classification ==
Diseases may be classified by cause, pathogenesis (mechanism by which the disease is caused), or by symptoms. Alternatively, diseases may be classified according to the organ system involved, though this is often complicated since many diseases affect more than one organ.
A chief difficulty in nosology is that diseases often cannot be defined and classified clearly, especially when cause or pathogenesis are unknown. Thus diagnostic terms often only reflect a symptom or set of symptoms (syndrome).
Classical classification of human disease derives from the observational correlation between pathological analysis and clinical syndromes. Today it is preferred to classify them by their cause if it is known.
The most known and used classification of diseases is the World Health Organization's ICD. This is periodically updated. Currently, the last publication is the ICD-11.
== Causes ==
Diseases can be caused by any number of factors and may be acquired or congenital. Microorganisms, genetics, the environment or a combination of these can contribute to a diseased state.
Only some diseases, such as influenza, are contagious and commonly believed to be infectious. The microorganisms that cause these diseases are known as pathogens and include varieties of bacteria, viruses, protozoa, and fungi. Infectious diseases can be transmitted, e.g. by hand-to-mouth contact with infectious material on surfaces, by bites of insects or other carriers of the disease, and from contaminated water or food (often via fecal contamination), etc. Also, there are sexually transmitted diseases. In some cases, microorganisms that are not readily spread from person to person play a role, while other diseases can be prevented or ameliorated with appropriate nutrition or other lifestyle changes.
Some diseases, such as most (but not all) forms of cancer, heart disease, and mental disorders, are non-infectious diseases. Many non-infectious diseases have a partly or completely genetic basis (see genetic disorder) and may thus be transmitted from one generation to another.
Social determinants of health are the social conditions in which people live that determine their health. Illnesses are generally related to social, economic, political, and environmental circumstances. Social determinants of health have been recognized by several health organizations such as the Public Health Agency of Canada and the World Health Organization to greatly influence collective and personal well-being. The World Health Organization's Social Determinants Council also recognizes Social determinants of health in poverty.
When the cause of a disease is poorly understood, societies tend to mythologize the disease or use it as a metaphor or symbol of whatever that culture considers evil. For example, until the bacterial cause of tuberculosis was discovered in 1882, experts variously ascribed the disease to heredity, a sedentary lifestyle, depressed mood, and overindulgence in sex, rich food, or alcohol, all of which were social ills at the time.
When a disease is caused by a pathogenic organism (e.g., when malaria is caused by Plasmodium), one should not confuse the pathogen (the cause of the disease) with disease itself. For example, West Nile virus (the pathogen) causes West Nile fever (the disease). The misuse of basic definitions in epidemiology is frequent in scientific publications.
=== Types of causes ===
Airborne
An airborne disease is any disease that is caused by pathogens and transmitted through the air.
Foodborne
Foodborne illness or food poisoning is any illness resulting from the consumption of food contaminated with pathogenic bacteria, toxins, viruses, prions or parasites.
Infectious
Infectious diseases, also known as transmissible diseases or communicable diseases, comprise clinically evident illness (i.e., characteristic medical signs or symptoms of disease) resulting from the infection, presence and growth of pathogenic biological agents in an individual host organism. Included in this category are contagious diseases – an infection, such as influenza or the common cold, that commonly spreads from one person to another – and communicable diseases – a disease that can spread from one person to another, but does not necessarily spread through everyday contact.
Lifestyle
A lifestyle disease is any disease that appears to increase in frequency as countries become more industrialized and people live longer, especially if the risk factors include behavioral choices like a sedentary lifestyle or a diet high in unhealthful foods such as refined carbohydrates, trans fats, or alcoholic beverages.
Non-communicable
A non-communicable disease is a medical condition or disease that is non-transmissible. Non-communicable diseases cannot be spread directly from one person to another. Heart disease and cancer are examples of non-communicable diseases in humans.
== Prevention ==
Many diseases and disorders can be prevented through a variety of means. These include sanitation, proper nutrition, adequate exercise, vaccinations and other self-care and public health measures, such as face mask mandates.
== Treatments ==
Medical therapies or treatments are efforts to cure or improve a disease or other health problems. In the medical field, therapy is synonymous with the word treatment. Among psychologists, the term may refer specifically to psychotherapy or "talk therapy". Common treatments include medications, surgery, medical devices, and self-care. Treatments may be provided by an organized health care system, or informally, by the patient or family members.
Preventive healthcare is a way to avoid an injury, sickness, or disease in the first place. A treatment or cure is applied after a medical problem has already started. A treatment attempts to improve or remove a problem, but treatments may not produce permanent cures, especially in chronic diseases. Cures are a subset of treatments that reverse diseases completely or end medical problems permanently. Many diseases that cannot be completely cured are still treatable. Pain management (also called pain medicine) is that branch of medicine employing an interdisciplinary approach to the relief of pain and improvement in the quality of life of those living with pain.
Treatment for medical emergencies must be provided promptly, often through an emergency department or, in less critical situations, through an urgent care facility.
== Epidemiology ==
Epidemiology is the study of the factors that cause or encourage diseases. Some diseases are more common in certain geographic areas, among people with certain genetic or socioeconomic characteristics, or at different times of the year.
Epidemiology is considered a cornerstone methodology of public health research and is highly regarded in evidence-based medicine for identifying risk factors for diseases. In the study of communicable and non-communicable diseases, the work of epidemiologists ranges from outbreak investigation to study design, data collection, and analysis including the development of statistical models to test hypotheses and the documentation of results for submission to peer-reviewed journals. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic. Epidemiologists rely on a number of other scientific disciplines such as biology (to better understand disease processes), biostatistics (the current raw information available), Geographic Information Science (to store data and map disease patterns) and social science disciplines (to better understand proximate and distal risk factors). Epidemiology can help identify causes as well as guide prevention efforts.
In studying diseases, epidemiology faces the challenge of defining them. Especially for poorly understood diseases, different groups might use significantly different definitions. Without an agreed-on definition, different researchers may report different numbers of cases and characteristics of the disease.
Some morbidity databases are compiled with data supplied by states and territories health authorities, at national levels or larger scale (such as European Hospital Morbidity Database (HMDB)) which may contain hospital discharge data by detailed diagnosis, age and sex. The European HMDB data was submitted by European countries to the World Health Organization Regional Office for Europe.
=== Burdens of disease ===
Disease burden is the impact of a health problem in an area measured by financial cost, mortality, morbidity, or other indicators.
There are several measures used to quantify the burden imposed by diseases on people. The years of potential life lost (YPLL) is a simple estimate of the number of years that a person's life was shortened due to a disease. For example, if a person dies at the age of 65 from a disease, and would probably have lived until age 80 without that disease, then that disease has caused a loss of 15 years of potential life. YPLL measurements do not account for how disabled a person is before dying, so the measurement treats a person who dies suddenly and a person who died at the same age after decades of illness as equivalent. In 2004, the World Health Organization calculated that 932 million years of potential life were lost to premature death.
The quality-adjusted life year (QALY) and disability-adjusted life year (DALY) metrics are similar but take into account whether the person was healthy after diagnosis. In addition to the number of years lost due to premature death, these measurements add part of the years lost to being sick. Unlike YPLL, these measurements show the burden imposed on people who are very sick, but who live a normal lifespan. A disease that has high morbidity, but low mortality, has a high DALY and a low YPLL. In 2004, the World Health Organization calculated that 1.5 billion disability-adjusted life years were lost to disease and injury. In the developed world, heart disease and stroke cause the most loss of life, but neuropsychiatric conditions like major depressive disorder cause the most years lost to being sick.
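As a rough numerical sketch of how these measures differ (simplified formulas with invented figures; real YPLL and DALY calculations use standard life tables and disability weights), consider the following Python illustration:

def ypll(age_at_death, reference_age=80):
    # Years of potential life lost: how much earlier than the reference
    # age the person died (zero if they outlived it).
    return max(reference_age - age_at_death, 0)

def daly(years_of_life_lost, years_lived_with_disability, disability_weight):
    # Disability-adjusted life years: years lost to early death plus
    # years lived with illness, scaled by how disabling the illness is.
    return years_of_life_lost + disability_weight * years_lived_with_disability

if __name__ == "__main__":
    print(ypll(65))           # 15 years of potential life lost, as in the example above
    print(daly(15, 20, 0.3))  # 15 + 0.3 * 20 = 21 DALYs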
== Society and culture ==
How a society responds to diseases is the subject of medical sociology.
A condition may be considered a disease in some cultures or eras but not in others. For example, obesity was associated with prosperity and abundance, and this perception persists in many African regions, especially since the beginning of the HIV/AIDS epidemic. Epilepsy is considered a sign of spiritual gifts among the Hmong people.
Sickness confers the social legitimization of certain benefits, such as illness benefits, work avoidance, and being looked after by others. The person who is sick takes on a social role called the sick role. A person who responds to a dreaded disease, such as cancer, in a culturally acceptable fashion may be publicly and privately honored with higher social status. In return for these benefits, the sick person is obligated to seek treatment and work to become well once more. As a comparison, consider pregnancy, which is not interpreted as a disease or sickness, even if the mother and baby may both benefit from medical care.
Most religions grant exceptions from religious duties to people who are sick. For example, one whose life would be endangered by fasting on Yom Kippur or during the month of Ramadan is exempted from the requirement, or even forbidden from participating. People who are sick are also exempted from social duties. For example, ill health is the only socially acceptable reason for an American to refuse an invitation to the White House.
The identification of a condition as a disease, rather than as simply a variation of human structure or function, can have significant social or economic implications. The controversial recognition of diseases such as repetitive stress injury (RSI) and post-traumatic stress disorder (PTSD) has had a number of positive and negative effects on the financial and other responsibilities of governments, corporations, and institutions towards individuals, as well as on the individuals themselves. The social implication of viewing aging as a disease could be profound, though this classification is not yet widespread.
Lepers were people who were historically shunned because they had an infectious disease, and the term "leper" still evokes social stigma. Fear of disease can still be a widespread social phenomenon, though not all diseases evoke extreme social stigma.
Social standing and economic status affect health. Diseases of poverty are diseases that are associated with poverty and low social status; diseases of affluence are diseases that are associated with high social and economic status. Which diseases are associated with which states varies according to time, place, and technology. Some diseases, such as diabetes mellitus, may be associated with both poverty (poor food choices) and affluence (long lifespans and sedentary lifestyles), through different mechanisms. The term lifestyle diseases describes diseases that are associated with longevity and become more common with age. For example, cancer is far more common in societies in which most members live until they reach the age of 80 than in societies in which most members die before they reach the age of 50.
=== Language of disease ===
An illness narrative is a way of organizing a medical experience into a coherent story that illustrates the sick individual's personal experience.
People use metaphors to make sense of their experiences with disease. The metaphors move disease from an objective thing that exists to an affective experience. The most popular metaphors draw on military concepts: Disease is an enemy that must be feared, fought, battled, and routed. The patient or the healthcare provider is a warrior, rather than a passive victim or bystander. The agents of communicable diseases are invaders; non-communicable diseases constitute internal insurrection or civil war. Because the threat is urgent, perhaps a matter of life and death, unthinkably radical, even oppressive, measures are society's and the patient's moral duty as they courageously mobilize to struggle against destruction. The War on Cancer is an example of this metaphorical use of language. This language is empowering to some patients, but leaves others feeling like they are failures.
Another class of metaphors describes the experience of illness as a journey: The person travels to or from a place of disease, and changes himself, discovers new information, or increases his experience along the way. He may travel "on the road to recovery" or make changes to "get on the right track" or choose "pathways". Some are explicitly immigration-themed: the patient has been exiled from the home territory of health to the land of the ill, changing identity and relationships in the process. This language is more common among British healthcare professionals than the language of physical aggression.
Some metaphors are disease-specific. Slavery is a common metaphor for addictions: The alcoholic is enslaved by drink, and the smoker is captive to nicotine. Some cancer patients treat the loss of their hair from chemotherapy as a metonymy or metaphor for all the losses caused by the disease.
Some diseases are used as metaphors for social ills: "Cancer" is a common description for anything that is endemic and destructive in society, such as poverty, injustice, or racism. AIDS was seen as a divine judgment for moral decadence, and only by purging itself from the "pollution" of the "invader" could society become healthy again. More recently, when AIDS seemed less threatening, this type of emotive language was applied to avian flu and type 2 diabetes mellitus. Authors in the 19th century commonly used tuberculosis as a symbol and a metaphor for transcendence. People with the disease were portrayed in literature as having risen above daily life to become ephemeral objects of spiritual or artistic achievement. In the 20th century, after its cause was better understood, the same disease became the emblem of poverty, squalor, and other social problems.
== See also ==
== References ==
== External links ==
"Man and Disease", BBC Radio 4 discussion with Anne Hardy, David Bradley & Chris Dye (In Our Time, 15 December 2002)
CTD The Comparative Toxicogenomics Database is a scientific resource connecting chemicals, genes, and human diseases.
Free online health-risk assessment by Your Disease Risk at Washington University in St. Louis
Health Topics A–Z, fact sheets about many common diseases at the Centers for Disease Control
Health Topics, MedlinePlus descriptions of most diseases, with access to current research articles.
NLM Comprehensive database from the US National Library of Medicine
OMIM Comprehensive information on genes that cause disease at Online Mendelian Inheritance in Man
Report: The global burden of disease from the World Health Organization (WHO), 2004
The Merck Manual containing detailed description of most diseases | Wikipedia/Disease |
A pragmatic theory of truth is a theory of truth within the philosophies of pragmatism and pragmaticism. Pragmatic theories of truth were first posited by Charles Sanders Peirce, William James, and John Dewey. The common features of these theories are a reliance on the pragmatic maxim as a means of clarifying the meanings of difficult concepts such as truth; and an emphasis on the fact that belief, certainty, knowledge, or truth is the result of an inquiry.
== Background ==
Pragmatic theories of truth developed from earlier ideas in ancient philosophy and among the Scholastics. Pragmatic ideas about truth are often confused with the quite distinct notions of "logic and inquiry", "judging what is true", and "truth predicates".
=== Logic and inquiry ===
In one classical formulation, truth is defined as the good of logic, where logic is a normative science, that is, an inquiry into a good or a value that seeks knowledge of it and the means to achieve it. In this view, truth cannot be discussed to much effect outside the context of inquiry, knowledge, and logic, all very broadly considered.
Most inquiries into the character of truth begin with a notion of an informative, meaningful, or significant element, the truth of whose information, meaning, or significance may be put into question and needs to be evaluated. Depending on the context, this element might be called an artefact, expression, image, impression, lyric, mark, performance, picture, sentence, sign, string, symbol, text, thought, token, utterance, word, work, and so on. Whatever the case, one has the task of judging whether the bearers of information, meaning, or significance are indeed truth-bearers. This judgment is typically expressed in the form of a specific truth predicate, whose positive application to a sign, or so on, asserts that the sign is true.
=== Truth predicates ===
Theories of truth may be described according to several dimensions of description that affect the character of the predicate "true". The truth predicates that are used in different theories may be classified by the number of things that have to be mentioned in order to assess the truth of a sign, counting the sign itself as the first thing.
In formal logic, this number is called the arity of the predicate. The kinds of truth predicates may then be subdivided according to any number of more specific characters that various theorists recognize as important.
A monadic truth predicate is one that applies to its main subject — typically a concrete representation or its abstract content — independently of reference to anything else. In this case one can say that a truthbearer is true in and of itself.
A dyadic truth predicate is one that applies to its main subject only in reference to something else, a second subject. Most commonly, the auxiliary subject is either an object, an interpreter, or a language to which the representation bears some relation.
A triadic truth predicate is one that applies to its main subject only in reference to a second and a third subject. For example, in a pragmatic theory of truth, one has to specify both the object of the sign, and either its interpreter or another sign called the interpretant before one can say that the sign is true of its object to its interpreting agent or sign.
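The difference in arity can be illustrated with a small sketch in which an n-ary truth predicate is modeled as a set of n-tuples and applying the predicate is a membership test; the domain, the example sign, and the relations below are invented for illustration and belong to no particular theory's formal apparatus.

```python
# Toy truth predicates of arity one, two, and three, modeled as sets of tuples.
TRUE_1 = {"'snow is white'"}                     # monadic: property of the sign alone
TRUE_2 = {("'snow is white'", "snow")}           # dyadic: sign is true of an object
TRUE_3 = {("'snow is white'", "snow", "Alice")}  # triadic: sign is true of an object to an interpreter

def is_true_monadic(sign):
    return sign in TRUE_1

def is_true_dyadic(sign, obj):
    return (sign, obj) in TRUE_2

def is_true_triadic(sign, obj, interpreter):
    return (sign, obj, interpreter) in TRUE_3

print(is_true_monadic("'snow is white'"))                   # True
print(is_true_dyadic("'snow is white'", "snow"))            # True
print(is_true_triadic("'snow is white'", "snow", "Alice"))  # True
```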
Several qualifications must be kept in mind with respect to any such radically simple scheme of classification, as real practice seldom presents any pure types, and there are settings in which it is useful to speak of a theory of truth that is "almost" k-adic, or that "would be" k-adic if certain details can be abstracted away and neglected in a particular context of discussion. That said, given the generic division of truth predicates according to their arity, further species can be differentiated within each genus according to a number of more refined features.
The truth predicate of interest in a typical correspondence theory of truth tells of a relation between representations and objective states of affairs, and is therefore expressed, for the most part, by a dyadic predicate. In general terms, one says that a representation is true of an objective situation, more briefly, that a sign is true of an object. The nature of the correspondence may vary from theory to theory in this family. The correspondence can be fairly arbitrary or it can take on the character of an analogy, an icon, or a morphism, whereby a representation is rendered true of its object by the existence of corresponding elements and a similar structure.
== Peirce ==
Very little in Peirce's thought can be understood in its proper light without understanding that he thinks all thoughts are signs, and thus, according to his theory of thought, no thought is understandable outside the context of a sign relation. Sign relations taken collectively are the subject matter of a theory of signs. So Peirce's semiotic, his theory of sign relations, is key to understanding his entire philosophy of pragmatic thinking and thought.
In his contribution to the article "Truth and Falsity and Error" for Baldwin's Dictionary of Philosophy and Psychology (1901), Peirce defines truth in the following way:
Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth. (Peirce 1901, see Collected Papers (CP) 5.565).
This statement emphasizes Peirce's view that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and "reference to the future", are essential to a proper conception of truth. Although Peirce occasionally uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation, he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he follows long tradition in relegating to a lower status than real definitions.
That truth is the correspondence of a representation with its object is, as Kant says, merely the nominal definition of it. Truth belongs exclusively to propositions. A proposition has a subject (or set of subjects) and a predicate. The subject is a sign; the predicate is a sign; and the proposition is a sign that the predicate is a sign of that of which the subject is a sign. If it be so, it is true. But what does this correspondence or reference of the sign, to its object, consist in? (Peirce 1906, CP 5.553).
Here Peirce makes a statement that is decisive for understanding the relationship between his pragmatic definition of truth and any theory of truth that leaves it solely and simply a matter of representations corresponding with their objects. Peirce, like Kant before him, recognizes Aristotle's distinction between a nominal definition, a definition in name only, and a real definition, one that states the function of the concept, the reason for conceiving it, and so indicates the essence, the underlying substance of its object. This tells us the sense in which Peirce entertained a correspondence theory of truth, namely, a purely nominal sense. To get beneath the superficiality of the nominal definition it is necessary to analyze the notion of correspondence in greater depth.
In preparing for this task, Peirce makes use of an allegorical story, omitted here, the moral of which is that there is no use seeking a conception of truth that we cannot conceive ourselves being able to capture in a humanly conceivable concept. So we might as well proceed on the assumption that we have a real hope of comprehending the answer, of being able to "handle the truth" when the time comes. Bearing that in mind, the problem of defining truth reduces to the following form:
Now thought is of the nature of a sign. In that case, then, if we can find out the right method of thinking and can follow it out — the right method of transforming signs — then truth can be nothing more nor less than the last result to which the following out of this method would ultimately carry us. In that case, that to which the representation should conform, is itself something in the nature of a representation, or sign — something noumenal, intelligible, conceivable, and utterly unlike a thing-in-itself. (Peirce 1906, CP 5.553).
Peirce's theory of truth depends on two other, intimately related subject matters, his theory of sign relations and his theory of inquiry. Inquiry is a special case of semiosis, a process that transforms signs into signs while maintaining a specific relationship to an object, which object may be located outside the trajectory of signs or else be found at the end of it. Inquiry includes all forms of belief revision and logical inference, including scientific method, what Peirce here means by "the right method of transforming signs". A sign-to-sign transaction relating to an object is a transaction that involves three parties, or a relation that involves three roles. This is called a ternary or triadic relation in logic. Consequently, pragmatic theories of truth are largely expressed in terms of triadic truth predicates.
The statement above tells us one more thing: Peirce, having started out in accord with Kant, is here giving notice that he is parting ways with the Kantian idea that the ultimate object of a representation is an unknowable thing-in-itself. Peirce would say that the object is knowable, in fact, it is known in the form of its representation, however imperfectly or partially.
Reality and truth are coordinate concepts in pragmatic thinking, each being defined in relation to the other, and both together as they participate in the time evolution of inquiry. Inquiry is not a disembodied process, nor the occupation of a singular individual, but the common life of an unbounded community.
The real, then, is that which, sooner or later, information and reasoning would finally result in, and which is therefore independent of the vagaries of me and you. Thus, the very origin of the conception of reality shows that this conception essentially involves the notion of a COMMUNITY, without definite limits, and capable of a definite increase of knowledge. (Peirce 1868, CP 5.311).
Different minds may set out with the most antagonistic views, but the progress of investigation carries them by a force outside of themselves to one and the same conclusion. This activity of thought by which we are carried, not where we wish, but to a foreordained goal, is like the operation of destiny. No modification of the point of view taken, no selection of other facts for study, no natural bent of mind even, can enable a man to escape the predestinate opinion. This great law is embodied in the conception of truth and reality. The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth, and the object represented in this opinion is the real. That is the way I would explain reality. (Peirce 1878, CP 5.407).
== James ==
William James's version of the pragmatic theory is often summarized by his statement that "the 'true' is only the expedient in our way of thinking, just as the 'right' is only the expedient in our way of behaving." By this, James meant that truth is a quality the value of which is confirmed by its effectiveness when applying concepts to actual practice (thus, "pragmatic"). James's pragmatic theory is a synthesis of correspondence theory of truth and coherence theory of truth, with an added dimension. Truth is verifiable to the extent that thoughts and statements correspond with actual things, as well as "hangs together," or coheres, fits as pieces of a puzzle might fit together, and these are in turn verified by the observed results of the application of an idea to actual practice. James said that "all true processes must lead to the face of directly verifying sensible experiences somewhere" (p. 83). He also extended his pragmatic theory well beyond the scope of scientific verifiability, and even into the realm of the mystical: "On pragmatic principles, if the hypothesis of God works satisfactorily in the widest sense of the word, then it is 'true.'" (p. 115)
"Truth, as any dictionary will tell you, is a property of certain of our ideas. It means their 'agreement', as falsity means their disagreement, with 'reality'. Pragmatists and intellectualists both accept this definition as a matter of course. They begin to quarrel only after the question is raised as to what may precisely be meant by the term 'agreement', and what by the term 'reality', when reality is taken as something for our ideas to agree with.": 76
Pragmatism, James clarifies, is not a new philosophy. He states that it instead focuses on discerning truth between contrasting schools of thought. "To understand truth, he argues, we must consider the pragmatic 'cash-value' of having true beliefs and the practical difference of having true ideas." By using the term 'cash-value', James refers to the practical consequences of settling, through the pragmatic method, disputes that would otherwise yield no decisive answer. In such cases, the pragmatic method must "try to interpret each notion by tracing its respective practical consequences." William James uses an analogy of a squirrel on a tree to further explain the pragmatic method.
James imagines a squirrel on a tree. If it clung to one side of the tree, and a person stood on the other, and as the person walked around the tree so too did the squirrel as to never be seen by the person, would the person rightly be walking around the squirrel? “’Depends on what you practically mean by ‘going round’ the squirrel. If you mean passing from the north of him to the east, then to the south, then to the west, then to the north of him again, obviously the man does go round him… but on the contrary if you mean being first in front of him, then behind him, then on his left, then finally in front again, it is quite obvious that the man fails to go round him." In such arguments, where no practical consequences can be found after making a distinction, the argument should be dropped. If, however, the argument was to yield one result which clearly holds greater consequences, then that side should be agreed upon solely for its intrinsic value. Although James never actually clarifies what “practical consequences” are, he does mention how the best way to find division between possible consequences is by first practically defining what each side of the argument means. In terms of James’s example, he says: “You are both right and both wrong according as you conceive the verb ‘to go round’ in one practical fashion or the other." Thus the pragmatic theory seeks to find truth through the division and practical consequences between contrasting sides to establish which side is correct.
William James (1907) begins his chapter on "Pragmatism's Conception of Truth" in much the same letter and spirit as the above selection from Peirce (1906), noting the nominal definition of truth as a plausible point of departure, but immediately observing that the pragmatist's quest for the meaning of truth can only begin, not end there.
"The popular notion is that a true idea must copy its reality. Like other popular views, this one follows the analogy of the most usual experience. Our true ideas of sensible things do indeed copy them. Shut your eyes and think of yonder clock on the wall, and you get just such a true picture or copy of its dial. But your idea of its 'works' (unless you are a clockmaker) is much less of a copy, yet it passes muster, for it in no way clashes with reality. Even though it should shrink to the mere word 'works', that word still serves you truly; and when you speak of the 'time-keeping function' of the clock, or of its spring's 'elasticity', it is hard to see exactly what your ideas can copy.": 77
James exhibits a knack for popular expression that Peirce seldom sought, and here his analysis of correspondence by way of a simple thought experiment cuts right to the quick of the first major question to ask about it, namely: To what extent is the notion of correspondence involved in truth covered by the ideas of analogues, copies, or iconic images of the thing represented? The answer is that the iconic aspect of correspondence can be taken literally only in regard to sensory experiences of the more precisely eidetic sort. When it comes to the kind of correspondence that might be said to exist between a symbol, a word like "works", and its object, the springs and catches of the clock on the wall, then the pragmatist recognizes that a more than nominal account of the matter still has a lot more explaining to do.
=== Making truth ===
Instead of truth being ready-made for us, James asserts that we and reality jointly "make" truth. This idea has two senses: (1) truth is mutable (a view often attributed to William James and F.C.S. Schiller); and (2) truth is relative to a conceptual scheme (a view more widely accepted in Pragmatism).
(1) Mutability of truth
"Truth" is not readily defined in Pragmatism. Can beliefs pass from being true to being untrue and back? For James, beliefs are not true until they have been made true by verification. James believed propositions become true over the long term through proving their utility in a person's specific situation. The opposite of this process is not falsification, but rather the belief ceases to be a "live option." F.C.S. Schiller, on the other hand, clearly asserted beliefs could pass into and out of truth on a situational basis. Schiller held that truth was relative to specific problems. If I want to know how to return home safely, the true answer will be whatever is useful to solving that problem. Later on, when faced with a different problem, what I came to believe with the earlier problem may now be false. As my problems change, and as the most useful way to solve a problem shifts, so does the property of truth.
C.S. Peirce considered the idea that beliefs are true at one time but false at another (or true for one person but false for another) to be one of the "seeds of death" by which James allowed his pragmatism to become "infected." For Peirce the pragmatic view implies theoretical claims should be tied to verification processes (i.e. they should be subject to test). They shouldn't be tied to our specific problems or life needs. Truth is defined, for Peirce, as what would be the ultimate outcome (not any outcome in real time) of inquiry by a (usually scientific) community of investigators. William James, while agreeing with this definition, also characterized truthfulness as a species of the good: if something is true it is trustworthy and reliable and will remain so in every conceivable situation. Both Peirce and Dewey connect the definitions of truth and warranted assertability. Hilary Putnam also developed his internal realism around the idea a belief is true if it is ideally justified in epistemic terms. About James' and Schiller's view, Putnam says:
Truth cannot simply be rational acceptability for one fundamental reason; truth is supposed to be a property of a statement that cannot be lost, whereas justification can be lost. The statement 'The earth is flat' was, very likely, rationally acceptable 3000 years ago; but it is not rationally acceptable today. Yet it would be wrong to say that 'the earth is flat' was true 3,000 years ago; for that would mean that the earth has changed its shape. (Putnam 1981, p. 55)
Rorty has also weighed in against James and Schiller:
Truth is, to be sure, an absolute notion, in the following sense: "true for me but not for you" and "true in my culture but not in yours" are weird, pointless locutions. So is "true then, but not now." ... James would, indeed, have done better to say that phrases like "the good in the way of belief" and "what it is better for us to believe" are interchangeable with "justified" rather than with "true." (Rorty 1998, p. 2)
(2) Conceptual relativity
With James and Schiller we make things true by verifying them—a view rejected by most pragmatists. However, nearly all pragmatists do accept the idea there can be no truths without a conceptual scheme to express those truths. That is,
Unless we decide upon how we are going to use concepts like 'object', 'existence' etc., the question 'how many objects exist' does not really make any sense. But once we decide the use of these concepts, the answer to the above-mentioned question within that use or 'version', to put in Nelson Goodman's phrase, is no more a matter of 'convention'. (Maitra 2003 p. 40)
F.C.S. Schiller used the analogy of a chair to make clear what he meant by the phrase that truth is made: just as a carpenter makes a chair out of existing materials and doesn't create it out of nothing, truth is a transformation of our experience—but this doesn't imply reality is something we're free to construct or imagine as we please.
== Dewey ==
John Dewey, less broadly than William James but much more broadly than Charles Peirce, held that inquiry, whether scientific, technical, sociological, philosophical or cultural, is self-corrective over time if openly submitted for testing by a community of inquirers in order to clarify, justify, refine and/or refute proposed truths. In his Logic: The Theory of Inquiry (1938), Dewey gave the following definition of inquiry:
Inquiry is the controlled or directed transformation of an indeterminate situation into one that is so determinate in its constituent distinctions and relations as to convert the elements of the original situation into a unified whole. (Dewey, p. 108).
The index of the same book has exactly one entry under the heading truth, and it refers to the following footnote:
The best definition of truth from the logical standpoint which is known to me is that by Peirce: "The opinion which is fated to be ultimately agreed to by all who investigate is what we mean by the truth, and the object represented in this opinion is the real" [CP 5.407]. (Dewey, 343 n).
Dewey says more of what he understands by truth in terms of his preferred concept of warranted assertibility as the end-in-view and conclusion of inquiry (Dewey, 14–15).
== Mead ==
== Criticisms ==
Several objections are commonly made to pragmatist accounts of truth, of either sort.
The first, due originally to Bertrand Russell (1907) in a discussion of James's theory, is that pragmatism mixes up the notion of truth with epistemology. Pragmatism describes an indicator or a sign of truth. It really cannot be regarded as a theory of the meaning of the word "true". There is a difference between stating an indicator and giving the meaning. For example, when the streetlights turn on at the end of a day, that is an indicator, a sign, that evening is coming on. It would be an obvious mistake to say that the word "evening" just means "the time that the streetlights turn on". In the same way, while it might be an indicator of truth that a proposition is part of that perfect science at the ideal limit of inquiry, that just is not what "true" means.
Russell's objection is that pragmatism mixes up an indicator of truth with the meaning of the predicate 'true'. There is a difference between the two, and pragmatism confuses them. In this respect pragmatism is akin to Berkeley's view that to be is to be perceived, which similarly confuses an indication or proof that something exists with the meaning of the word 'exists', or with what it is for something to exist.
Other objections to pragmatism include how we define what it means to say a belief "works", or that it is "useful to believe". The vague usage of these terms, first popularized by James, has led to much debate.
A viable, more sophisticated consensus theory of truth, a mixture of Peircean theory with speech-act theory and social theory, is that presented and defended by Jürgen Habermas, which sets out the universal pragmatic conditions of ideal consensus and responds to many objections to earlier versions of a pragmatic, consensus theory of truth. Habermas distinguishes explicitly between factual consensus, i.e. the beliefs that happen to hold in a particular community, and rational consensus, i.e. consensus attained in conditions approximating an "ideal speech situation", in which inquirers or members of a community suspend or bracket prevailing beliefs and engage in rational discourse aimed at truth and governed by the force of the better argument, under conditions in which all participants in discourse have equal opportunities to engage in constative (assertions of fact), normative, and expressive speech acts, and in which discourse is not distorted by the intervention of power or the internalization of systematic blocks to communication.
Recent Peirceans Cheryl Misak and Robert B. Talisse have attempted to formulate Peirce's theory of truth in a way that improves on Habermas and provides an epistemological conception of deliberative democracy.
== Notes and references ==
== Further reading ==
Allen, James Sloan, ed. William James on Habit, Will, Truth, and the Meaning of Life. Frederic C. Beil, Publisher, Savannah, GA.
Awbrey, Jon, and Awbrey, Susan (1995), "Interpretation as Action: The Risk of Inquiry", Inquiry: Critical Thinking Across the Disciplines 15, 40–52. Eprint
Baldwin, J.M. (1901–1905), Dictionary of Philosophy and Psychology, 3 volumes in 4, New York, NY.
Dewey, John (1929), The Quest for Certainty: A Study of the Relation of Knowledge and Action, Minton, Balch, and Company, New York, NY. Reprinted, pp. 1–254 in John Dewey, The Later Works, 1925–1953, Volume 4: 1929, Jo Ann Boydston (ed.), Harriet Furst Simon (text. ed.), Stephen Toulmin (intro.), Southern Illinois University Press, Carbondale and Edwardsville, IL, 1984.
Dewey, John (1938), Logic: The Theory of Inquiry, Henry Holt and Company, New York, NY, 1938. Reprinted, pp. 1–527 in John Dewey, The Later Works, 1925–1953, Volume 12: 1938, Jo Ann Boydston (ed.), Kathleen Poulos (text. ed.), Ernest Nagel (intro.), Southern Illinois University Press, Carbondale and Edwardsville, IL, 1986.
Ferm, Vergilius (1962), "Consensus Gentium", p. 64 in Runes (1962).
Haack, Susan (1993), Evidence and Inquiry: Towards Reconstruction in Epistemology, Blackwell Publishers, Oxford, UK.
Habermas, Jürgen (1976), "What Is Universal Pragmatics?", 1st published, "Was heißt Universalpragmatik?", Sprachpragmatik und Philosophie, Karl-Otto Apel (ed.), Suhrkamp Verlag, Frankfurt am Main. Reprinted, pp. 1–68 in Jürgen Habermas, Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, MA, 1979.
Habermas, Jürgen (1979), Communication and the Evolution of Society, Thomas McCarthy (trans.), Beacon Press, Boston, MA.
Habermas, Jürgen (1990), Moral Consciousness and Communicative Action, Christian Lenhardt and Shierry Weber Nicholsen (trans.), Thomas McCarthy (intro.), MIT Press, Cambridge, MA.
Habermas, Jürgen (2003), Truth and Justification, Barbara Fultner (trans.), MIT Press, Cambridge, MA.
James, William (1907), Pragmatism, A New Name for Some Old Ways of Thinking, Popular Lectures on Philosophy, Longmans, Green, and Company, New York, NY.
James, William (1909), The Meaning of Truth, A Sequel to 'Pragmatism', Longmans, Green, and Company, New York, NY.
Kant, Immanuel (1800), Introduction to Logic. Reprinted, Thomas Kingsmill Abbott (trans.), Dennis Sweet (intro.), Barnes and Noble, New York, NY, 2005.
Peirce, C.S., Writings of Charles S. Peirce, A Chronological Edition, Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1981–. Volume 1 (1857–1866), 1981. Volume 2 (1867–1871), 1984. Volume 3 (1872–1878), 1986. Cited as W volume:page.
Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958. Cited as CP vol.para.
Peirce, C.S., The Essential Peirce, Selected Philosophical Writings, Volume 1 (1867–1893), Nathan Houser and Christian Kloesel (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1992. Cited as EP 1:page.
Peirce, C.S., The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913), Peirce Edition Project (eds.), Indiana University Press, Bloomington and Indianapolis, IN, 1998. Cited as EP 2:page.
Peirce, C.S. (1868), "Some Consequences of Four Incapacities", Journal of Speculative Philosophy 2 (1868), 140–157. Reprinted (CP 5.264–317), (W 2:211–242), (EP 1:28–55). Eprint. NB. Misprints in CP and Eprint copy.
Peirce, C.S. (1877), "The Fixation of Belief", Popular Science Monthly 12 (1877), 1–15. Reprinted (CP 5.358–387), (W 3:242–257), (EP 1:109–123). Eprint.
Peirce, C.S. (1878), "How to Make Our Ideas Clear", Popular Science Monthly 12 (1878), 286–302. Reprinted (CP 5.388–410), (W 3:257–276)), (EP 1:124–141).
Peirce, C.S. (1901), section entitled "Logical", pp. 718–720 in "Truth and Falsity and Error", pp. 716–720 in J.M. Baldwin (ed.), Dictionary of Philosophy and Psychology, vol. 2. Google Books Eprint. Reprinted (CP 5.565–573).
Peirce, C.S. (1905), "What Pragmatism Is", The Monist 15, 161–181. Reprinted (CP 5.411–437), (EP 2:331–345). Internet Archive Eprint.
Peirce, C.S. (1906), "Basis of Pragmaticism", first published in Collected Papers, CP 1.573–574 and 5.549–554.
Rescher, Nicholas (1995), Pluralism: Against the Demand for Consensus, Oxford University Press, Oxford, UK.
Rorty, R. (1979), Philosophy and the Mirror of Nature, Princeton University Press, Princeton, NJ.
Runes, Dagobert D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ. Cited as DOP. | Wikipedia/Pragmatic_theory_of_truth |
In philosophy, specifically in the area of metaphysics, counterpart theory is an alternative to standard (Kripkean) possible-worlds semantics for interpreting quantified modal logic. Counterpart theory still presupposes possible worlds, but differs in certain important respects from the Kripkean view. The form of the theory most commonly cited was developed by David Lewis, first in a paper and later in his book On the Plurality of Worlds.
== Differences from the Kripkean view ==
Counterpart theory (hereafter "CT"), as formulated by Lewis, requires that individuals exist in only one world. The standard account of possible worlds assumes that a modal statement about an individual (e.g., "it is possible that x is y") means that there is a possible world, W, where the individual x has the property y; in this case there is only one individual, x, at issue. On the contrary, counterpart theory supposes that this statement is really saying that there is a possible world, W, wherein exists an individual that is not x itself, but rather a distinct individual x′, different from but nonetheless similar to x. So, when I state that I might have been a banker (rather than a philosopher), according to counterpart theory I am saying not that I exist in another possible world where I am a banker, but rather that my counterpart does. Nevertheless, this statement about my counterpart is still held to ground the truth of the statement that I might have been a banker. The requirement that any individual exist in only one world is meant to avoid what Lewis termed the "problem of accidental intrinsics", which (he held) would require a single individual to both have and simultaneously not have particular properties.
The counterpart theoretic formalization of modal discourse also departs from the standard formulation by eschewing use of modality operators (Necessarily, Possibly) in favor of quantifiers that range over worlds and 'counterparts' of individuals in those worlds. Lewis put forth a set of primitive predicates and a number of axioms governing CT and a scheme for translating standard modal claims in the language of quantified modal logic into his CT.
In addition to interpreting modal claims about objects and possible worlds, CT can also be applied to the identity of a single object at different points in time. The view that an object can retain its identity over time is often called endurantism, and it claims that objects are ‘wholly present’ at different moments (see the counterpart relation, below). An opposing view is that any object in time is made up of temporal parts or is perduring.
Lewis' view on possible worlds is sometimes called modal realism.
=== The basics ===
The possibilities that CT is supposed to describe are “ways a world might be” (Lewis 1986:86) or more exactly:
(1) absolutely every way that a world could possibly be is a way that some world is, and
(2) absolutely every way that a part of a world could possibly be is a way that some part of some world is. (Lewis 1986:86.)
Add also the following “principle of recombination,” which Lewis describes this way: “patching together parts of different possible worlds yields another possible world […]. [A]nything can coexist with anything else, […] provided they occupy distinct spatiotemporal positions.” (Lewis 1986:87-88). But these possibilities should be restricted by CT.
== The counterpart relation ==
The counterpart relation (hereafter C-relation) differs from the notion of identity. Identity is a reflexive, symmetric, and transitive relation. The counterpart relation is only a similarity relation; it needn’t be transitive or symmetric. The C-relation is also known as genidentity (Carnap 1967), I-relation (Lewis 1983), and the unity relation (Perry 1975).
If identity is shared between objects in different possible worlds then the same object can be said to exist in different possible worlds (a trans-world object, that is, a series of objects sharing a single identity).
=== Parthood relation ===
An important part of the way Lewis’s worlds deliver possibilities is the use of the parthood relation. This gives some neat formal machinery, mereology. This is an axiomatic system that uses formal logic to describe the relationship between parts and wholes, and between parts within a whole. Especially important, and most reasonable, according to Lewis, is the strongest form that accepts the existence of mereological sums or the thesis of unrestricted mereological composition (Lewis 1986:211-213).
== The formal theory ==
As a formal theory, counterpart theory can be used to translate sentences of modal quantificational logic into a non-modal quantificational language: sentences that seem to quantify over possible individuals are translated into CT. (Explicit primitives and axioms have not yet been stated for the temporal or spatial use of CT.) Let CT be stated in quantificational logic and contain the following primitives:
Wx (x is a possible world)
Ixy (x is in possible world y)
Ax (x is actual)
Cxy (x is a counterpart of y)
We have the following axioms (taken from Lewis 1968):
A1. (Ixy → Wy) (Nothing is in anything except a world)
A2. ((Ixy ∧ Ixz) → y = z) (Nothing is in two worlds)
A3. (Cxy → ∃z Ixz) (Whatever is a counterpart is in a world)
A4. (Cxy → ∃z Iyz) (Whatever has a counterpart is in a world)
A5. ((Ixy ∧ Izy ∧ Cxz) → x = z) (Nothing is a counterpart of anything else in its world)
A6. (Ixy → Cxx) (Anything in a world is a counterpart of itself)
A7. ∃x (Wx ∧ ∀y (Iyx ↔ Ay)) (Some world contains all and only actual things)
A8. ∃x Ax (Something is actual)
The primitives and the axioms A1 through A8 together make up the standard counterpart system.
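As a concrete illustration, the axioms can be checked against a tiny finite model by brute-forcing the quantifiers over the domain. The domain, the world and counterpart assignments, and all names below are invented toy data; the sketch is only meant to show how A1 through A8 constrain the I, A, and C relations, not to reproduce Lewis's own apparatus.

```python
# A toy finite model for the counterpart-theoretic primitives, with the axioms
# A1-A8 checked by brute force over the domain. All data here are illustrative
# assumptions, not part of Lewis's formal theory.
D = {"w1", "w2", "humphrey", "humphrey_star"}       # domain of the model
W = {"w1", "w2"}                                     # Wx: x is a possible world
I = {("humphrey", "w1"), ("humphrey_star", "w2")}    # Ixy: x is in world y
A = {"humphrey"}                                     # Ax: x is actual
C = {("humphrey", "humphrey"), ("humphrey_star", "humphrey_star"),
     ("humphrey", "humphrey_star"), ("humphrey_star", "humphrey")}  # Cxy: counterparts

def implies(p, q):
    return (not p) or q

axioms = {
    "A1": all(implies((x, y) in I, y in W) for x in D for y in D),
    "A2": all(implies((x, y) in I and (x, z) in I, y == z)
              for x in D for y in D for z in D),
    "A3": all(implies((x, y) in C, any((x, z) in I for z in D)) for x in D for y in D),
    "A4": all(implies((x, y) in C, any((y, z) in I for z in D)) for x in D for y in D),
    "A5": all(implies((x, y) in I and (z, y) in I and (x, z) in C, x == z)
              for x in D for y in D for z in D),
    "A6": all(implies((x, y) in I, (x, x) in C) for x in D for y in D),
    "A7": any(x in W and all(((y, x) in I) == (y in A) for y in D) for x in D),
    "A8": any(x in A for x in D),
}

print(axioms)  # every axiom evaluates to True in this toy model
```

In this toy model, for example, A7 is witnessed by w1, the world that contains exactly the actual things.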
=== Comments on the axioms ===
A1 excludes individuals that exist in no world at all. The way an individual is in a world is by being a part of that world, so the basic relation is mereological.
A2 excludes individuals that exist in more than one possible world. Because David Lewis accepts the existence of arbitrary mereological sums, there are cross-world sums whose parts exist in several possible worlds; but these sums are not possible individuals, since none of them has the property of being actual, and that is because it is not possible for such a whole to be actual.
A3 and A4 make the counterpart relation worldbound: whatever is a counterpart is in some world (A3), and whatever has a counterpart is in some world (A4).
A5 and A6 restrict the counterpart relation within a single possible world: anything in a world is a counterpart of itself (A6), and nothing is a counterpart of anything else in its own world (A5).
A7 and A8 make one possible world the unique actual world.
=== Principles that are not accepted in normal CT ===
R1. (Cxy → Cyx) (Symmetry of the counterpart relation)
R2. ((Cxy ∧ Cyz) → Cxz) (Transitivity of the counterpart relation)
R3. ((Cy₁x ∧ Cy₂x ∧ Iy₁w₁ ∧ Iy₂w₂ ∧ y₁ ≠ y₂) → w₁ ≠ w₂) (Nothing in any world has more than one counterpart in any other world)
R4. ((Cyx₁ ∧ Cyx₂ ∧ Ix₁w₁ ∧ Ix₂w₂ ∧ x₁ ≠ x₂) → w₁ ≠ w₂) (No two things in any world have a common counterpart in any other world)
R5. ((Ww₁ ∧ Ww₂ ∧ Ixw₁) → ∃y (Iyw₂ ∧ Cxy)) (For any two worlds, anything in one is a counterpart of something in the other)
R6. ((Ww₁ ∧ Ww₂ ∧ Ixw₁) → ∃y (Iyw₂ ∧ Cyx)) (For any two worlds, anything in one has some counterpart in the other)
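One reason R1 and R2 are not theorems of CT is that the counterpart relation is a similarity relation, and "similar enough" relations are in general neither guaranteed symmetric nor transitive. The following minimal sketch illustrates the transitivity failure with an invented numerical notion of closeness; the threshold and sample values are assumptions made up for the example.

```python
# "Similar enough" as closeness within a threshold; small differences can chain
# into a large one, so the relation need not be transitive (cf. R2).
def similar_enough(a, b, threshold=1.0):
    return abs(a - b) <= threshold

x, y, z = 0.0, 0.9, 1.8
print(similar_enough(x, y), similar_enough(y, z), similar_enough(x, z))
# True True False: the chain x-y-z breaks transitivity
```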
== Motivations for counterpart theory ==
CT can be applied to the relationship between identical objects in different worlds or at different times. Depending on the subject, there are different reasons for accepting CT as a description of the relation between different entities.
=== In possible worlds ===
David Lewis defended modal realism. This is the view that a possible world is a concrete, maximal connected spatio-temporal region. The actual world is one of the possible worlds; it is also concrete. Because a single concrete object demands spatio-temporal connectedness, a possible concrete object can only exist in one possible world. Still, we say true things like: It is possible that Hubert Humphrey won the 1968 US presidential election. How is it true? Humphrey has a counterpart in another possible world that wins the 1968 election in that world.
Lewis also argues against three other alternatives that might be compatible with possibilism: overlapping individuals, trans-world individuals, and haecceity.
Some philosophers, such as Peter van Inwagen (1985), see no problem with trans-world identity. Lewis seems to share this attitude. He says:
"… like the Holy Roman Empire, it is badly named. […] In the first place we should bear in mind that Trans-World Airlines is an intercontinental, but not as yet an interplanetary carrier. More important, we should not suppose that we have here any problem with identity.
We never have. Identity is utterly simple and unproblematic. Everything is identical to itself; nothing is ever identical to anything else except itself. There is never any problem about what makes something identical to itself; nothing can ever fail to be. And there is never any problem about what makes two things identical; two things never can be identical.
There might be a problem about how to define identity to someone sufficiently lacking in conceptual resources — we note that it won't suffice to teach him certain rules of inference — but since such unfortunates are rare, even among philosophers, we needn't worry much if their condition is incurable.
We do state plenty of genuine problems in terms of identity. But we needn't state them so.” (Lewis 1986:192-193)
==== Overlapping individuals ====
An overlapping individual has a part in the actual world and a part in another world. Because identity is not problematic, we get overlapping individuals by having overlapping worlds. Two worlds overlap if they share a common part. But some properties of overlapping objects are, for Lewis, troublesome (Lewis 1986:199-210).
The problem is with an object’s accidental intrinsic properties, such as shape and weight, which supervene on its parts. Humphrey could have the property of having six fingers on his left hand. How does he do that? It can’t be true that Humphrey has both the property of having six fingers and five fingers on his left hand. What we might say is that he has five fingers at this world and six fingers at that world. But how should these modifiers be understood?
According to McDaniel (2004), if Lewis is right, the defender of overlapping individuals has to accept genuine contradictions or defend the view that every object has all its properties essentially.
How can you be one year older than you are? One way is to say that there is a possible world in which you yourself are one year older. Another way is for you to have a counterpart in that possible world, who has the property of being one year older than you.
==== Trans-world individuals ====
Take Humphrey: if he is a trans-world individual he is the mereological sum of all of the possible Humphreys in the different worlds. He is like a road that goes through different regions. There are parts that overlap, but we can also say that there is a northern part that is connected to the southern part and that the road is the mereological sum of these parts. The same thing with Humphrey. One part of him is in one world, another part in another world.
"It is possible for something to exist if it is possible for the whole to exist. That is, if there is a world at which the whole of it exists. That is, if there is a world such that quantifying only over parts of that world, the whole of it exists. That is, if the whole of it is among the parts of some world. That is, if it is part of some world – and hence not a trans-world individual. Parts of worlds are possible individuals; trans-world individuals are therefore impossible individuals."
==== Haecceity ====
A haecceity or individual essence is a property that only a single object instantiates. Ordinary properties, if one accepts the existence of universals, can be exemplified by more than one object at a time. Another way to explain a haecceity is to distinguish between suchness and thisness, where thisness has a more demonstrative character.
David Lewis gives the following definition of a haecceitistic difference: “two worlds differ in what they represent de re concerning some individual, but do not differ qualitatively in any way.” (Lewis 1986:221.)
CT does not require distinct worlds for distinct possibilities – "a single world may provide many possibilities, since many possible individuals inhabit it" (Lewis 1986:230). CT thus allows an individual to have several counterparts within a single possible world.
=== Temporal parts ===
Perdurantism is the view that material objects are not wholly present at any single instant of time; instead, only a temporal part of an object is said to be present at each moment. Sometimes, especially in the theory of relativity as it is expressed by Minkowski, a perduring object is identified with the whole path it traces through spacetime. According to Ted Sider, "Temporal parts theory is the claim that time is like space in one particular respect, namely, with respect to parts." Sider combines this temporal-parts picture with a C-relation between temporal parts.
Sider defends a revised way of counting. Instead of counting individual objects, timeline slices or the temporal parts of an object are used. Sider discusses an example of counting road segments instead of roads simpliciter. (Sider 2001:188-192). (Compare with Lewis 1993.) Sider argues that, even if we knew that some material object would go through some fission and split into two, "we would not say" that there are two objects located at the same spacetime region. (Sider 2001:189)
How can one predicate temporal properties of these momentary temporal parts? It is here that the C-relation comes in play. Sider proposed the sentence: "Ted was once a boy." The truth condition of this sentence is that "there exists some person stage x prior to the time of utterance, such that x is a boy, and x bears the temporal counterpart relation to Ted." (Sider 2001:193)
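The quoted truth condition can be illustrated with a small sketch in which person stages and the temporal counterpart relation are given as explicit toy data; the stage objects, the dates, and the counterpart links below are assumptions invented for the example.

```python
# Evaluating "Ted was once a boy" via person stages and a temporal counterpart
# relation, along the lines of the truth condition quoted above. All data are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stage:
    name: str
    year: int
    is_boy: bool

ted_now = Stage("Ted@2001", 2001, False)
ted_child = Stage("Ted@1975", 1975, True)
other_child = Stage("Fred@1975", 1975, True)

temporal_counterpart = {(ted_child, ted_now)}  # earlier stage -> later stage

def was_once_a_boy(subject, stages, utterance_year):
    """True iff some boy-stage before the utterance bears the temporal counterpart relation to the subject."""
    return any(s.year < utterance_year and s.is_boy and (s, subject) in temporal_counterpart
               for s in stages)

print(was_once_a_boy(ted_now, [ted_child, other_child, ted_now], 2001))  # True
```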
== Counterpart theory and the necessity of identity ==
Kripke's three lectures on proper names and identity (1980) raised the issue of how we should interpret statements about identity. Take the statement that the Evening Star is identical to the Morning Star. Both are the planet Venus. This seems to be an a posteriori identity statement: we discover that the names designate the same thing. The traditional view, since Kant, has been that statements or propositions that are necessarily true are a priori. But at the end of the 1960s Saul Kripke and Ruth Barcan Marcus offered proofs of the necessary truth of identity statements. Here is Kripke's version (Kripke 1971):
(1) ∀x □(x = x) [Necessity of self-identity]
(2) ∀x ∀y ((x = y) → ∀P (Px → Py)) [Leibniz's law]
(3) ∀x ∀y ((x = y) → (□(x = x) → □(x = y))) [From (1) and (2)]
(4) ∀x ∀y ((x = y) → □(x = y)) [From (3) and the principle (φ → (⊤ → ψ)) ⇒ (φ → ψ)]
If the proof is correct, the distinction between the a priori/a posteriori and the necessary/contingent becomes less clear. The same applies if identity statements are necessarily true anyway. (For some interesting comments on the proof, see Lowe 2002.) The statement, for instance, that "Water is identical to H2O" is then a statement that is necessarily true but a posteriori. If CT is the correct account of modal properties, we can still keep the intuition that identity statements are contingent and a priori, because counterpart theory understands the modal operator in a different way than standard modal logic.
The relationship between CT and essentialism is of interest. (Essentialism, the necessity of identity, and rigid designators form an important troika of mutual interdependence.) According to David Lewis, claims about an object's essential properties can be true or false depending on context (in chapter 4.5 of Lewis 1986 he argues against constancy, since an absolute conception of essences would be constant over the logical space of possibilities). He writes:
But if I ask how things would be if Saul Kripke had come from no sperm and egg but had been brought by a stork, that makes equally good sense. I create a context that makes my question make sense, and to do so it has to be a context that makes origins not be essential. (Lewis 1986:252.)
== Counterpart theory and rigid designators ==
Kripke interpreted proper names as rigid designators where a rigid designator picks out the same object in every possible world (Kripke 1980). For someone who accepts contingent identity statements the following semantic problem occurs (semantic because we deal with de dicto necessity) (Rea 1997:xxxvii).
Take a scenario that is mentioned in the paradox of coincidence. A statue (call it "Statue") is made by melding two pieces of clay together. Those two pieces are called "Clay". Statue and Clay seem to be identical: they exist at the same time, and we could incinerate them at the same time. The following seems true:
(7) Necessarily, if Statue exists then Statue is identical to Statue.
But,
(8) Necessarily, if Statue exists then Statue is identical to Clay
is false, because it seems possible that Statue could have been made out of two different pieces of clay, and thus its identity to Clay is not necessary.
Counterpart theory, qua-identity, and individual concepts can offer solutions to this problem.
=== Arguments for inconstancy ===
Ted Sider gives roughly the following argument (Sider 2001:223). There is inconstancy if a proposition about the essence of an object is true in one context and false in another. The C-relation is a similarity relation, and what is similar in one dimension need not be similar in another. Therefore, the C-relation can vary with context in the same way and so express inconstant judgements about essences.
David Lewis offers another argument. The paradox of coincidence can be solved if we accept inconstancy. We can then say that it is possible for a dishpan and a piece of plastic to coincide, in some context. That context can then be described using CT.
Sider makes the point that David Lewis feels he was forced to defend CT, due to modal realism. Sider uses CT as a solution to the paradox of material coincidence.
=== Counterpart theory compared to qua-theory and individual concepts ===
We assume that contingent identity is real. Then it is informative to compare CT with other theories about how to handle de re representations.
Qua-theory
Kit Fine (1982) and Alan Gibbard (1975) (according to Rea 1997) offer defences of qua-theory. According to qua-theory we can talk about some of an object's modal properties. The theory is handy if we don't think it is possible for Socrates to be identical with a piece of bread or a stone. Socrates qua person is essentially a person.
Individual concepts
According to Rudolf Carnap, in modal contexts variables refer to individual concepts instead of individuals. An individual concept is then defined as a function of individuals in different possible worlds. Basically, individual concepts deliver semantic objects or abstract functions instead of real concrete entities as in CT.
=== Counterpart theory and epistemic possibility ===
Kripke accepts the necessity of identity but agrees that it still seems possible that Phosphorus (the Morning Star) is not identical to Hesperus (the Evening Star). For all we know, it could be that they are different. He says:
What, then, does the intuition that the table might have turned out to have been made of ice or of anything else, that it might even have turned out not to be made of molecules, amount to? I think that it means simply that there might have been a table looking and feeling just like this one and placed in this very position in the room, which was in fact made of ice. In other words, I (or some conscious being) could have been qualitatively in the same epistemic situation that in fact obtains, I could have the same sensory evidence that I in fact have, about a table which was made of ice. The situation is thus akin to the one which inspired the counterpart theorists; when I speak of the possibility of the table turning out to be made of various things, I am speaking loosely. This table itself could not have had an origin different from the one it in fact had, but in a situation qualitatively identical to this one with respect to all evidence I had in advance, the room could have contained a table made of ice in place of this one. Something like counterpart theory is thus applicable to the situation, but it applies only because we are not interested in what might not be true of a table given certain evidence. It is precisely because it is not true that this table might have been made of ice from the Thames that we must turn here to qualitative descriptions and counterparts. To apply these notions to genuine de re modalities is, from the present standpoint, perverse. (Kripke 1980:142.)
So to explain how this illusion of contingency is possible, according to Kripke, something like CT is an alternative. Therefore, CT forms an important part of our theory of the knowledge of modal intuitions. (For doubts about this strategy, see Della Rocca 2002; for more about the knowledge of modal statements, see Gendler and Hawthorne 2002.)
== Arguments against counterpart theory ==
The most famous is Kripke's Humphrey objection. Because a counterpart is never identical to something in another possible world Kripke raised the following objection against CT:
Thus if we say "Humphrey might have won the election" (if only he had done such-and-such), we are not talking about something that might have happened to Humphrey but to someone else, a "counterpart". Probably, however, Humphrey could not care less whether someone else, no matter how much resembling him, would have been victorious in another possible world. Thus, Lewis's view seems to me even more bizarre than the usual notions of transworld identification that it replaces. (Kripke 1980:45 note 13.)
One way to spell out the meaning of Kripke's claim is by the following imaginary dialogue: (Based on Sider MS)
Against: Kripke means that Humphrey himself doesn’t have the property of possibly winning the election, because it is only the counterpart that wins.
For: The property of possibly winning the election just is the property of having a counterpart who wins the election.
Against: But they can't be the same property, because Humphrey has different attitudes to them: he cares about himself having the property of possibly winning the election, but he doesn't care about his counterpart winning the election.
For: But properties don't work the same way as objects, our attitudes towards them can be different, because we have different descriptions – they are still the same properties. That lesson is taught by the paradox of analysis.
CT is inadequate if it can't translate all modal sentences or intuitions. Fred Feldman mentioned two sentences (Feldman 1971):
(1) I could have been quite unlike what I in fact am.
(2) I could have been more like what you in fact are than like what I in fact am. At the same time, you could have been more like what I in fact am than what you in fact are.
== See also ==
Identity (philosophy)
Many-worlds interpretation
== Notes ==
== References ==
Balashov, Yuri, 2007, "Defining endurance", Philosophical Studies 133: 143–149.
Carnap, Rudolf, 1967, The Logical Structure of the World, trans. Rolf A. George, Berkeley: University of California Press.
Della Rocca, Michael, 2002, "Essentialism versus Essentialism", in Gendler and Hawthorne 2002.
Feldman, Fred, 1971, "Counterparts", Journal of Philosophy 68, pp. 406–409.
Fine, Kit, 1982, "Acts, Events and Things", in W. Leinfellner, E. Kraemer, and J. Schank (eds.), Proceedings of the 6th International Wittgenstein Symposium, pp. 97–105, Wien: Hölder-Pichler-Tempsky.
Gendler, Tamar Szabó and Hawthorne, John, 2002, Conceivability and Possibility, Oxford: Oxford University Press.
Gibbard, Alan, 1975, "Contingent Identity", Journal of Philosophical Logic 4, pp. 197–221 or in Rea 1997.
Hawley, Katherine, 2001, How Things Persist, Oxford: Clarendon Press.
Kripke, Saul, 1971, "Identity and Necessity", in Milton K. Munitz, Identity and Individuation, pp. 135–64, New York: New York University Press.
Kripke, Saul, 1980, Naming and Necessity, Cambridge: Harvard University Press.
Lewis, David, 1968, "Counterpart Theory and Quantified Modal Logic", Journal of Philosophy 65, pp. 113–126.
Lewis, David, 1971, "Counterparts of Persons and Their Bodies", Journal of Philosophy 68, pp. 203–211, and in Philosophical Papers I.
Lewis, David, 1983, "Survival and Identity", in Amelie O. Rorty (ed.), The Identities of Persons (1976, University of California Press), and in Philosophical Papers I, Oxford: Oxford University Press.
Lewis, David, 1986, On the Plurality of Worlds, Blackwell.
Lewis, David, 1993, "Many, But Almost One", in Keith Campbell, John Bacon and Lloyd Reinhart (eds.), Ontology, Causality and Mind: Essays in Honour of D. M. Armstrong, Cambridge: Cambridge University Press.
Lowe, E. J., 2002, A Survey of Metaphysics, Oxford: Oxford University Press.
Mackie, Penelope, 2006, How Things Might Have Been – Individuals, Kinds and essential Properties, Oxford: Clarendon Press.
McDaniel, Kris, 2004, "Modal Realism with Overlap", The Australasian Journal of Philosophy vol. 82, No. 1, pp. 137–152.
Merricks, Trenton, 2003, "The End of Counterpart Theory," Journal of Philosophy 100: 521-549.
Rea, Michael, ed., 1997, Material Constitution – A reader, Rowman & Littlefield Publishers.
Sider, Ted, 2001, Four-dimensionalism. Oxford: Oxford University Press.
Sider, Ted, 2006, Beyond the Humphrey objection.
Perry, John, ed., 1975, Personal Identity, Berkeley: University of California Press
van Inwagen, Peter, 1985, "Plantinga on Trans-World Identity", in Alvin Plantinga: A Profile, ed. James Tomberlin & Peter van Inwagen, Reidel.
== External links ==
Counterpart theory at PhilPapers
Zalta, Edward N. (ed.). "Possible objects". Stanford Encyclopedia of Philosophy. | Wikipedia/Counterpart_theory |
The Doctor of Metaphysics, also called a Metaphysical Science Doctorate, is a purported academic degree. While mainstream universities teach metaphysics as a branch of philosophy, the Doctor of Metaphysics degree is offered by a number of unaccredited universities and degree mills as a religious-based degree. It is not recognized by the United States Department of Education as a legitimate degree, and is not offered by any accredited institution.
In the United States, Doctor of Metaphysics degrees are offered by purported religious institutions of learning, as well as unrecognised churches and colleges of metaphysics. In 1938 the United States Department of the Interior published a book listing the "Doctor of Metaphysics" degree in a section written by Walton C. John, titled "Counterfeit Degrees".
A 1960 American Psychologist article titled, "Mail-order training in psychotherapy," warned against unaccredited schools purporting to offer "training in a variety of psychological and metapsychological methods" and awarding a Doctor of Metaphysics degree.
In the field of social work there are counselors who claim the title "Doctor of Metaphysics". In 2019, the Journal of Social Work Education published "Predatory Doctoral Programs: Warnings for Social Workers". The article warned that the majority of doctoral programs in metaphysics are little more than diploma mills which have few requirements other than payment.
== See also ==
Degrees offered by unaccredited institutions of higher education
History of higher education in the United States
List of fields of doctoral studies in the United States
List of unaccredited institutions of higher education
== References == | Wikipedia/Doctor_of_Metaphysics |
In metaphysics and philosophy of language, the correspondence theory of truth states that the truth or falsity of a statement is determined only by how it relates to the world and whether it accurately describes (i.e., corresponds with) that world.
Correspondence theories claim that true beliefs and true statements correspond to the actual state of affairs. This type of theory attempts to posit a relationship between thoughts or statements on one hand, and things or facts on the other.
== History ==
Correspondence theory is a traditional model which goes back at least to some of the ancient Greek philosophers such as Plato and Aristotle. This class of theories holds that the truth or the falsity of a representation is determined solely by how it relates to a reality; that is, by whether it accurately describes that reality. As Aristotle claims in his Metaphysics: "To say that that which is, is not, and that which is not, is, is a falsehood; therefore, to say that which is, is, and that which is not, is not, is true".
A classic example of correspondence theory is the statement by the medieval philosopher and theologian Thomas Aquinas: "Veritas est adaequatio rei et intellectus" ("Truth is the adequation of things and intellect"), which Aquinas attributed to the ninth-century Neoplatonist Isaac Israeli.
Correspondence theory was either explicitly or implicitly embraced by most of the early modern thinkers, including René Descartes, Baruch Spinoza, John Locke, Gottfried Wilhelm Leibniz, David Hume, and Immanuel Kant. (However, Spinoza and Kant have also been [mis]interpreted as defenders of the coherence theory of truth.) Correspondence theory has also been attributed to Thomas Reid.
In late modern philosophy, Friedrich Wilhelm Joseph Schelling espoused the correspondence theory. According to Bhikhu Parekh, Karl Marx also subscribed to a version of the correspondence theory.
In contemporary Continental philosophy, Edmund Husserl defended the correspondence theory. In contemporary analytic philosophy, Bertrand Russell, Ludwig Wittgenstein (at least in his early period), J. L. Austin, and Karl Popper defended the correspondence theory.
== Varieties ==
=== Correspondence as congruence ===
Bertrand Russell and Ludwig Wittgenstein have in different ways suggested that a statement, to be true, must have some kind of structural isomorphism with the state of affairs in the world that makes it true. For example, "A cat is on a mat" is true if, and only if, there is in the world a cat and a mat and the cat is related to the mat by virtue of being on it. If any of the three pieces (the cat, the mat, and the relation between them which correspond respectively to the subject, object, and verb of the statement) is missing, the statement is false. Some sentences pose difficulties for this model, however. As just one example, adjectives such as "counterfeit", "alleged", or "false" do not have the usual simple meaning of restricting the meaning of the noun they modify: a "tall lawyer" is a kind of lawyer, but an "alleged lawyer" may not be.
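The structural-matching idea can be pictured with a small programmatic sketch. The following Python toy model is only an illustration of congruence, not a rendering of Russell's or Wittgenstein's actual logical apparatus; the Fact type and the example world are invented for the purpose.

```python
# Toy model of correspondence-as-congruence: a "world" is a set of atomic
# facts, and a statement is true iff the world contains a fact whose parts
# mirror the statement's subject, relation, and object.
from typing import NamedTuple

class Fact(NamedTuple):
    subject: str
    relation: str
    obj: str

world = {
    Fact("cat", "on", "mat"),
    Fact("book", "on", "table"),
}

def is_true(statement: Fact, facts: set) -> bool:
    """The statement corresponds when every part matches a fact in the world."""
    return statement in facts

print(is_true(Fact("cat", "on", "mat"), world))  # True
print(is_true(Fact("dog", "on", "mat"), world))  # False: no such fact obtains
```

If any of the three pieces is absent from the world, the lookup fails, mirroring the way the statement is false when the cat, the mat, or the relation between them is missing.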
=== Correspondence as correlation ===
J. L. Austin theorized that there need not be any structural parallelism between a true statement and the state of affairs that makes it true. It is only necessary that the semantics of the language in which the statement is expressed are such as to correlate whole-for-whole the statement with the state of affairs. A false statement, for Austin, is one that is correlated by the language to a state of affairs that does not exist.
== Relation to ontology ==
Historically, most advocates of correspondence theories have been metaphysical realists; that is, they believe that there is a world external to the minds of all humans. This is in contrast to metaphysical idealists, who hold that everything that exists is, in the end, just an idea in some mind, and to conceptualists, who hold that general features exist only as concepts in the mind rather than as substantial metaphysical entities independent of the individual things of which they are predicated. However, it is not strictly necessary that a correspondence theory be married to metaphysical realism. It is possible to hold, for example, that the facts of the world determine which statements are true and to also hold that the world (and its facts) is but a collection of ideas in the mind of some supreme being.
== Objections ==
One attack on the theory claims that the correspondence theory succeeds in its appeal to the real world only in so far as the real world is reachable by us.
The direct realist believes that we directly know objects as they are. Such a person can wholeheartedly adopt a correspondence theory of truth.
The rigorous idealist believes that there are no real, mind-independent objects. For the idealist, the correspondence theory appeals to imaginary, undefined entities, and so is incoherent.
Other positions hold that we have some type of awareness, perception, etc. of real-world objects which in some way falls short of direct knowledge of them. But such an indirect awareness or perception is itself an idea in one's mind, so that the correspondence theory of truth reduces to a correspondence between ideas about truth and ideas of the world, whereupon it becomes a coherence theory of truth.
=== Vagueness or circularity ===
Either the defender of the correspondence theory of truth offers some accompanying theory of the world, or they do not.
If no theory of the world is offered, the argument is so vague as to be useless or even unintelligible: truth would then be supposed to be correspondence to some undefined, unknown or ineffable world.
It would in this case be difficult to see how a candidate truth could be more certain than the world against which we are to judge its degree of correspondence.
On the other hand, as soon as the defender of the correspondence theory of truth offers a theory of the world, they are operating in some specific ontological or scientific theory, which stands in need of justification.
But, the only way to support the truth of this world-theory that is allowed by the correspondence theory of truth, is correspondence to the real world. Hence the argument is inescapably circular.
== See also ==
== Notes ==
== References ==
Hanna, Patricia and Harrison, Bernard (2004). Word and World: Practices and the Foundation of Language, Cambridge University Press.
Kirkham, Richard L. (1992), Theories of Truth: A Critical Introduction, MIT Press, Cambridge, MA.
Bertrand Russell (1912), The Problems of Philosophy, Oxford University Press, Oxford.
Michael Williams (1977), Groundless Belief, Basil Blackwell, Oxford.
== External links ==
The Correspondence Theory of Truth (Stanford Encyclopedia of Philosophy) | Wikipedia/Correspondence_theory_of_truth |
"Why is there anything at all?" or "Why is there something rather than nothing?" is a question about the reason for basic existence which has been raised or commented on by a range of philosophers and physicists, including Gottfried Wilhelm Leibniz, Ludwig Wittgenstein, and Martin Heidegger, who called it "the fundamental question of metaphysics".
== Introductory points ==
=== There is something ===
No experiment could support the hypothesis "There is nothing" because any observation obviously implies the existence of an observer.
=== Defining the question ===
The question is usually taken as concerning practical causality (rather than a moral reason for), and posed totally and comprehensively, rather than concerning the existence of anything specific, such as the universe or multiverse, the Big Bang, God, mathematical and physical laws, time or consciousness. It can be seen as an open metaphysical question, rather than a search for an exact answer.
=== On timescales ===
The question does not include the timing of when anything came to exist.
Some have suggested the possibility of an infinite regress: if something cannot come from nothing, then there must always have been something that caused each preceding effect, with this causal chain (whether deterministic or probabilistic) extending infinitely back in time.
== Arguments against attempting to answer the question ==
=== The question is outside our experience ===
Philosopher Stephen Law has said the question may not need answering, as it is attempting to answer a question that is outside a spacetime setting while being within a spacetime setting. He compares the question to asking "what is north of the North Pole?"
=== Causation may not apply ===
The ancient Greek philosopher Aristotle argued that everything in the universe must have a cause, culminating in an ultimate uncaused cause. (See Four causes.)
However, David Hume argued that a cause may not be necessary in the case of the formation of the universe. Whilst we expect that everything has a cause because of our experience of the necessity of causes, the formation of the universe is outside our experience and may be subject to different rules. Kant supports and extends this argument.
=== We may only ask the question because of the nature of our minds ===
Kant argues that we may be led to ask some questions by the nature of our minds, rather than because the questions themselves are valid.
=== The brute fact approach ===
In philosophy, the brute fact approach proposes that some facts cannot be explained in terms of a deeper, more "fundamental" fact.
It is in opposition to the principle of sufficient reason approach.
On this question, Bertrand Russell took a brute fact position when he said, "I should say that the universe is just there, and that's all." Sean Carroll similarly concluded that "any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts; the universe simply is, without ultimate cause or explanation."
=== The question may be impossible to answer ===
Roy Sorensen has discussed that the question may have an impossible explanatory demand, if there are no existential premises.
== Explanations ==
=== Something may exist necessarily ===
Philosopher Brian Leftow has argued that the question cannot have a causal explanation (as any cause must itself have a cause) or a contingent explanation (as the factors giving the contingency must pre-exist), and that if there is an answer, it must be something that exists necessarily (i.e., something that just exists, rather than is caused).
==== Natural laws may necessarily exist, and may enable the emergence of matter ====
Philosopher of physics Dean Rickles has argued that numbers and mathematics (or their underlying laws) may necessarily exist. If we accept that mathematics is an extension of logic, as philosophers such as Bertrand Russell and Alfred North Whitehead did, then mathematical structures like numbers and shapes must be necessarily true propositions in all possible worlds.
Physicists, including popular physicists such as Stephen Hawking and Lawrence Krauss, have offered explanations (of at least the first particle coming into existence aspect of cosmogony) that rely on quantum mechanics, saying that in a quantum vacuum state, virtual particles and spacetime bubbles will spontaneously come into existence. The actual mathematical demonstration of quantum fluctuations of the hypothetical false vacuum state spontaneously causing an expanding bubble of true vacuum was done by quantum cosmologists in 2014 at the Chinese Academy of Sciences.
==== A necessary being bearing the reason for its existence within itself ====
Gottfried Wilhelm Leibniz identified God as the necessary sufficient reason for everything that exists (see: Cosmological argument). He wrote: "Why is there something rather than nothing? The sufficient reason... is found in a substance which... is a necessary being bearing the reason for its existence within itself."
=== A state of nothing may be impossible ===
The pre-Socratic philosopher Parmenides was one of the first Western thinkers to question the possibility of nothing, and commentary on this has continued.
=== A state of nothing may be unstable ===
Nobel Laureate Frank Wilczek is credited with the aphorism that "nothing is unstable." Physicist Sean Carroll argues that this accounts merely for the existence of matter, but not the existence of quantum states, space-time, or the universe as a whole.
=== It is possible for something to come from nothing ===
Some cosmologists believe it possible that something (e.g., the universe) may come to exist spontaneously from nothing. Some mathematical models support this idea, and it has become an increasingly common explanation within the scientific community for why the Big Bang occurred.
=== Other explanations ===
Robert Nozick proposed some possible explanations.
Self-Subsumption: "a law that applies to itself, and hence explains its own truth."
The Nothingness Force: "the nothingness force acts on itself, it sucks nothingness into nothingness and produces something..."
Mariusz Stanowski explained: "There must be both something and nothing, because separately neither can be distinguished".
== Humour ==
Philosophical wit Sidney Morgenbesser answered the question with an apothegm: "If there were nothing, you'd still be complaining!", or "Even if there was nothing, you still wouldn't be satisfied!"
== See also ==
== References ==
== Further reading ==
Holt, Jim (2012). Why does the world exist?: an existential detective story (1st ed.). New York: Liveright Pub. Corp. ISBN 978-0871404091.
Kuhn, Robert Lawrence (2013). Leslie, John (ed.). The mystery of existence : why is there anything at all?. Chichester [England]: John Wiley & Sons. ISBN 978-0-470-67355-3.
Sorensen, Roy (2023). "Nothingness". In Zalta, Edward N.; Nodelman, Uri (eds.). Stanford Encyclopedia of Philosophy (Spring 2023 ed.). Metaphysics Research Lab, Stanford University.
== External links ==
Why Anything at All?, Closer to Truth
Why is there something rather than nothing?, Stanford Encyclopedia of Philosophy | Wikipedia/Fundamental_question_of_metaphysics |
Metaphysics is the branch of philosophy that examines the basic structure of reality. It is traditionally seen as the study of mind-independent features of the world, but some theorists view it as an inquiry into the conceptual framework of human understanding. Some philosophers, including Aristotle, designate metaphysics as first philosophy to suggest that it is more fundamental than other forms of philosophical inquiry.
Metaphysics encompasses a wide range of general and abstract topics. It investigates the nature of existence, the features all entities have in common, and their division into categories of being. An influential division is between particulars and universals. Particulars are individual unique entities, like a specific apple. Universals are general features that different particulars have in common, like the color red. Modal metaphysics examines what it means for something to be possible or necessary. Metaphysicians also explore the concepts of space, time, and change, and their connection to causality and the laws of nature. Other topics include how mind and matter are related, whether everything in the world is predetermined, and whether there is free will.
Metaphysicians use various methods to conduct their inquiry. Traditionally, they rely on rational intuitions and abstract reasoning but have recently included empirical approaches associated with scientific theories. Due to the abstract nature of its topic, metaphysics has received criticisms questioning the reliability of its methods and the meaningfulness of its theories. Metaphysics is relevant to many fields of inquiry that often implicitly rely on metaphysical concepts and assumptions.
The roots of metaphysics lie in antiquity with speculations about the nature and origin of the universe, like those found in the Upanishads in ancient India, Daoism in ancient China, and pre-Socratic philosophy in ancient Greece. During the subsequent medieval period in the West, discussions about the nature of universals were influenced by the philosophies of Plato and Aristotle. The modern period saw the emergence of various comprehensive systems of metaphysics, many of which embraced idealism. In the 20th century, traditional metaphysics in general and idealism in particular faced various criticisms, which prompted new approaches to metaphysical inquiry.
== Definition ==
Metaphysics is the study of the most general features of reality, including existence, objects and their properties, possibility and necessity, space and time, change, causation, and the relation between matter and mind. It is one of the oldest branches of philosophy.
The precise nature of metaphysics is disputed and its characterization has changed in the course of history. Some approaches see metaphysics as a unified field and give a wide-sweeping definition by understanding it as the study of "fundamental questions about the nature of reality" or as an inquiry into the essences of things. Another approach doubts that the different areas of metaphysics share a set of underlying features and provides instead a fine-grained characterization by listing all the main topics investigated by metaphysicians. Some definitions are descriptive by providing an account of what metaphysicians do while others are normative and prescribe what metaphysicians ought to do.
Two historically influential definitions in ancient and medieval philosophy understand metaphysics as the science of the first causes and as the study of being qua being, that is, the topic of what all beings have in common and to what fundamental categories they belong. In the modern period, the scope of metaphysics expanded to include topics such as the distinction between mind and body and free will. Some philosophers follow Aristotle in describing metaphysics as "first philosophy", suggesting that it is the most basic inquiry upon which all other branches of philosophy depend in some way.
Metaphysics is traditionally understood as a study of mind-independent features of reality. Starting with Immanuel Kant's critical philosophy, an alternative conception gained prominence that focuses on conceptual schemes rather than external reality. Kant distinguishes transcendent metaphysics, which aims to describe the objective features of reality beyond sense experience, from the critical perspective on metaphysics, which outlines the aspects and principles underlying all human thought and experience. Philosopher P. F. Strawson further explored the role of conceptual schemes, contrasting descriptive metaphysics, which articulates conceptual schemes commonly used to understand the world, with revisionary metaphysics, which aims to produce better conceptual schemes.
Metaphysics differs from the individual sciences by studying the most general and abstract aspects of reality. The individual sciences, by contrast, examine more specific and concrete features and restrict themselves to certain classes of entities, such as the focus on physical things in physics, living entities in biology, and cultures in anthropology. It is disputed to what extent this contrast is a strict dichotomy rather than a gradual continuum.
=== Etymology ===
The word metaphysics has its origin in the ancient Greek words metá (μετά, meaning 'after', 'above', and 'beyond') and phusiká (φυσικά), as a short form of ta metá ta phusiká, meaning 'what comes after the physics'. This is often interpreted to mean that metaphysics discusses topics that, due to their generality and comprehensiveness, lie beyond the realm of physics and its focus on empirical observation. Metaphysics may have received its name by a historical accident when Aristotle's book on this subject was published. Aristotle did not use the term metaphysics but his editor (likely Andronicus of Rhodes) may have coined it for its title to indicate that this book should be studied after Aristotle's book on physics: literally 'after physics'. The term entered the English language through the Latin word metaphysica.
=== Branches ===
The nature of metaphysics can also be characterized in relation to its main branches. An influential division from early modern philosophy distinguishes between general and special or specific metaphysics. General metaphysics, also called ontology, takes the widest perspective and studies the most fundamental aspects of being. It investigates the features that all entities share and how entities can be divided into different categories. Categories are the most general kinds, such as substance, property, relation, and fact. Ontologists research which categories there are, how they depend on one another, and how they form a system of categories that provides a comprehensive classification of all entities.
Special metaphysics considers being from more narrow perspectives and is divided into subdisciplines based on the perspective they take. Metaphysical cosmology examines changeable things and investigates how they are connected to form a world as a totality extending through space and time. Rational psychology focuses on metaphysical foundations and problems concerning the mind, such as its relation to matter and the freedom of the will. Natural theology studies the divine and its role as the first cause. The scope of special metaphysics overlaps with other philosophical disciplines, making it unclear whether a topic belongs to it or to areas like philosophy of mind and theology.
Starting in the second half of the 20th century, applied metaphysics was conceived as the area of applied philosophy examining the implications and uses of metaphysics, both within philosophy and other fields of inquiry. In areas like ethics and philosophy of religion, it addresses topics like the ontological foundations of moral claims and religious doctrines. Beyond philosophy, its applications include the use of ontologies in artificial intelligence, economics, and sociology to classify entities. In psychiatry and medicine, it examines the metaphysical status of diseases.
Meta-metaphysics is the metatheory of metaphysics and investigates the nature and methods of metaphysics. It examines how metaphysics differs from other philosophical and scientific disciplines and assesses its relevance to them. Even though discussions of these topics have a long history in metaphysics, meta-metaphysics has only recently developed into a systematic field of inquiry.
== Topics ==
=== Existence and categories of being ===
Metaphysicians often regard existence or being as one of the most basic and general concepts. To exist means to be part of reality, distinguishing real entities from imaginary ones. According to a traditionally influential view, existence is a property of properties: if an entity exists then its properties are instantiated. A different position states that existence is a property of individuals, meaning that it is similar to other properties, such as shape or size. It is controversial whether all entities have this property. According to philosopher Alexius Meinong, there are nonexistent objects, including merely possible objects like Santa Claus and Pegasus. A related question is whether existence is the same for all entities or whether there are different modes or degrees of existence. For instance, Plato held that Platonic forms, which are perfect and immutable ideas, have a higher degree of existence than matter, which can only imperfectly reflect Platonic forms.
Another key concern in metaphysics is the division of entities into distinct groups based on underlying features they share. Theories of categories provide a system of the most fundamental kinds or the highest genera of being by establishing a comprehensive inventory of everything. One of the earliest theories of categories was proposed by Aristotle, who outlined a system of 10 categories. He argued that substances (e.g., man and horse) are the most important category since all other categories like quantity (e.g., four), quality (e.g., white), and place (e.g., in Athens) are said of substances and depend on them. Kant understood categories as fundamental principles underlying human understanding and developed a system of 12 categories, divided into the four classes: quantity, quality, relation, and modality. More recent theories of categories were proposed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Many philosophers rely on the contrast between concrete and abstract objects. According to a common view, concrete objects, like rocks, trees, and human beings, exist in space and time, undergo changes, and impact each other as cause and effect. They contrast with abstract objects, like numbers and sets, which do not exist in space and time, are immutable, and do not engage in causal relations.
=== Particulars ===
Particulars are individual entities and include both concrete objects, like Aristotle, the Eiffel Tower, or a specific apple, and abstract objects, like the number 2 or a specific set in mathematics. They are unique, non-repeatable entities and contrast with universals, like the color red, which can at the same time exist in several places and characterize several particulars. A widely held view is that particulars instantiate universals but are not themselves instantiated by something else, meaning that they exist in themselves while universals exist in something else. Substratum theory, associated with John Locke's philosophy, analyzes each particular as a substratum, also called bare particular, together with various properties. The substratum confers individuality to the particular while the properties express its qualitative features or what it is like. This approach is rejected by bundle theorists. Inspired by David Hume's philosophy, they state that particulars are only bundles of properties without an underlying substratum. Some bundle theorists include in the bundle an individual essence, called haecceity following scholastic terminology, to ensure that each bundle is unique. Another proposal for concrete particulars is that they are individuated by their space-time location.
Concrete particulars encountered in everyday life, like rocks, tables, and organisms, are complex entities composed of various parts. For example, a table consists of a tabletop and legs, each of which is itself made up of countless particles. The relation between parts and wholes is studied by mereology. The problem of the many is a philosophical question about the conditions under which several individual things compose a larger whole. For example, a cloud comprises many droplets without a clear boundary, raising the question of which droplets form part of the cloud. According to mereological universalists, every collection of entities forms a whole. This means that what seems to be a single cloud is an overlay of countless clouds, one for each cloud-like collection of water droplets. Mereological moderatists hold that certain conditions must be met for a group of entities to compose a whole, for example, that the entities touch one another. Mereological nihilists reject the idea of wholes altogether, claiming that there are no clouds or tables but only particles that are arranged cloud-wise or table-wise. A related mereological problem is whether there are simple entities that have no parts, as atomists claim, or whether everything can be endlessly subdivided into smaller parts, as continuum theorists contend.
=== Universals ===
Universals are general entities, encompassing both properties and relations, that express what particulars are like and how they resemble one another. They are repeatable, meaning that they are not limited to a unique existent but can be instantiated by different particulars at the same time. For example, the particulars Nelson Mandela and Mahatma Gandhi instantiate the universal humanity, similar to how a strawberry and a ruby instantiate the universal red.
A topic discussed since ancient philosophy, the problem of universals consists in the challenge of characterizing the ontological status of universals. Realists argue that universals are real, mind-independent entities that exist in addition to particulars. According to Platonic realists, universals exist independently of particulars, which implies that the universal red would continue to exist even if there were no red things. A more moderate form of realism, inspired by Aristotle, states that universals depend on particulars, meaning that they are only real if they are instantiated. Nominalists reject the idea that universals exist in either form. For them, the world is composed exclusively of particulars. Conceptualists offer an intermediate position, stating that universals exist, but only as concepts in the mind used to order experience by classifying entities.
Natural and social kinds are often understood as special types of universals. Entities belonging to the same natural kind share certain fundamental features characteristic of the structure of the natural world. In this regard, natural kinds are not an artificially constructed classification but are discovered, usually by the natural sciences, and include kinds like electrons, H2O, and tigers. Scientific realists and anti-realists disagree about whether natural kinds exist. Social kinds, like money and baseball, are studied by social metaphysics and characterized as useful social constructions that, while not purely fictional, do not reflect the fundamental structure of mind-independent reality.
=== Possibility and necessity ===
The concepts of possibility and necessity convey what can or must be the case, expressed in modal statements like "it is possible to find a cure for cancer" and "it is necessary that two plus two equals four". Modal metaphysics studies metaphysical problems surrounding possibility and necessity, for instance, why some modal statements are true while others are false. Some metaphysicians hold that modality is a fundamental aspect of reality, meaning that besides facts about what is the case, there are additional facts about what could or must be the case. A different view argues that modal truths are not about an independent aspect of reality but can be reduced to non-modal characteristics, for example, to facts about what properties or linguistic descriptions are compatible with each other or to fictional statements.
Borrowing a term from German philosopher Gottfried Wilhelm Leibniz's theodicy, many metaphysicians use the concept of possible worlds to analyze the meaning and ontological ramifications of modal statements. A possible world is a complete and consistent way the totality of things could have been. For example, the dinosaurs were wiped out in the actual world but there are possible worlds in which they are still alive. According to possible world semantics, a statement is possibly true if it is true in at least one possible world, whereas it is necessarily true if it is true in all possible worlds. Modal realists argue that possible worlds exist as concrete entities in the same sense as the actual world, with the main difference being that the actual world is the world we live in while other possible worlds are inhabited by counterparts. This view is controversial and various alternatives have been suggested, for example, that possible worlds only exist as abstract objects or are similar to stories told in works of fiction.
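The evaluation rule of possible world semantics can be sketched in a few lines of Python. This is a toy finite model assuming each possible world is just a set of true atomic propositions; the worlds and propositions below are invented for illustration.

```python
# Possible-world semantics in miniature: "possibly p" is true if p holds in
# at least one world, "necessarily p" if p holds in every world.
worlds = [
    {"dinosaurs_extinct", "humans_exist"},   # the actual world
    {"dinosaurs_alive", "humans_exist"},     # a world where dinosaurs survived
    {"dinosaurs_alive"},                     # a world without humans
]

def possibly(p: str) -> bool:
    return any(p in world for world in worlds)

def necessarily(p: str) -> bool:
    return all(p in world for world in worlds)

print(possibly("dinosaurs_alive"))   # True: holds in at least one world
print(necessarily("humans_exist"))   # False: fails in the third world
```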
=== Space, time, and change ===
Space and time are dimensions that entities occupy. Spacetime realists state that space and time are fundamental aspects of reality and exist independently of the human mind. Spacetime idealists, by contrast, hold that space and time are constructs of the human mind, created to organize and make sense of reality. Spacetime absolutism or substantivalism understands spacetime as a distinct object, with some metaphysicians conceptualizing it as a container that holds all other entities within it. Spacetime relationism sees spacetime not as an object but as a network of relations between objects, such as the spatial relation of being next to and the temporal relation of coming before.
In the metaphysics of time, an important contrast is between the A-series and the B-series. According to the A-series theory, the flow of time is real, meaning that events are categorized into the past, present, and future. The present continually moves forward in time and events that are in the present now will eventually change their status and lie in the past. From the perspective of the B-series theory, time is static, and events are ordered by the temporal relations earlier-than and later-than without any essential difference between past, present, and future. Eternalism holds that past, present, and future are equally real, whereas presentism asserts that only entities in the present exist.
Material objects persist through time and change in the process, like a tree that grows or loses leaves. The main ways of conceptualizing persistence through time are endurantism and perdurantism. According to endurantism, material objects are three-dimensional entities that are wholly present at each moment. As they change, they gain or lose properties but otherwise remain the same. Perdurantists see material objects as four-dimensional entities that extend through time and are made up of different temporal parts. At each moment, only one part of the object is present, not the object as a whole. Change means that an earlier part is qualitatively different from a later part. For example, when a banana ripens, there is an unripe part followed by a ripe part.
=== Causality ===
Causality is the relation between cause and effect whereby one entity produces or alters another entity. For instance, if a person bumps a glass and spills its contents then the bump is the cause and the spill is the effect. Besides the single-case causation between particulars in this example, there is also general-case causation expressed in statements such as "smoking causes cancer". The term agent causation is used when people and their actions cause something. Causation is usually interpreted deterministically, meaning that a cause always brings about its effect. However, some philosophers such as G. E. M. Anscombe have provided counterexamples to this idea. Such counterexamples have inspired the development of probabilistic theories, which claim that the cause merely increases the probability that the effect occurs. This view can explain that smoking causes cancer even though this does not happen in every single case.
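In a common formulation from the literature on probabilistic causation (a standard schema, not a formula given in this article), a factor C raises the probability of an effect E when

\[ P(E \mid C) > P(E \mid \neg C), \]

so smoking counts as a cause of cancer if cancer is more probable among smokers than among non-smokers, even though many individual smokers never develop cancer.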
The regularity theory of causation, inspired by David Hume's philosophy, states that causation is nothing but a constant conjunction in which the mind apprehends that one phenomenon, like putting one's hand in a fire, is always followed by another phenomenon, like a feeling of pain. According to nomic regularity theories, regularities manifest as laws of nature studied by science. Counterfactual theories focus not on regularities but on how effects depend on their causes. They state that effects owe their existence to the cause and would not occur without them. According to primitivism, causation is a basic concept that cannot be analyzed in terms of non-causal concepts, such as regularities or dependence relations. One form of primitivism identifies causal powers inherent in entities as the underlying mechanism. Eliminativists reject the above theories by holding that there is no causation.
=== Mind and free will ===
Mind encompasses phenomena like thinking, perceiving, feeling, and desiring as well as the underlying faculties responsible for these phenomena. The mind–body problem is the challenge of clarifying the relation between physical and mental phenomena. According to Cartesian dualism, minds and bodies are distinct substances. They causally interact with each other in various ways but can, at least in principle, exist on their own. This view is rejected by monists, who argue that reality is made up of only one kind. According to metaphysical idealism, everything is mental or dependent on the mind, including physical objects, which may be understood as ideas or perceptions of conscious minds. Materialists, by contrast, state that all reality is at its core material. Some deny that mind exists but the more common approach is to explain mind in terms of certain aspects of matter, such as brain states, behavioral dispositions, or functional roles. Neutral monists argue that reality is fundamentally neither material nor mental and suggest that matter and mind are both derivative phenomena. A key aspect of the mind–body problem is the hard problem of consciousness or how to explain that physical systems like brains can produce phenomenal consciousness.
The status of free will as the ability of a person to choose their actions is a central aspect of the mind–body problem. Metaphysicians are interested in the relation between free will and causal determinism—the view that everything in the universe, including human behavior, is determined by preceding events and laws of nature. It is controversial whether causal determinism is true, and, if so, whether this would imply that there is no free will. According to incompatibilism, free will cannot exist in a deterministic world since there is no true choice or control if everything is determined. Hard determinists infer from this that there is no free will, whereas libertarians conclude that determinism must be false. Compatibilists offer a third perspective, arguing that determinism and free will do not exclude each other, for instance, because a person can still act in tune with their motivation and choices even if they are determined by other forces. Free will plays a key role in ethics regarding the moral responsibility people have for what they do.
=== Others ===
Identity is a relation that every entity has to itself as a form of sameness. It refers to numerical identity when the same entity is involved, as in the statement "the morning star is the evening star" (both are the planet Venus). In a slightly different sense, it encompasses qualitative identity, also called exact similarity and indiscernibility, which occurs when two distinct entities are exactly alike, such as perfect identical twins. The principle of the indiscernibility of identicals is widely accepted and holds that numerically identical entities exactly resemble one another. The converse principle, known as the identity of indiscernibles or Leibniz's Law, is more controversial and states that two entities are numerically identical if they exactly resemble one another. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time, whereas diachronic identity is about the same entity at different times, as in statements like "the table I bought last year is the same as the table in my dining room now". Personal identity is a related topic in metaphysics that uses the term identity in a slightly different sense and concerns questions like what personhood is or what makes someone a person.
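The two principles about identity can be written in second-order notation (a standard formalization, not taken from this article):

\[ x = y \rightarrow \forall F\,(Fx \leftrightarrow Fy) \qquad\text{and}\qquad \forall F\,(Fx \leftrightarrow Fy) \rightarrow x = y. \]

The first formula expresses the widely accepted indiscernibility of identicals; the second, more controversial formula expresses the identity of indiscernibles.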
Various contemporary metaphysicians rely on the concepts of truth, truth-bearer, and truthmaker to conduct their inquiry. Truth is a property of being in accord with reality. Truth-bearers are entities that can be true or false, such as linguistic statements and mental representations. A truthmaker of a statement is the entity whose existence makes the statement true. For example, the fact that a tomato exists and that it is red acts as a truthmaker for the statement "a tomato is red". Based on this observation, it is possible to pursue metaphysical research by asking what the truthmakers of statements are, with different areas of metaphysics being dedicated to different types of statements. According to this view, modal metaphysics asks what makes statements about what is possible and necessary true while the metaphysics of time is interested in the truthmakers of temporal statements about the past, present, and future. A closely related topic concerns the nature of truth. Theories of truth aim to determine this nature and include correspondence, coherence, pragmatic, semantic, and deflationary theories.
== Methodology ==
Metaphysicians employ a variety of methods to develop metaphysical theories and formulate arguments for and against them. Traditionally, a priori methods have been the dominant approach. They rely on rational intuition and abstract reasoning from general principles rather than sensory experience. A posteriori approaches, by contrast, ground metaphysical theories in empirical observations and scientific theories. Some metaphysicians incorporate perspectives from fields such as physics, psychology, linguistics, and history into their inquiry. The two approaches are not mutually exclusive: it is possible to combine elements from both. The method a metaphysician chooses often depends on their understanding of the nature of metaphysics, for example, whether they see it as an inquiry into the mind-independent structure of reality, as metaphysical realists claim, or the principles underlying thought and experience, as some metaphysical anti-realists contend.
A priori approaches often rely on intuitions—non-inferential impressions about the correctness of specific claims or general principles. For example, arguments for the A-theory of time, which states that time flows from the past through the present and into the future, often rely on pre-theoretical intuitions associated with the sense of the passage of time. Some approaches use intuitions to establish a small set of self-evident fundamental principles, known as axioms, and employ deductive reasoning to build complex metaphysical systems by drawing conclusions from these axioms. Intuition-based approaches can be combined with thought experiments, which help evoke and clarify intuitions by linking them to imagined situations. They use counterfactual thinking to assess the possible consequences of these situations. For example, to explore the relation between matter and consciousness, some theorists compare humans to philosophical zombies—hypothetical creatures identical to humans but without conscious experience. A related method relies on commonly accepted beliefs instead of intuitions to formulate arguments and theories. The common-sense approach is often used to criticize metaphysical theories that deviate significantly from how the average person thinks about an issue. For example, common-sense philosophers have argued that mereological nihilism is false since it implies that commonly accepted things, like tables, do not exist.
Conceptual analysis, a method particularly prominent in analytic philosophy, aims to decompose metaphysical concepts into component parts to clarify their meaning and identify essential relations. In phenomenology, the method of eidetic variation is used to investigate essential structures underlying phenomena. This method involves imagining an object and varying its features to determine which ones are essential and cannot be changed. The transcendental method is a further approach and examines the metaphysical structure of reality by observing what entities there are and studying the conditions of possibility without which these entities could not exist.
Some approaches give less importance to a priori reasoning and view metaphysics as a practice continuous with the empirical sciences that generalizes their insights while making their underlying assumptions explicit. This approach is known as naturalized metaphysics and is closely associated with the work of Willard Van Orman Quine. He relies on the idea that true sentences from the sciences and other fields have ontological commitments, that is, they imply that certain entities exist. For example, if the sentence "some electrons are bonded to protons" is true then it can be used to justify that electrons and protons exist. Quine used this insight to argue that one can learn about metaphysics by closely analyzing scientific claims to understand what kind of metaphysical picture of the world they presuppose.
In addition to methods of conducting metaphysical inquiry, there are various methodological principles used to decide between competing theories by comparing their theoretical virtues. Ockham's Razor is a well-known principle that gives preference to simple theories, in particular, those that assume that few entities exist. Other principles consider explanatory power, theoretical usefulness, and proximity to established beliefs.
== Criticism ==
Despite its status as one of the main branches of philosophy, metaphysics has received numerous criticisms questioning its legitimacy as a field of inquiry. One criticism argues that metaphysical inquiry is impossible because humans lack the cognitive capacities needed to access the ultimate nature of reality. This line of thought leads to skepticism about the possibility of metaphysical knowledge. Empiricists often follow this idea, like Hume, who asserts that there is no good source of metaphysical knowledge since metaphysics lies outside the field of empirical knowledge and relies on dubious intuitions about the realm beyond sensory experience. Arguing that the mind actively structures experience, Kant criticizes traditional metaphysics for its attempt to gain insight into the mind-independent nature of reality. He asserts that knowledge is limited to the realm of possible experience, meaning that humans are not able to decide questions like whether the world has a beginning in time or is infinite. A related argument favoring the unreliability of metaphysical theorizing points to the deep and lasting disagreements about metaphysical issues, suggesting a lack of overall progress.
Another criticism holds that the problem lies not with human cognitive abilities but with metaphysical statements themselves, which some claim are neither true nor false but meaningless. According to logical positivists, for instance, the meaning of a statement is given by the procedure used to verify it, usually through the observations that would confirm it. Based on this controversial assumption, they argue that metaphysical statements are meaningless since they make no testable predictions about experience.
A slightly weaker position allows metaphysical statements to have meaning while holding that metaphysical disagreements are merely verbal disputes about different ways to describe the world. According to this view, the disagreement in the metaphysics of composition about whether there are tables or only particles arranged table-wise is a trivial debate about linguistic preferences without any substantive consequences for the nature of reality. The position that metaphysical disputes have no meaning or no significant point is called metaphysical or ontological deflationism. This view is opposed by so-called serious metaphysicians, who contend that metaphysical disputes are about substantial features of the underlying structure of reality. A closely related debate between ontological realists and anti-realists concerns the question of whether there are any objective facts that determine which metaphysical theories are true. A different criticism, formulated by pragmatists, sees the fault of metaphysics not in its cognitive ambitions or the meaninglessness of its statements, but in its practical irrelevance and lack of usefulness.
Martin Heidegger criticized traditional metaphysics, saying that it fails to distinguish between individual entities and being as their ontological ground. His attempt to reveal the underlying assumptions and limitations in the history of metaphysics to "overcome metaphysics" influenced Jacques Derrida's method of deconstruction. Derrida employed this approach to criticize metaphysical texts for relying on opposing terms, like presence and absence, which he thought were inherently unstable and contradictory.
There is no consensus about the validity of these criticisms and whether they affect metaphysics as a whole or only certain issues or approaches in it. For example, it could be the case that certain metaphysical disputes are merely verbal while others are substantive.
== Relation to other disciplines ==
Metaphysics is related to many fields of inquiry by investigating their basic concepts and relation to the fundamental structure of reality. For example, the natural sciences rely on concepts such as law of nature, causation, necessity, and spacetime to formulate their theories and predict or explain the outcomes of experiments. While scientists primarily focus on applying these concepts to specific situations, metaphysics examines their general nature and how they depend on each other. For instance, physicists formulate laws of nature, like laws of gravitation and thermodynamics, to describe how physical systems behave under various conditions. Metaphysicians, by contrast, examine what all laws of nature have in common, asking whether they merely describe contingent regularities or express necessary relations. New scientific discoveries have also influenced existing metaphysical theories and inspired new ones. Einstein's theory of relativity, for instance, prompted various metaphysicians to conceive space and time as a unified dimension rather than as independent dimensions. Empirically focused metaphysicians often rely on scientific theories to ground their theories about the nature of reality in empirical observations.
Similar issues arise in the social sciences where metaphysicians investigate their basic concepts and analyze their metaphysical implications. This includes questions like whether social facts emerge from non-social facts, whether social groups and institutions have mind-independent existence, and how they persist through time. Metaphysical assumptions and topics in psychology and psychiatry include the questions about the relation between body and mind, whether the nature of the human mind is historically fixed, and what the metaphysical status of diseases is.
Metaphysics is similar to both physical cosmology and theology in its exploration of the first causes and the universe as a whole. Key differences are that metaphysics relies on rational inquiry while physical cosmology gives more weight to empirical observations and theology incorporates divine revelation and other faith-based doctrines. Historically, cosmology and theology were considered subfields of metaphysics.
Computer scientists rely on metaphysics in the form of ontology to represent and classify objects. They develop conceptual frameworks, called ontologies, for limited domains, such as a database with categories like person, company, address, and name to represent information about clients and employees. Ontologies provide standards for encoding and storing information in a structured way, allowing computational processes to use the information for various purposes. Upper ontologies, such as Suggested Upper Merged Ontology and Basic Formal Ontology, define concepts at a more abstract level, making it possible to integrate information belonging to different domains.
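To make the idea of a domain ontology concrete, the following is a minimal sketch in Python of how categories such as person, company, and address can structure client records; the category names, fields, and query are invented for illustration and are not drawn from any particular upper ontology or ontology language.

```python
# A toy domain ontology for client records: categories (classes), a shared
# superclass, and a query that works uniformly across the classified kinds.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Address:
    street: str
    city: str

@dataclass
class Agent:                 # shared category: anything with a name and an address
    name: str
    address: Address

@dataclass
class Person(Agent):
    employer: Optional[str] = None

@dataclass
class Company(Agent):
    pass

def agents_in_city(records: List[Agent], city: str) -> List[str]:
    """Works for both Person and Company because both fall under Agent."""
    return [r.name for r in records if r.address.city == city]

records = [
    Person("Alice", Address("Main St 1", "Berlin"), employer="Acme"),
    Company("Acme", Address("Market Sq 5", "Berlin")),
]
print(agents_in_city(records, "Berlin"))  # ['Alice', 'Acme']
```

The point of the shared superclass is the same as that of an upper ontology on a smaller scale: it lets information from different kinds of entities be integrated and queried in a uniform way.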
Logic as the study of correct reasoning is often used by metaphysicians to engage in their inquiry and express insights through precise logical formulas. Another relation between the two fields concerns the metaphysical assumptions associated with logical systems. Many logical systems like first-order logic rely on existential quantifiers to express existential statements. For instance, in the logical formula ∃x Horse(x), the existential quantifier ∃ is applied to the predicate Horse to express that there are horses. Following Quine, various metaphysicians assume that existential quantifiers carry ontological commitments, meaning that existential statements imply that the entities over which one quantifies are part of reality.
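As a rough illustration of how such a formula is evaluated, the following sketch checks ∃x Horse(x) against a small finite model; the domain and the extension of the predicate are invented for the example.

```python
# A toy model-theoretic check of the formula ∃x Horse(x): the existential
# statement is true exactly when at least one object in the domain falls
# under the predicate, which is the sense in which quantification is said
# to carry ontological commitment.
domain = {"Bucephalus", "Socrates", "Mont Blanc"}
horse_extension = {"Bucephalus"}            # objects the predicate is true of

def Horse(x: str) -> bool:
    return x in horse_extension

exists_horse = any(Horse(x) for x in domain)
print(exists_horse)  # True, because the domain contains at least one horse
```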
== History ==
Metaphysics originated in the ancient period from speculations about the nature and origin of the cosmos. In ancient India, starting in the 7th century BCE, the Upanishads were written as religious and philosophical texts that examine how ultimate reality constitutes the ground of all being. They further explore the nature of the self and how it can reach liberation by understanding ultimate reality. This period also saw the emergence of Buddhism in the 6th century BCE, which denies the existence of an independent self and understands the world as a cyclic process. At about the same time in ancient China, the school of Daoism was formed and explored the natural order of the universe, known as Dao, and how it is characterized by the interplay of yin and yang as two correlated forces.
In ancient Greece, metaphysics emerged in the 6th century BCE with the pre-Socratic philosophers, who gave rational explanations of the cosmos as a whole by examining the first principles from which everything arises. Building on their work, Plato (427–347 BCE) formulated his theory of forms, which states that eternal forms or ideas possess the highest kind of reality while the material world is only an imperfect reflection of them. Aristotle (384–322 BCE) accepted Plato's idea that there are universal forms but held that they cannot exist on their own but depend on matter. He also proposed a system of categories and developed a comprehensive framework of the natural world through his theory of the four causes. Starting in the 4th century BCE, Hellenistic philosophy explored the rational order underlying the cosmos and the laws governing it. Neoplatonism emerged towards the end of the ancient period in the 3rd century CE and introduced the idea of "the One" as the transcendent and ineffable source of all creation.
Meanwhile, in Indian Buddhism, the Madhyamaka school developed the idea that all phenomena are inherently empty without a permanent essence. The consciousness-only doctrine of the Yogācāra school stated that experienced objects are mere transformations of consciousness and do not reflect external reality. The Hindu school of Samkhya philosophy introduced a metaphysical dualism with pure consciousness and matter as its fundamental categories. In China, the school of Xuanxue explored metaphysical problems such as the contrast between being and non-being.
Medieval Western philosophy was profoundly shaped by ancient Greek thought as philosophers integrated these ideas with Christian philosophical teachings. Boethius (477–524 CE) sought to reconcile Plato's and Aristotle's theories of universals, proposing that universals can exist both in matter and mind. His theory inspired the development of nominalism and conceptualism, as in the thought of Peter Abelard (1079–1142 CE). Thomas Aquinas (1224–1274 CE) understood metaphysics as the discipline investigating different meanings of being, such as the contrast between substance and accident, and principles applying to all beings, such as the principle of identity. William of Ockham (1285–1347 CE) developed a methodological principle, known as Ockham's razor, to choose between competing metaphysical theories. Arabic–Persian philosophy flourished from the early 9th century CE to the late 12th century CE, integrating ancient Greek philosophies to interpret and clarify the teachings of the Quran. Avicenna (980–1037 CE) developed a comprehensive philosophical system that examined the contrast between existence and essence and distinguished between contingent and necessary existence. Medieval India saw the emergence of the monist school of Advaita Vedanta in the 8th century CE, which holds that everything is one and that the idea of many entities existing independently is an illusion. In China, Neo-Confucianism arose in the 9th century CE and explored the concept of li as the rational principle that is the ground of being and reflects the order of the universe.
In the early modern period and following renewed interest in Platonism during the Renaissance, René Descartes (1596–1650) developed a substance dualism according to which body and mind exist as independent entities that causally interact. This idea was rejected by Baruch Spinoza (1632–1677), who formulated a monist philosophy suggesting that there is only one substance with both physical and mental attributes that develop side-by-side without interacting. Gottfried Wilhelm Leibniz (1646–1716) introduced the concept of possible worlds and articulated a metaphysical system known as monadology, which views the universe as a collection of simple substances synchronized without causal interaction. Christian Wolff (1679–1754), conceptualized the scope of metaphysics by distinguishing between general and special metaphysics. According to the idealism of George Berkeley (1685–1753), everything is mental, including material objects, which are ideas perceived by the mind. David Hume (1711–1776) made various contributions to metaphysics, including the regularity theory of causation and the idea that there are no necessary connections between distinct entities. Inspired by the empiricism of Francis Bacon (1561–1626) and John Locke (1632–1704), Hume criticized metaphysical theories that seek ultimate principles inaccessible to sensory experience. This critical outlook was embraced by Immanuel Kant (1724–1804), who tried to reconceptualize metaphysics as an inquiry into the basic principles and categories of thought and understanding rather than seeing it as an attempt to comprehend mind-independent reality.
Many developments in the later modern period were shaped by Kant's philosophy. German idealists adopted his idealistic outlook in their attempt to find a unifying principle as the foundation of all reality. Georg Wilhelm Friedrich Hegel's (1770–1831) idealistic contention is that reality is conceptual all the way down, and being itself is rational. He inspired the British idealism of Francis Herbert Bradley (1846–1924), who interpreted Hegel's concept of absolute spirit as the all-inclusive totality of being. Arthur Schopenhauer (1788–1860) was a strong critic of German idealism and articulated a different metaphysical vision, positing a blind and irrational will as the underlying principle of reality. Pragmatists like C. S. Peirce (1839–1914) and John Dewey (1859–1952) conceived metaphysics as an observational science of the most general features of reality and experience.
At the turn of the 20th century in analytic philosophy, philosophers such as Bertrand Russell (1872–1970) and G. E. Moore (1873–1958) led a "revolt against idealism", arguing for the existence of a mind-independent world aligned with common sense and empirical science. Logical atomists, like Russell and the early Ludwig Wittgenstein (1889–1951), conceived the world as a multitude of atomic facts, which later inspired metaphysicians such as D. M. Armstrong (1926–2014). Alfred North Whitehead (1861–1947) developed process metaphysics as an attempt to provide a holistic description of both the objective and the subjective realms.
Rudolf Carnap (1891–1970) and other logical positivists formulated a wide-ranging criticism of metaphysical statements, arguing that they are meaningless because there is no way to verify them. Other criticisms of traditional metaphysics identified misunderstandings of ordinary language as the source of many traditional metaphysical problems or challenged complex metaphysical deductions by appealing to common sense.
The decline of logical positivism led to a revival of metaphysical theorizing. Willard Van Orman Quine (1908–2000) tried to naturalize metaphysics by connecting it to the empirical sciences. His student David Lewis (1941–2001) employed the concept of possible worlds to formulate his modal realism. Saul Kripke (1940–2022) helped revive discussions of identity and essentialism, distinguishing necessity as a metaphysical notion from the epistemic notion of a priori.
In continental philosophy, Edmund Husserl (1859–1938) engaged in ontology through a phenomenological description of experience, while his student Martin Heidegger (1889–1976) developed fundamental ontology to clarify the meaning of being. Heidegger's philosophy inspired Jacques Derrida's (1930–2004) criticism of metaphysics. Gilles Deleuze's (1925–1995) approach to metaphysics challenged traditionally influential concepts like substance, essence, and identity by reconceptualizing the field through alternative notions such as multiplicity, event, and difference.
== See also ==
Computational metaphysics
Doctor of Metaphysics
Enrico Berti's classification of metaphysics
Feminist metaphysics
Fundamental question of metaphysics
List of metaphysicians
Metaphysical grounding
== References ==
=== Notes ===
=== Citations ===
=== Sources ===
== External links ==
Metaphysics at PhilPapers
Metaphysics at the Indiana Philosophy Ontology Project
"Metaphysics". Internet Encyclopedia of Philosophy.
Metaphysics at Encyclopædia Britannica
Metaphysics public domain audiobook at LibriVox | Wikipedia/Metametaphysics |
In philosophy and logic, a deflationary theory of truth (also semantic deflationism or simply deflationism) is one of a family of theories that all have in common the claim that assertions which predicate truth of a statement do not attribute a property called "truth" to that statement.
== Redundancy theory ==
Gottlob Frege was probably the first philosopher or logician to note that predicating truth or existence does not express anything above and beyond the statement to which it is attributed. He remarked:
It is worthy of notice that the sentence "I smell the scent of violets" has the same content as the sentence "it is true that I smell the scent of violets". So it seems, then, that nothing is added to the thought by my ascribing to it the property of truth. (Frege, G., 1918. "Thought", in his Logical Investigations, Oxford: Blackwell, 1977)
Nevertheless, the first serious attempt at the formulation of a theory of truth which attempted to systematically define the truth predicate out of existence is attributable to F. P. Ramsey. Ramsey argued, against the prevailing currents of the times, that not only was it not necessary to construct a theory of truth on the foundation of a prior theory of meaning (or mental content) but that once a theory of content had been successfully formulated, it would become obvious that there was no further need for a theory of truth, since the truth predicate would be demonstrated to be redundant. Hence, his particular version of deflationism is commonly referred to as the redundancy theory. Ramsey noted that in ordinary contexts in which we attribute truth to a proposition directly, as in "It is true that Caesar was murdered", the predicate "is true" does not seem to be doing any work. "It is true that Caesar was murdered" just means "Caesar was murdered" and "It is false that Caesar was murdered" just means that "Caesar was not murdered".
Ramsey recognized that the simple elimination of the truth-predicate from all statements in which it is used in ordinary language was not the way to go about attempting to construct a comprehensive theory of truth. For example, take the sentence Everything that John says is true. This can be easily translated into the formal sentence with variables ranging over propositions For all P, if John says P, then P is true. But attempting to directly eliminate "is true" from this sentence, on the standard first-order interpretation of quantification in terms of objects, would result in the ungrammatical formulation For all P, if John says P, then P. It is ungrammatical because P must, in that case, be replaced by the name of an object and not a proposition. Ramsey's approach was to suggest that such sentences as "He is always right" could be expressed in terms of relations: "For all a, R and b, if he asserts aRb, then aRb".
Ramsey also noticed that, although his paraphrasings and definitions could be easily rendered in logical symbolism, the more fundamental problem was that, in ordinary English, the elimination of the truth-predicate in a phrase such as Everything John says is true would result in something like "If John says something, then that". Ramsey attributed this to a defect in natural language, suggesting that such pro-sentences as "that" and "what" were being treated as if they were pronouns. This "gives rise to artificial problems as to the nature of truth, which disappear at once when they are expressed in logical symbolism..." According to Ramsey, it is only because natural languages lack what he called pro-sentences (expressions that stand in relation to sentences as pronouns stand to nouns) that the truth predicate cannot be defined away in all contexts.
A. J. Ayer took Ramsey's idea one step further by declaring that the redundancy of the truth predicate implies that there is no such property as truth.
There are sentences...in which the word "truth" seems to stand for something real; and this leads the speculative philosopher to enquire what this "something" is. Naturally he fails to obtain a satisfactory answer, since his question is illegitimate. For our analysis has shown that the word "truth" does not stand for anything, in the way which such a question requires.
This extreme version of deflationism has often been called the disappearance theory or the no truth theory of truth and it is easy to understand why, since Ayer seems here to be claiming both that the predicate "is true" is redundant (and therefore unnecessary) and also that there is no such property as truth to speak of.
== Performative theory ==
Peter Strawson formulated a performative theory of truth in the 1950s. Like Ramsey, Strawson believed that there was no separate problem of truth apart from determining the semantic contents (or facts of the world) which give the words and sentences of language the meanings that they have. Once the questions of meaning and reference are resolved, there is no further question of truth. Strawson's view differs from Ramsey's, however, in that Strawson maintains that there is an important role for the expression "is true": specifically, it has a performative role similar to "I promise to clean the house". In asserting that p is true, we not only assert that p but also perform the "speech act" of confirming the truth of a statement in a context. We signal our agreement or approbation of a previously uttered assertion or confirm some commonly held belief or imply that what we are asserting is likely to be accepted by others in the same context.
== Tarski and deflationary theories ==
Some years before Strawson developed his account of the sentences which include the truth-predicate as performative utterances, Alfred Tarski had developed his so-called semantic theory of truth. Tarski's basic goal was to provide a rigorously logical definition of the expression "true sentence" within a specific formal language and to clarify the fundamental conditions of material adequacy that would have to be met by any definition of the truth-predicate. If all such conditions were met, then it would be possible to avoid semantic paradoxes such as the liar paradox (i.e., "This sentence is false.") Tarski's material adequacy condition, or Convention T, is: a definition of truth for an object language implies all instances of the sentential form
(T) S is true if and only if P
where S is replaced by a name of a sentence (in the object language) and P is replaced by a translation of that sentence in the metalanguage. So, for example, "'La neve è bianca' is true if and only if snow is white" is a sentence which conforms to Convention T; the object language is Italian and the metalanguage is English. The predicate "true" does not appear in the object language, so no sentence of the object language can directly or indirectly assert truth or falsity of itself. Tarski thus formulated a two-tiered scheme that avoids semantic paradoxes such as the liar paradox.
Tarski formulated his definition of truth indirectly through a recursive definition of the satisfaction of sentential functions and then by defining truth in terms of satisfaction. An example of a sentential function is "x defeated y in the 2004 US presidential elections"; this function is said to be satisfied when we replace the variables x and y with the names of objects such that they stand in the relation denoted by "defeated in the 2004 US presidential elections" (in the case just mentioned, replacing x with "George W. Bush" and y with "John Kerry" would satisfy the function, resulting in a true sentence). In general, a1, ..., an satisfy an n-ary predicate φ(x1, ..., xn) if and only if substitution of the names "a1", ..., "an" for the variables of φ in the relevant order yields the sentence "φ(a1, ..., an)", and φ(a1, ..., an). Given a method for establishing the satisfaction (or not) of every atomic sentence of the form A(..., xk, ...), the usual rules for truth-functional connectives and quantifiers yield a definition for the satisfaction condition of all sentences of the object language. For instance, for any two sentences A, B, the sentence A & B is satisfied if and only if A and B are satisfied (where '&' stands for conjunction), for any sentence A, ~A is satisfied if and only if A fails to be satisfied, and for any open sentence A where x is free in A, (x)A is satisfied if and only if, for every substitution of an item of the domain for x, the resulting sentence A* is satisfied. Whether any complex sentence is satisfied is thus determined by its structure. An interpretation is an assignment of denotation to all of the non-logical terms of the object language. A sentence A is true (under an interpretation I) if and only if it is satisfied in I.
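To make the recursive clauses above concrete, here is a minimal sketch of a satisfaction-style evaluator for a toy first-order language over a finite model. The formula encoding (nested tuples) and the example model are invented for illustration and are not Tarski's notation; the point is only that whether a complex sentence is satisfied is determined recursively by its parts.

```python
# Toy recursive satisfaction definition: atomic predicates, negation,
# conjunction, and universal quantification over a finite domain.
def satisfies(model, formula, assignment):
    """Return True if the variable assignment satisfies the formula in the model."""
    op = formula[0]
    if op == "pred":                       # ("pred", name, var1, var2, ...)
        _, name, *variables = formula
        args = tuple(assignment[v] for v in variables)
        return args in model["predicates"][name]
    if op == "not":                        # ("not", subformula)
        return not satisfies(model, formula[1], assignment)
    if op == "and":                        # ("and", left, right)
        return (satisfies(model, formula[1], assignment)
                and satisfies(model, formula[2], assignment))
    if op == "all":                        # ("all", variable, subformula)
        var, sub = formula[1], formula[2]
        return all(satisfies(model, sub, {**assignment, var: item})
                   for item in model["domain"])
    raise ValueError(f"unknown connective: {op}")

def true_in(model, sentence):
    """A sentence (no free variables) is true iff the empty assignment satisfies it."""
    return satisfies(model, sentence, {})

model = {
    "domain": {"snow", "grass"},
    "predicates": {"White": {("snow",)}, "Green": {("grass",)}},
}
# (x) not(White(x) & Green(x)): nothing in the domain is both white and green.
sentence = ("all", "x", ("not", ("and", ("pred", "White", "x"),
                                         ("pred", "Green", "x"))))
print(true_in(model, sentence))  # True
```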
Tarski thought of his theory as a species of correspondence theory of truth, not a deflationary theory.
== Disquotationalism ==
On the basis of Tarski's semantic conception, W. V. O. Quine developed what eventually came to be called the disquotational theory of truth or disquotationalism. Quine interpreted Tarski's theory as essentially deflationary. He accepted Tarski's treatment of sentences as the only truth-bearers. Consequently, Quine suggested that the truth-predicate could only be applied to sentences within individual languages. The basic principle of disquotationalism is that an attribution of truth to a sentence undoes the effects of the quotation marks that have been used to form sentences. Instead of (T) above then, Quine's reformulation would be something like the following "Disquotation Schema":
(DS) Sentence "S" is true if and only if S.
Disquotationalists are able to explain the existence and usefulness of the truth predicate in such contexts of generalization as "John believes everything that Mary says" by asserting, with Quine, that we cannot dispense with the truth predicate in these contexts because the convenient expression of such generalization is precisely the role of the truth predicate in language. In the case of "John believes everything that Mary says", if we try to capture the content of John's beliefs, we would need to form an infinite conjunction such as the following:
If Mary says that lemons are yellow, then lemons are yellow, and if Mary says that lemons are green, then lemons are green, and...
The disquotation schema (DS), allows us to reformulate this as:
If Mary says that lemons are yellow, then the sentence "lemons are yellow" is true, and if Mary says that lemons are green, then the sentence "lemons are green" is true, and...
Since, for the disquotationalist, a sentence S is equivalent to the claim that "S" is true, the two infinite conjunctions above are also equivalent. Consequently, we can form the generalization:
For all sentences "S", if Mary said S, then "S" is true.
Since we could not express this statement without a truth-predicate along the lines of those defined by deflationary theories, it is the role of the truth predicate in forming such generalizations that characterizes all that needs to be characterized about the concept of truth.
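For illustration, the finite generalization and the way a single conjunct is recovered from it can be rendered in logical notation as follows; the predicate names Says and True are placeholders introduced here for the example, not part of any standard formalism.

```latex
% The finite generalization expressed with a truth predicate:
\forall s\,\bigl(\mathrm{Says}(\mathrm{Mary}, s) \rightarrow \mathrm{True}(s)\bigr)
% One conjunct of the infinite conjunction is then recovered by applying
% the disquotation schema (DS) to an instance:
\mathrm{True}(\text{``lemons are yellow''}) \leftrightarrow \text{lemons are yellow}
```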
== Prosententialism ==
Grover, Camp and Belnap developed a deflationary theory of truth called prosententialism, which has since been defended by Robert Brandom.
Prosententialism asserts that there are prosentences which stand in for and derive their meanings from the sentences which they substitute. In the statement:
Bill is tired and he is hungry.
the pronoun "he" takes its reference from the noun "Bill." By analogy, in the statement:
He explained that he was in financial straits, said that this is how things were, and that therefore he needed an advance.
the clause "this is how things were" receives its reference from the previously occurring sentential clause "he was in financial straits", according to a prosententialist account.
How does this relate to truth? Prosententialists view the statements that contain "is true" as sentences which do not contain a truth-predicate but rather contain some form of prosentence; the truth-predicate itself is part of an anaphoric or prosentential construction. Prosententialists point out the many parallels which exist between pronouns and prosentences. Pronouns are often used out of "laziness", as in:
Bill is tired and he is hungry
or they can be used in quantificational contexts, such as:
Someone is in the room and he is armed with a rifle.
In a similar manner, "it is true" can be used as a prosentence of laziness, as in:
Fred believes that it is raining and it is true.
and as a quantificational prosentence, such as:
Whatever Alice believes is true.
Prosententialists therefore reject the idea that truth is a property of some sort.
== Minimalism ==
Paul Horwich's minimal theory of truth, also known as minimalism, takes the primary truth-bearing entities to be propositions, rather than sentences. According to the minimalist view then, truth is indeed a property of propositions (or sentences, as the case may be) but it is so minimal and anomalous a property that it cannot be said to provide us with any useful information about or insight into the nature of truth. It is fundamentally nothing more than a sort of metalinguistic property.
Another way of formulating the minimalist thesis is to assert that the conjunction of all of the instances of the following schema:
The proposition that P is true if and only if P.
provides an implicit definition of the property of truth. Each such instance is an axiom of the theory and there are an infinite number of such instances (one for every actual or possible proposition in the universe). Our concept of truth consists of nothing more than a disposition to assent to all of the instances of the above schema when we encounter them.
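As a rough illustration of the idea that the theory just is the collection of instances of this schema, the following sketch mechanically generates a few of them; the sample propositions are invented, and an actual statement of the theory would need one axiom for every proposition whatsoever, so no finite list could exhaust it.

```python
# Generating instances of the minimalist equivalence schema.
propositions = [
    "snow is white",
    "lemons are yellow",
    "grass is green",
]

def schema_instance(p: str) -> str:
    return f"The proposition that {p} is true if and only if {p}."

for p in propositions:
    print(schema_instance(p))
# The proposition that snow is white is true if and only if snow is white.
# The proposition that lemons are yellow is true if and only if lemons are yellow.
# The proposition that grass is green is true if and only if grass is green.
```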
== Objections to deflationism ==
One of the main objections to deflationary theories of all flavors was formulated by Jackson, Oppy and Smith in 1994 (following Kirkham 1992). According to the objection, if deflationism is interpreted as a sentential theory (that is, one where truth is predicated of sentences on the left hand side of the biconditionals such as (T) above), then deflationism is false; on the other hand, if it is interpreted as a propositional theory, then it is trivial. Examining another simple instance of the standard equivalence schema:
Grass is green is true if and only if grass is green.
the objection is just that, if the words "Grass is green" on the left-hand side are taken as a sentence, then the instance is false, because something more is required for the whole statement to be true than merely the fact that "grass is green" is true. It is also necessary that the sentence "grass is green" means that grass is green, and this further linguistic fact is not dealt with in the equivalence schema.
However, if we now assume that grass is green on the left-hand side refers to a proposition, then the theory seems trivial since grass is green is defined as true if and only if grass is green. Note that the triviality involved here is not caused by the concept of truth but by that of proposition. In any case, simply accepting the triviality of the propositional version implies that, at least within the Deflationary Theory of Truth, there can be no explanation of the connection between sentences and the things that they express; i.e., propositions.
=== Normativity of assertions ===
Michael Dummett, among others, has argued that deflationism cannot account for the fact that truth should be a normative goal of assertion. The idea is that truth plays a central role in the activity of stating facts. The deflationist response is that the assertion that truth is a norm of assertion can be stated only in the form of the following infinite conjunction:
One should assert the proposition that grass is green only if grass is green and one should assert the proposition that lemons are yellow only if lemons are yellow and one should assert the proposition that a square circle is impossible only if a square circle is impossible and...
This, in turn, can be reformulated as:
For all propositions P, speakers should assert the proposition that P only if the proposition that P is true.
It may be the case that we use the truth-predicate to express this norm, not because it has anything to do with the nature of truth in some inflationary sense, but because it is a convenient way of expressing this otherwise inexpressible generalization.
== See also ==
Coherentism
Confirmation holism
Truth
Truth theory
Truthmaker theory
=== Related topics ===
== Notes ==
== References ==
Ayer, A. J. (1952). Language, Truth and Logic. New York: Dover Publications.
Beall, J. C. and Armour-Garb, B. (eds.) (2006). Deflationism and Paradox. Oxford: Clarendon.
Brandom, Robert Expressive versus Explanatory Deflationism about Truth in Deflationary Truth Armour-Garb B.P. and Beall J.C. (ed.), Chicago and La Salle, Illinois: Open Court Publishing, 2005, pp. 237–257.
Butler, M.K. (2017). Deflationism and Semantic Theories of Truth. Manchester: Pendlebury Press.
Candlish, Stewart, and Damnjanovic, Nicolas J., "A Brief History of Truth", in Dale Jacquette (ed.), Handbook of the Philosophy of Science, Volume 11: Philosophy of Logic.
Grover, Dorothy, Camp, Joseph & Belnap, Nuel (1975), A Prosentential Theory of Truth, Philosophical Studies, 27, 73-125.
Horwich, Paul (1998), Truth, Oxford University Press, London, UK.
Frege, G. (1918) Ricerche Logiche. M. di Francesco (ed.). tr: R. Casati. Milan: Guerini. 1998.
Jackson, Frank, Graham Oppy & Michael Smith (1994), "Minimalism and truth aptness", Mind 103, 287-302.
Kirkham, Richard (1992), Theories of Truth, MIT Press.
Quine, W. V. O. (1970), Philosophy of Logic, Prentice Hall, Englewood Cliffs, NJ.
Ramsey, F. P. (1927), "Facts and Propositions", Aristotelian Society Supplementary Volume 7, 153–170. Reprinted, pp. 34–51 in F. P. Ramsey, Philosophical Papers, David Hugh Mellor (ed.), Cambridge University Press, Cambridge, UK, 1990.
Ramsey, F. P. (1990), Philosophical Papers, David Hugh Mellor (ed.), Cambridge University Press, Cambridge, UK.
Strawson, P. F. (1949) "Truth", Analysis, 9: 83-97.
Tarski, A. (1935), "Der Wahrheitsbegriff in den formalisierten Sprachen", Studia Philosophica 1, pp. 261–405. Translated as "The Concept of Truth in Formalized Languages", in Tarski (1983), pp. 152–278.
Tarski, Alfred (1944), "The Semantic Conception of Truth and the Foundations of Semantics", Philosophy and Phenomenological Research 4 (3), 341–376.
Tarski, Alfred (1983), Logic, Semantics, Metamathematics: Papers from 1923 to 1938, J.H. Woodger (trans.), Oxford University Press, Oxford, UK, 1956. 2nd edition, John Corcoran (ed.), Hackett Publishing, Indianapolis, IN, 1983.
== External links ==
Stoljar, Daniel and Damnjanovic, Nic (2007), "The Deflationary Theory of Truth", Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.).
"Prosentential Theory of Truth". Internet Encyclopedia of Philosophy. | Wikipedia/Deflationary_theory_of_truth |
Coherence theories of truth characterize truth as a property of whole systems of propositions that can be ascribed to individual propositions only derivatively, according to their coherence with the whole. While modern coherence theorists hold that there are many possible systems on which the determination of truth may be based, others, particularly those with strong religious beliefs, hold that truth applies only to a single absolute system. In general, truth requires a proper fit of elements within the whole system. Very often, though, coherence is taken to imply something more than simple formal coherence. For example, the coherence of the underlying set of concepts is considered to be a critical factor in judging validity for the whole system. In other words, the set of base concepts in a universe of discourse must first be seen to form an intelligible paradigm before many theorists will consider that the coherence theory of truth is applicable.
== History ==
In modern philosophy, the coherence theory of truth was defended by Baruch Spinoza, Immanuel Kant, Johann Gottlieb Fichte, Karl Wilhelm Friedrich Schlegel, Georg Wilhelm Friedrich Hegel and Harold Henry Joachim (who is credited with the definitive formulation of the theory). However, Spinoza and Kant have also been interpreted as defenders of the correspondence theory of truth. In contemporary philosophy, several epistemologists have significantly contributed to and defended the theory, primarily Brand Blanshard (who gave the earliest characterization of the theory in contemporary times) and Nicholas Rescher.
== Varieties ==
According to one view, the coherence theory of truth regards truth as coherence within some specified set of sentences, propositions or beliefs. It is the "theory of knowledge which maintains that truth is a property primarily applicable to any extensive body of consistent propositions, and derivatively applicable to any one proposition in such a system by virtue of its part in the system". Ideas like this are a part of the philosophical perspective known as confirmation holism. Coherence theories of truth claim that coherence and consistency are important features of a theoretical system, and that these properties are sufficient for its truth. Stated in reverse: "truth" exists only within a system and does not exist outside of it.
According to another version by H. H. Joachim (the philosopher credited with the definitive formulation of the theory, in his book The Nature of Truth, published in 1906), truth is a systematic coherence that involves more than logical consistency. In this view, a proposition is true to the extent that it is a necessary constituent of a systematically coherent whole. Others of this school of thought, for example, Brand Blanshard, hold that this whole must be so interdependent that every element in it necessitates and even entails every other element. Exponents of this view infer that the most complete truth is a property solely of a unique coherent system, called the absolute, and that humanly knowable propositions and systems have a degree of truth that is proportionate to how fully they approximate this ideal.
== Criticism ==
Perhaps the best-known objection to a coherence theory of truth is Bertrand Russell's. He maintained that since both a belief and its negation will, individually, cohere with at least one set of beliefs, this means that contradictory beliefs can be shown to be true according to coherence theory, and therefore that the theory cannot work. However, what most coherence theorists are concerned with is not all possible beliefs, but the set of beliefs that people actually hold. The main problem for a coherence theory of truth, then, is how to specify just this particular set, given that the truth about which beliefs are actually held can itself only be determined by means of coherence.
== See also ==
Coherence theory of justification
Confirmation holism
Bayesian epistemology
== References ==
== Further reading ==
Kirkham, Richard L. (1992), Theories of Truth, MIT Press, Cambridge, MA.
Runes, Dagobert D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
== External links ==
The Coherence Theory of Truth (Stanford Encyclopedia of Philosophy) | Wikipedia/Coherence_theory_of_truth |
Philosophical methodology encompasses the methods used to philosophize and the study of these methods. Methods of philosophy are procedures for conducting research, creating new theories, and selecting between competing theories. In addition to the description of methods, philosophical methodology also compares and evaluates them.
Philosophers have employed a great variety of methods. Methodological skepticism tries to find principles that cannot be doubted. The geometrical method deduces theorems from self-evident axioms. The phenomenological method describes first-person experience. Verificationists study the conditions of empirical verification of sentences to determine their meaning. Conceptual analysis decomposes concepts into fundamental constituents. Common-sense philosophers use widely held beliefs as their starting point of inquiry, whereas ordinary language philosophers extract philosophical insights from ordinary language. Intuition-based methods, like thought experiments, rely on non-inferential impressions. The method of reflective equilibrium seeks coherence among beliefs, while the pragmatist method assesses theories by their practical consequences. The transcendental method studies the conditions without which an entity could not exist. Experimental philosophers use empirical methods.
The choice of method can significantly impact how theories are constructed and the arguments used to support them. As a result, methodological disagreements can lead to philosophical disagreements.
== Definition ==
The term "philosophical methodology" refers either to the methods used to philosophize or to the branch of metaphilosophy studying these methods. A method is a way of doing things, such as a set of actions or decisions, in order to achieve a certain goal, when used under the right conditions. In the context of inquiry, a method is a way of conducting one's research and theorizing, like inductive or axiomatic methods in logic or experimental methods in the sciences. Philosophical methodology studies the methods of philosophy. It is not primarily concerned with whether a philosophical position, such as metaphysical dualism or utilitarianism, is true or false. Instead, it asks how one can determine which position should be adopted.
In the widest sense, any principle for choosing between competing theories may be considered part of the methodology of philosophy. In this sense, philosophical methodology is "the general study of criteria for theory selection". For example, Occam’s Razor is a methodological principle of theory selection favoring simple over complex theories. A closely related aspect of philosophical methodology concerns the question of which conventions one necessarily needs to adopt to succeed at theory-making. But in a narrower sense, only guidelines that help philosophers learn about the facts studied by philosophy qualify as philosophical methods. This is the more common sense, which applies to most of the methods listed in this article. In this sense, philosophical methodology is closely related to epistemology in that it consists in epistemological methods that enable philosophers to arrive at knowledge. Because of this, the problem of the methods of philosophy is central to how philosophical claims are to be justified.
An important difference in philosophical methodology concerns the distinction between descriptive and normative questions. Descriptive questions ask what methods philosophers actually use or used in the past, while normative questions ask what methods they should use. The normative aspect of philosophical methodology expresses the idea that there is a difference between good and bad philosophy. In this sense, philosophical methods either articulate the standards of evaluation themselves or the practices that ensure that these standards are met. Philosophical methods can be understood as tools that help the theorist do good philosophy and arrive at knowledge. The normative question of philosophical methodology is quite controversial since different schools of philosophy often have very different views on what constitutes good philosophy and how to achieve it.
== Methods ==
A great variety of philosophical methods has been proposed. Some of these methods were developed as a reaction to other methods, for example, to counter skepticism by providing a secure path to knowledge. In other cases, one method may be understood as a development or a specific application of another method. Some philosophers or philosophical movements give primacy to one specific method, while others use a variety of methods depending on the problem they are trying to solve. It has been argued that many of the philosophical methods are also commonly used implicitly in more crude forms by regular people and are only given a more careful, critical, and systematic exposition in philosophical methodology.
=== Methodological skepticism ===
Methodological skepticism, also referred to as Cartesian doubt, uses systematic doubt as a method of philosophy. It is motivated by the search for an absolutely certain foundation of knowledge. The method for finding these foundations is doubt: only that which is indubitable can serve this role. While this approach has been influential, it has also received various criticisms. One problem is that it has proven very difficult to find such absolutely certain claims if the doubt is applied in its most radical form. Another is that while absolute certainty may be desirable, it is by no means necessary for knowledge. In this sense, it excludes too much and seems to be unwarranted and arbitrary, since it is not clear why very certain theorems justified by strong arguments should be abandoned just because they are not absolutely certain. This can be seen in relation to the insights discovered by the empirical sciences, which have proven very useful even though they are not indubitable.
=== Geometrical method ===
The geometrical method came to particular prominence through rationalists like Baruch Spinoza. It starts from a small set of self-evident axioms together with relevant definitions and tries to deduce a great variety of theorems from this basis, thereby mirroring the methods found in geometry. Historically, it can be understood as a response to methodological skepticism: it consists in trying to find a foundation of certain knowledge and then expanding this foundation through deductive inferences. The theorems arrived at this way may be challenged in two ways. On the one hand, they may be derived from axioms that are not as self-evident as their defenders proclaim and thereby fail to inherit the status of absolute certainty. For example, many philosophers have rejected the claim of self-evidence concerning one of René Descartes's first principles stating that "he can know that whatever he perceives clearly and distinctly is true only if he first knows that God exists and is not a deceiver". Another example is the causal axiom of Spinoza's system that "the knowledge of an effect depends on and involves knowledge of its cause", which has been criticized in various ways. In this sense, philosophical systems built using the geometrical method are open to criticisms that reject their basic axioms. A different form of objection holds that the inference from the axioms to the theorems may be faulty, for example, because it does not follow a rule of inference or because it includes implicitly assumed premises that are not themselves self-evident.
=== Phenomenological method ===
Phenomenology is the science of appearances: broadly speaking, the science of phenomena insofar as they are perceived. The phenomenological method aims to study the appearances themselves and the relations found between them. This is achieved through the so-called phenomenological reduction, also known as epoché or bracketing: the researcher suspends their judgments about the natural external world in order to focus exclusively on the experience of how things appear to be, independent of whether these appearances are true or false. One idea behind this approach is that our presuppositions of what things are like can get in the way of studying how they appear to be and thereby mislead the researcher into thinking they know the answer instead of looking for themselves. The phenomenological method can also be seen as a reaction to methodological skepticism since its defenders traditionally claimed that it could lead to absolute certainty and thereby help philosophy achieve the status of a rigorous science. But phenomenology has been heavily criticized because of this overly optimistic outlook concerning the certainty of its insights. A different objection to the method of phenomenological reduction holds that it involves an artificial stance that places too much emphasis on the theoretical attitude at the expense of feeling and practical concerns.
Another phenomenological method is called "eidetic variation". It is used to study the essences of things. This is done by imagining an object of the kind under investigation. The features of this object are then varied in order to see whether the resulting object still belongs to the investigated kind. If the object can survive the change of a certain feature then this feature is inessential to this kind. Otherwise, it belongs to the kind's essence. For example, when imagining a triangle, one can vary its features, like the length of its sides or its color. These features are inessential since the changed object is still a triangle, but it ceases to be a triangle if a fourth side is added.
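As a schematic rendering of the procedure just described, the following sketch varies one feature of an imagined example at a time and checks whether the result still falls under the kind. The feature set and the membership test are toy stand-ins for what, in phenomenology, is carried out in imagination rather than in code.

```python
# Eidetic variation as a procedure: vary a feature, test kind membership,
# and classify the feature as essential or inessential to the kind.
def is_triangle(example: dict) -> bool:
    return example["sides"] == 3           # the kind under investigation

base_case = {"sides": 3, "color": "red", "side_length": 1.0}

variations = {
    "color": "blue",         # vary the color
    "side_length": 2.0,      # vary the size
    "sides": 4,              # vary the number of sides
}

for feature, new_value in variations.items():
    varied = {**base_case, feature: new_value}
    status = "inessential" if is_triangle(varied) else "essential"
    print(f"{feature}: {status} to being a triangle")
# color: inessential, side_length: inessential, sides: essential
```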
=== Verificationism ===
The method of verificationism consists in understanding sentences by analyzing their characteristic conditions of verification, i.e. by determining which empirical observations would prove them to be true. A central motivation behind this method has been to distinguish meaningful from meaningless sentences. This is sometimes expressed through the claim that "[the] meaning of a statement is the method of its verification". Meaningful sentences, like the ones found in the natural sciences, have clear conditions of empirical verification. But since most metaphysical sentences cannot be verified by empirical observations, they are deemed to be non-sensical by verificationists. Verificationism has been criticized on various grounds. On the one hand, it has proved very difficult to give a precise formulation that includes all scientific claims, including the ones about unobservables. This is connected to the problem of underdetermination in the philosophy of science: the problem that the observational evidence is often insufficient to determine which theory is true. This would lead to the implausible conclusion that even for the empirical sciences, many of their claims would be meaningless. But on a deeper level, the basic claim underlying verificationism seems itself to be meaningless by its own standards: it is not clear what empirical observations could verify the claim that the meaning of a sentence is the method of its verification. In this sense, verificationism would be contradictory by directly refuting itself. These and other problems have led some theorists, especially from the sciences, to adopt falsificationism instead. It is a less radical approach that holds that serious theories or hypotheses should at least be falsifiable, i.e. there should be some empirical observations that could prove them wrong.
=== Conceptual analysis ===
The goal of conceptual analysis is to decompose or analyze a given concept into its fundamental constituents. It consists in considering a philosophically interesting concept, like knowledge, and determining the necessary and sufficient conditions for whether the application of this concept is true. The resulting claim about the relation between the concept and its constituents is normally seen as knowable a priori since it is true only in virtue of the involved concepts and thereby constitutes an analytic truth. Usually, philosophers use their own intuitions to determine whether a concept is applicable to a specific situation to test their analyses. But other approaches have also been utilized by using not the intuitions of philosophers but of regular people, an approach often defended by experimental philosophers.
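As a rough illustration of testing a candidate analysis against intuitions about cases, the following sketch checks the classical proposal that knowledge is justified true belief against a few labeled scenarios. The candidate analysis, the cases, and the intuitive verdicts attached to them are illustrative placeholders, not settled results.

```python
# Testing a candidate conceptual analysis against intuited cases.
def candidate_analysis(case: dict) -> bool:
    """Knowledge =df justified true belief (the analysis under test)."""
    return case["believes"] and case["true"] and case["justified"]

cases = [
    # (description, features, intuitive verdict about whether it is knowledge)
    ("perceiving rain while watching it fall",
     {"believes": True, "true": True, "justified": True}, True),
    ("lucky guess that happens to be right",
     {"believes": True, "true": True, "justified": False}, False),
    ("justified belief that is true only by coincidence",
     {"believes": True, "true": True, "justified": True}, False),
]

for description, features, intuition in cases:
    verdict = candidate_analysis(features)
    match = "fits" if verdict == intuition else "clashes with"
    print(f"The analysis {match} the intuition about: {description}")
# The third case illustrates how a single intuitive counterexample puts
# pressure on the proposed necessary and sufficient conditions.
```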
G. E. Moore proposed that the correctness of a conceptual analysis can be tested using the open question method. According to this view, asking whether the decomposition fits the concept should result in a closed or pointless question. If it results in an open or intelligible question, then the analysis does not exactly correspond to what we have in mind when we use the term. This can be used, for example, to reject the utilitarian claim that "goodness" is "whatever maximizes happiness". The underlying argument is that the question "Is what is good what maximizes happiness?" is an open question, unlike the question "Is what is good what is good?", which is a closed question. One problem with this approach is that it results in a very strict conception of what constitutes a correct conceptual analysis, leading to the conclusion that many concepts, like "goodness", are simple or indefinable.
Willard Van Orman Quine criticized conceptual analysis as part of his criticism of the analytic-synthetic distinction. This objection is based on the idea that all claims, including how concepts are to be decomposed, are ultimately based on empirical evidence. Another problem with conceptual analysis is that it is often very difficult to find an analysis of a concept that really covers all its cases. For this reason, Rudolf Carnap has suggested a modified version that aims to cover only the most paradigmatic cases while excluding problematic or controversial cases. While this approach has become more popular in recent years, it has also been criticized based on the argument that it tends to change the subject rather than resolve the original problem. In this sense, it is closely related to the method of conceptual engineering, which consists in redefining concepts in fruitful ways or developing new interesting concepts. This method has been applied, for example, to the concepts of gender and race.
=== Common sense ===
The method of common sense is based on the fact that we already have a great variety of beliefs that seem very certain to us, even if we do not believe them based on explicit arguments. Common sense philosophers use these beliefs as their starting point of philosophizing. This often takes the form of criticism directed against theories whose premises or conclusions are very far removed from how the average person thinks about the issue in question. G. E. Moore, for example, rejects J. M. E. McTaggart's sophisticated argumentation for the unreality of time based on his common-sense impression that time exists. He holds that his simple common-sense impression is much more certain than that McTaggart's arguments are sound, even though Moore was unable to pinpoint where McTaggart's arguments went wrong. According to his method, common sense constitutes an evidence base. This base may be used to eliminate philosophical theories that stray too far away from it, that are abstruse from its perspective. This can happen because either the theory itself or consequences that can be drawn from it violate common sense. For common sense philosophers, it is not the task of philosophy to question common sense. Instead, they should analyze it to formulate theories in accordance with it.
One important argument against this method is that common sense has often been wrong in the past, as is exemplified by various scientific discoveries. This suggests that common sense is in such cases just an antiquated theory that is eventually eliminated by the progress of science. For example, Albert Einstein's theory of relativity constitutes a radical departure from the common-sense conception of space and time, and quantum physics poses equally serious problems to how we tend to think about how elementary particles behave. This puts into question that common sense is a reliable source of knowledge. Another problem is that for many issues, there is no one universally accepted common-sense opinion. In such cases, common sense only amounts to the majority opinion, which should not be blindly accepted by researchers. This problem can be approached by articulating a weaker version of the common-sense method. One such version is defended by Roderick Chisholm, who allows that theories violating common sense may still be true. He contends that, in such cases, the theory in question is prima facie suspect and the burden of proof is always on its side. But such a shift in the burden of proof does not constitute a blind belief in common sense since it leaves open the possibility that, for various issues, there is decisive evidence against the common-sense opinion.
=== Ordinary language philosophy ===
The method of ordinary language philosophy consists in tackling philosophical questions based on how the related terms are used in ordinary language. In this sense, it is related to the method of common sense but focuses more on linguistic aspects. Some types of ordinary language philosophy only take a negative form in that they try to show how philosophical problems are not real problems at all. Instead, it is aimed to show that false assumptions, to which humans are susceptible due to the confusing structure of natural language, are responsible for this false impression. Other types take more positive approaches by defending and justifying philosophical claims, for example, based on what sounds insightful or odd to the average English speaker.
One problem for ordinary language philosophy is that regular speakers may have many different reasons for using a certain expression. Sometimes they intend to express what they believe, but other times they may be motivated by politeness or other conversational norms independent of the truth conditions of the expressed sentences. This significantly complicates ordinary language philosophy, since philosophers have to take the specific context of the expression into account, which may considerably alter its meaning. This criticism is partially mitigated by J. L. Austin's approach to ordinary language philosophy. According to him, ordinary language already has encoded many important distinctions and is our point of departure in theorizing. But "ordinary language is not the last word: in principle, it can everywhere be supplemented and improved upon and superseded". However, it also falls prey to another criticism: that it is often not clear how to distinguish ordinary from non-ordinary language. This makes it difficult in all but the paradigmatic cases to decide whether a philosophical claim is or is not supported by ordinary language.
=== Intuition and thought experiments ===
Methods based on intuition, like ethical intuitionism, use intuitions to evaluate whether a philosophical claim is true or false. In this context, intuitions are seen as a non-inferential source of knowledge: they consist in the impression of correctness one has when considering a certain claim. They are intellectual seemings that make it appear to the thinker that the considered proposition is true or false without the need to consider arguments for or against the proposition. This is sometimes expressed by saying that the proposition in question is self-evident. Examples of such propositions include "torturing a sentient being for fun is wrong" or "it is irrational to believe both something and its opposite". But not all defenders of intuitionism restrict intuitions to self-evident propositions. Instead, often weaker non-inferential impressions are also included as intuitions, such as a mother's intuition that her child is innocent of a certain crime.
Intuitions can be used in various ways as a philosophical method. On the one hand, philosophers may consult their intuitions in relation to very general principles, which may then be used to deduce further theorems. Another technique, which is often applied in ethics, consists in considering concrete scenarios instead of general principles. This often takes the form of thought experiments, in which certain situations are imagined with the goal of determining the possible consequences of the imagined scenario. These consequences are assessed using intuition and counterfactual thinking. For this reason, thought experiments are sometimes referred to as intuition pumps: they activate the intuitions concerning the specific situation, which may then be generalized to arrive at universal principles. In some cases, the imagined scenario is physically possible but it would not be feasible to carry out an actual experiment due to costs, negative consequences, or technological limitations. But other thought experiments even work with scenarios that defy what is physically possible. It is controversial to what extent thought experiments merit characterization as real experiments and whether the insights they provide are reliable.
One problem with intuitions in general and thought experiments in particular consists in assessing their epistemological status, i.e. whether, how much, and in which circumstances they provide justification in comparison to other sources of knowledge. Some of its defenders claim that intuition is a reliable source of knowledge just like perception, with the difference being that it happens without the sensory organs. Others compare it not to perception but to the cognitive ability to evaluate counterfactual conditionals, which may be understood as the capacity to answer what-if questions. But the reliability of intuitions has been contested by its opponents. For example, wishful thinking may be the reason why it intuitively seems to a person that a proposition is true without providing any epistemological support for this proposition. Another objection, often raised in the empirical and naturalist tradition, is that intuitions do not constitute a reliable source of knowledge since the practitioner restricts themselves to an inquiry from their armchair instead of looking at the world to make empirical observations.
=== Reflective equilibrium ===
Reflective equilibrium is a state in which a thinker has the impression that they have considered all the relevant evidence for and against a theory and have made up their mind on this issue. It is a state of coherent balance among one's beliefs. This does not imply that all the evidence has really been considered, but it is tied to the impression that engaging in further inquiry is unlikely to make one change one's mind, i.e. that one has reached a stable equilibrium. In this sense, it is the endpoint of the deliberative process on the issue in question. The philosophical method of reflective equilibrium aims at reaching this type of state by mentally going back and forth between all relevant beliefs and intuitions. In this process, the thinker may have to let go of some beliefs or deemphasize certain intuitions that do not fit into the overall picture in order to progress.
In this wide sense, reflective equilibrium is connected to a form of coherentism about epistemological justification and is thereby opposed to foundationalist attempts at finding a small set of fixed and unrevisable beliefs from which to build one's philosophical theory. One problem with this wide conception of the reflective equilibrium is that it seems trivial: it is a truism that the rational thing to do is to consider all the evidence before making up one's mind and to strive towards building a coherent perspective. But as a method to guide philosophizing, this is usually too vague to provide specific guidance.
When understood in a more narrow sense, the method aims at finding an equilibrium between particular intuitions and general principles. On this view, the thinker starts with intuitions about particular cases and formulates general principles that roughly reflect these intuitions. The next step is to deal with the conflicts between the two by adjusting both the intuitions and the principles to reconcile them until an equilibrium is reached. One problem with this narrow interpretation is that it depends very much on the intuitions one started with. This means that different philosophers may start with very different intuitions and may therefore be unable to find a shared equilibrium. For example, the narrow method of reflective equilibrium may lead some moral philosophers towards utilitarianism and others towards Kantianism.
=== Pragmatic method ===
The pragmatic method assesses the truth or falsity of theories by looking at the consequences of accepting them. In this sense, "[t]he test of truth is utility: it's true if it works". Pragmatists approach intractable philosophical disputes in a down-to-earth fashion by asking about the concrete consequences associated, for example, with whether an abstract metaphysical theory is true or false. This is also intended to clarify the underlying issues by spelling out what would follow from them. Another goal of this approach is to expose pseudo-problems, which involve a merely verbal disagreement without any genuine difference on the level of the consequences between the competing standpoints.
Succinct summaries of the pragmatic method base it on the pragmatic maxim, of which various versions exist. An important version is due to Charles Sanders Peirce: "Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object." Another formulation is due to William James: "To develop perfect clearness in our thoughts of an object, then, we need only consider what effects of a conceivable practical kind the object may involve – what sensations we are to expect from it and what reactions we must prepare". Various criticisms of the pragmatic method have been raised. For example, it is commonly rejected that the terms "true" and "useful" mean the same thing. A closely related problem is that believing in a certain theory may be useful to one person and useless to another, which would mean the same theory is both true and false.
=== Transcendental method ===
The transcendental method (German: Transzendentale Methodenlehre) is used to study phenomena by reflecting on the conditions of possibility of these phenomena. This method usually starts out with an obvious fact, often about our mental life, such as what we know or experience. It then goes on to argue that for this fact to obtain, other facts also have to obtain: they are its conditions of possibility. This type of argument is called "transcendental argument": it argues that these additional assumptions also have to be true because otherwise, the initial fact would not be the case. For example, it has been used to argue for the existence of an external world based on the premise that the experience of the temporal order of our mental states would not be possible otherwise. Another example argues in favor of a description of nature in terms of concepts such as motion, force, and causal interaction based on the claim that an objective account of nature would not be possible otherwise.
Transcendental arguments have faced various challenges. On the one hand, the claim that the belief in a certain assumption is necessary for the experience of a certain entity is often not obvious. So in the example above, critics can argue against the transcendental argument by denying the claim that an external world is necessary for the experience of the temporal order of our mental states. But even if this point is granted, it does not guarantee that the assumption itself is true. So even if the belief in a given proposition is a psychological necessity for a certain experience, it does not automatically follow that this belief itself is true. Instead, it could be the case that humans are just wired in such a way that they have to believe in certain false assumptions.
=== Experimental philosophy ===
Experimental philosophy is the most recent development of the methods discussed in this article: it began only in the early years of the 21st century. Experimental philosophers try to answer philosophical questions by gathering empirical data. It is an interdisciplinary approach that applies the methods of psychology and the cognitive sciences to topics studied by philosophy. This usually takes the form of surveys probing the intuitions of ordinary people and then drawing conclusions from the findings. For example, one such inquiry came to the conclusion that justified true belief may be sufficient for knowledge despite various Gettier cases claiming to show otherwise. The method of experimental philosophy can be used in either a negative or a positive program. As a negative program, it aims to challenge traditional philosophical movements and positions. This can be done, for example, by showing how the intuitions used to defend certain claims vary a lot depending on factors such as culture, gender, or ethnicity. This variation casts doubt on the reliability of the intuitions and thereby also on theories supported by them. As a positive program, it uses empirical data to support its own philosophical claims. It differs from other philosophical methods in that it usually studies the intuitions of ordinary people and uses them, and not the experts' intuitions, as philosophical evidence.
One problem for both the positive and the negative approaches is that the data obtained from surveys do not constitute hard empirical evidence since they do not directly express the intuitions of the participants. The participants may react to subtle pragmatic cues in giving their answers, which brings with it the need for further interpretation in order to get from the given answers to the intuitions responsible for these answers. Another problem concerns the question of how reliable the intuitions of ordinary people on the often very technical issues are. The core of this objection is that, for many topics, the opinions of ordinary people are not very reliable since they have little familiarity with the issues themselves and the underlying problems they may pose. For this reason, it has been argued that they cannot replace the expert intuitions found in trained philosophers. Some critics have even argued that experimental philosophy does not really form part of philosophy. This objection does not reject that the method of experimental philosophy has value, it just rejects that this method belongs to philosophical methodology.
=== Others ===
Various other philosophical methods have been proposed. The Socratic method or Socratic debate is a form of cooperative philosophizing in which one philosopher usually first states a claim, which is then scrutinized by their interlocutor by asking them questions about various related claims, often with the implicit goal of putting the initial claim into doubt. It continues to be a popular method for teaching philosophy. Plato and Aristotle emphasize the role of wonder in the practice of philosophy. On this view, "philosophy begins in wonder" and "[i]t was their wonder, astonishment, that first led men to philosophize and still leads them". This position is also adopted in the more recent philosophy of Nicolai Hartmann. Various other types of methods were discussed in ancient Greek philosophy, like analysis, synthesis, dialectics, demonstration, definition, and reduction to absurdity. The medieval philosopher Thomas Aquinas identifies composition and division as ways of forming propositions while he sees invention and judgment as forms of reasoning from the known to the unknown.
Various methods for the selection between competing theories have been proposed. They often focus on the theoretical virtues of the involved theories. One such method is based on the idea that, everything else being equal, the simpler theory is to be preferred. Another gives preference to the theory that provides the best explanation. According to the method of epistemic conservatism, we should, all other things being equal, prefer the theory which, among its competitors, is the most conservative, i.e. the one closest to the beliefs we currently hold. One problem with these methods of theory selection is that it is usually not clear how the different virtues are to be weighted, often resulting in cases where they are unable to resolve disputes between competing theories that excel at different virtues.
Methodological naturalism holds that all philosophical claims are synthetic claims that ultimately depend for their justification or rejection on empirical observational evidence. In this sense, philosophy is continuous with the natural sciences in that they both give priority to the scientific method for investigating all areas of reality.
According to truthmaker theorists, every true proposition is true because another entity, its truthmaker, exists. This principle can be used as a methodology to critically evaluate philosophical theories. In particular, this concerns theories that accept certain truths but are unable to provide their truthmaker. Such theorists are derided as ontological cheaters. For example, this can be applied to philosophical presentism, the view that nothing outside the present exists. Philosophical presentists usually accept the very common belief that dinosaurs existed but have trouble in providing a truthmaker for this belief since they deny existence to past entities.
In philosophy, the term "genealogical method" refers to a form of criticism that tries to expose commonly held beliefs by uncovering their historical origin and function. For example, it may be used to reject specific moral claims or the status of truth by giving a concrete historical reconstruction of how their development was contingent on power relations in society. This is usually accompanied by the assertion that these beliefs were accepted and became established, because of non-rational considerations, such as because they served the interests of a predominant class.
== Disagreements and influence ==
The disagreements within philosophy concern not only which first-order philosophical claims are true but also the second-order issue of which philosophical methods to use. One way to evaluate philosophical methods is to assess how well they do at solving philosophical problems. The question of the nature of philosophy has important implications for which methods of inquiry are appropriate to philosophizing. Seeing philosophy as an empirical science brings its methods much closer to the methods found in the natural sciences. Seeing it as the attempt to clarify concepts and increase understanding, on the other hand, usually leads to a methodology much more focused on a priori reasoning. In this sense, philosophical methodology is closely tied up with the question of how philosophy is to be defined. Different conceptions of philosophy often associate it with different goals, leading to certain methods being more or less suited to reach the corresponding goal.
The interest in philosophical methodology has risen a lot in contemporary philosophy. But some philosophers reject its importance by emphasizing that "preoccupation with questions about methods tends to distract us from prosecuting the methods themselves". However, such objections are often dismissed by pointing out that philosophy is at its core a reflective and critical enterprise, which is perhaps best exemplified by its preoccupation with its own methods. This is also backed up by the arguments to the effect that one's philosophical method has important implications for how one does philosophy and which philosophical claims one accepts or rejects. Since philosophy also studies the methodology of other disciplines, such as the methods of science, it has been argued that the study of its own methodology is an essential part of philosophy.
In several instances in the history of philosophy, the discovery of a new philosophical method, such as Cartesian doubt or the phenomenological method, has had important implications both on how philosophers conducted their theorizing and what claims they set out to defend. In some cases, such discoveries led the involved philosophers to overly optimistic outlooks, seeing them as historic breakthroughs that would dissolve all previous disagreements in philosophy.
== Relation to other fields ==
=== Science ===
The methods of philosophy differ in various respects from the methods found in the natural sciences. One important difference is that philosophy does not use experimental data obtained through measuring equipment like telescopes or cloud chambers to justify its claims. For example, even philosophical naturalists emphasizing the close relation between philosophy and the sciences mostly practice a form of armchair theorizing instead of gathering empirical data. Experimental philosophers are an important exception: they use methods found in social psychology and other empirical sciences to test their claims.
One reason for the methodological difference between philosophy and science is that philosophical claims are usually more speculative and cannot be verified or falsified by looking through a telescope. This problem is not solved by citing works published by other philosophers, since it only defers the question of how their insights are justified. An additional complication concerning testimony is that different philosophers often defend mutually incompatible claims, which poses the challenge of how to select between them. Another difference between scientific and philosophical methodology is that there is wide agreement among scientists concerning their methods, testing procedures, and results. This is often linked to the fact that science has seen much more progress than philosophy.
=== Epistemology ===
An important goal of philosophical methods is to assist philosophers in attaining knowledge. This is often understood in terms of evidence. In this sense, philosophical methodology is concerned with the questions of what constitutes philosophical evidence, how much support it offers, and how to acquire it. In contrast to the empirical sciences, it is often claimed that empirical evidence is not used in justifying philosophical theories, that philosophy is less about the empirical world and more about how we think about the empirical world. In this sense, philosophy is often identified with conceptual analysis, which is concerned with explaining concepts and showing their interrelations. Philosophical naturalists often reject this line of thought and hold that empirical evidence can confirm or disconfirm philosophical theories, at least indirectly.
Philosophical evidence, which may be obtained, for example, through intuitions or thought experiments, is central for justifying basic principles and axioms. These principles can then be used as premises to support further conclusions. Some approaches to philosophical methodology emphasize that these arguments have to be deductively valid, i.e. that the truth of their premises ensures the truth of their conclusion. In other cases, philosophers may commit themselves to working hypotheses or norms of investigation even though they lack sufficient evidence. Such assumptions can be quite fruitful in simplifying the possibilities the philosopher needs to consider and by guiding them to ask interesting questions. But the lack of evidence makes this type of enterprise vulnerable to criticism.
== See also ==
Scholarly method
Scientific method
Historical method
Dialectic
== References ==
== External links ==
Philosophical methodology at PhilPapers | Wikipedia/Philosophical_methodology |
In metaphysics, the A series and the B series are two different descriptions of the temporal ordering relation among events. The two series differ principally in their use of tense to describe the temporal relation between events and the resulting ontological implications regarding time.
John McTaggart introduced these terms in 1908, in an argument for the unreality of time. They are now commonly used by contemporary philosophers of time.
== History ==
Metaphysical debate about temporal orderings reaches back to the ancient Greek philosophers Heraclitus and Parmenides. Parmenides thought that reality is timeless and unchanging. Heraclitus, in contrast, believed that the world is a process of ceaseless change, flux and decay. Reality for Heraclitus is dynamic and ephemeral, in a state of constant flux, as in his famous statement that it is impossible to step twice into the same river (since the river is flowing).
== McTaggart's series ==
McTaggart distinguished the ancient conceptions as a set of relations. According to McTaggart, there are two distinct modes in which all events can be ordered in time.
=== A series ===
In the first mode, events are ordered as future, present, and past. Futurity and pastness allow of degrees, while the present does not. When we speak of time in this way, we are speaking in terms of a series of positions which run from the remote past through the recent past to the present, and from the present through the near future all the way to the remote future. The essential characteristic of this descriptive modality is that one must think of the series of temporal positions as being in continual transformation, in the sense that an event is first part of the future, then part of the present, and then part of the past. Moreover, the assertions made according to this modality correspond to the temporal perspective of the person who utters them. This is the A series of temporal events.
Although originally McTaggart defined tenses as relational qualities, i.e. qualities that events possess by standing in a certain relation to something outside of time (that does not change its position in time), today it is popularly believed that he treated tenses as monadic properties. Later philosophers have independently inferred that McTaggart must have understood tense as monadic because English tenses are normally expressed by the non-relational singular predicates "is past", "is present" and "is future", as noted by R. D. Ingthorsson.
=== B series ===
From a second point of view, events can be ordered according to a different series of temporal positions by way of two-term relations that are asymmetric, irreflexive and transitive (forming a strict partial order): "earlier than" (or precedes) and "later than" (or follows).
An important difference between the two series is that while events continuously change their position in the A series, their position in the B series does not. If an event is ever earlier than some events and later than the rest, it is always earlier than and later than those very events. Furthermore, while events acquire their A series determinations through a relation to something outside of time, their B series determinations hold between the events that constitute the B series. This is the B series, and the philosophy that says all truths about time can be reduced to B series statements is the B-theory of time.
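The formal properties of the B-series relation can be checked mechanically. The following sketch is purely illustrative (the event names and dates are invented for the example, not drawn from the cited literature); it verifies that "earlier than" over a toy set of dated events is irreflexive, asymmetric, and transitive, i.e. a strict partial order.

```python
from itertools import product

# Toy B series: events with fixed dates; their "earlier than" ordering never changes.
events = {"E": 1908, "F": 1925, "G": 2025}

def earlier(a, b):
    """B-series relation: a is earlier than b."""
    return events[a] < events[b]

names = list(events)
# Irreflexive: no event is earlier than itself.
assert not any(earlier(a, a) for a in names)
# Asymmetric: if a is earlier than b, then b is not earlier than a.
assert all(not earlier(b, a) for a, b in product(names, names) if earlier(a, b))
# Transitive: earlier-than chains compose.
assert all(earlier(a, c)
           for a, b, c in product(names, names, names)
           if earlier(a, b) and earlier(b, c))
print("'earlier than' is a strict partial order on these events")
```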
=== Distinctions ===
The logic and the linguistic expression of the two series are radically different. The A series is tensed and the B series is tenseless. For example, the assertion "today it is raining" is a tensed assertion because it depends on the temporal perspective—the present—of the person who utters it, while the assertion "It rained on 28 May 2025" is tenseless because it does not so depend. From the point of view of their truth-values, the two propositions are identical (both true or both false) if the first assertion is made on 28 May 2025. The non-temporal relation of precedence between two events, say "E precedes F", does not change over time (excluding from this discussion the issue of the relativity of temporal order of causally disconnected events in the theory of relativity). On the other hand, the character of being "past, present or future" of the events "E" or "F" does change with time. In the image of McTaggart the passage of time consists in the fact that terms ever further in the future pass into the present ... or that the present advances toward terms ever farther in the future. If we assume the first point of view, we speak as if the B series slides along a fixed A series. If we assume the second point of view, we speak as if the A series slides along a fixed B series.
== Relation to other ideas in the philosophy of time ==
There are two principal varieties of the A-theory, presentism and the growing block universe. Both assume an objective present, but presentism assumes that only present objects exist, while the growing block universe assumes both present and past objects exist, but not future ones. Views that assume no objective present and are therefore versions of the B-theory include eternalism and four-dimensionalism.
Vincent Conitzer argues that A-theory is related to Benj Hellie's vertiginous question and Caspar Hare's ideas of egocentric presentism and perspectival realism. He argues that A-theory being true and "now" being metaphysically distinguished from other moments of time implies that the "I" is also metaphysically distinguished from other first-person perspectives.
== See also ==
Endurantism
New riddle of induction
Perdurantism
The Unreality of Time
== Notes ==
== References ==
Craig, William Lane, The Tensed Theory of Time, Springer, 2000.
Craig, William Lane, The Tenseless Theory of Time, Springer, 2010.
Ingthorsson, R. D., "McTaggart's Paradox", Routledge, 2016.
McTaggart, J. E., 'The Unreality of Time', Mind, 1908.
McTaggart, J. E., The Nature of Existence, vols. 1-2, Cambridge University Press, Cambridge, 1968.
Bradley, F. H., The Principles of Logic, Oxford University Press, Oxford, 1922.
== External links ==
"Notes on McTaggart, 'The Unreality of Time'". Seminar on Philosophy and Time, Trinity University, 2005.
Zalta, Edward N. (ed.). "McTaggart's A series and B series". Stanford Encyclopedia of Philosophy. | Wikipedia/A-theory_of_time |
A metamaterial (from the Greek word μετά meta, meaning "beyond" or "after", and the Latin word materia, meaning "matter" or "material") is a type of material engineered to have a property, typically rarely observed in naturally occurring materials, that is derived not from the properties of the base materials but from their newly designed structures. Metamaterials are usually fashioned from multiple materials, such as metals and plastics, and are usually arranged in repeating patterns, at scales that are smaller than the wavelengths of the phenomena they influence. Their precise shape, geometry, size, orientation, and arrangement give them their "smart" properties of manipulating electromagnetic, acoustic, or even seismic waves: by blocking, absorbing, enhancing, or bending waves, to achieve benefits that go beyond what is possible with conventional materials.
Appropriately designed metamaterials can affect waves of electromagnetic radiation or sound in a manner not observed in bulk materials. Those that exhibit a negative index of refraction for particular wavelengths have been the focus of a large amount of research. These materials are known as negative-index metamaterials.
Potential applications of metamaterials are diverse and include sports equipment, optical filters, medical devices, remote aerospace applications, sensor detection and infrastructure monitoring, smart solar power management, lasers, crowd control, radomes, high-frequency battlefield communication and lenses for high-gain antennas, improving ultrasonic sensors, and even shielding structures from earthquakes. Metamaterials offer the potential to create super-lenses. Such a lens can allow imaging below the diffraction limit that is the minimum resolution d=λ/(2NA) that can be achieved by conventional lenses having a numerical aperture NA and with illumination wavelength λ. Sub-wavelength optical metamaterials, when integrated with optical recording media, can be used to achieve optical data density higher than limited by diffraction. A form of 'invisibility' was demonstrated using gradient-index materials. Acoustic and seismic metamaterials are also research areas.
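The conventional limit quoted above can be made concrete with a quick calculation. The sketch below evaluates d = λ/(2·NA) for a visible wavelength and a high-NA oil-immersion objective; the specific values (650 nm, NA = 1.4) are illustrative choices, not figures taken from a particular cited experiment.

```python
def diffraction_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Minimum resolvable feature size d = lambda / (2 * NA) for a conventional lens."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Illustrative values: 650 nm illumination, oil-immersion objective with NA = 1.4.
d = diffraction_limit(650.0, 1.4)
print(f"Conventional diffraction limit: about {d:.0f} nm")  # ~232 nm
```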
Metamaterial research is interdisciplinary and involves such fields as electrical engineering, electromagnetics, classical optics, solid state physics, microwave and antenna engineering, optoelectronics, material sciences, nanoscience and semiconductor engineering. Recent developments also show promise for metamaterials in optical computing, with metamaterial-based systems theoretically being able to perform certain tasks more efficiently than conventional computing.
== History ==
Explorations of artificial materials for manipulating electromagnetic waves began at the end of the 19th century. Some of the earliest structures that may be considered metamaterials were studied by Jagadish Chandra Bose, who in 1898 researched substances with chiral properties. Karl Ferdinand Lindman studied wave interaction with metallic helices as artificial chiral media in the early twentieth century.
In the late 1940s, Winston E. Kock from AT&T Bell Laboratories developed materials that had similar characteristics to metamaterials. In the 1950s and 1960s, artificial dielectrics were studied for lightweight microwave antennas. Microwave radar absorbers were researched in the 1980s and 1990s as applications for artificial chiral media.
Negative-index materials were first described theoretically by Victor Veselago in 1967. He proved that such materials could transmit light. He showed that the phase velocity could be made anti-parallel to the direction of Poynting vector. This is contrary to wave propagation in naturally occurring materials.
In 1995, John M. Guerra fabricated a sub-wavelength transparent grating (later called a photonic metamaterial) having 50 nm lines and spaces, and then coupled it with a standard oil immersion microscope objective (the combination later called a super-lens) to resolve a grating in a silicon wafer also having 50 nm lines and spaces. This super-resolved image was achieved with illumination having a wavelength of 650 nm in air.
In 2000, John Pendry was the first to identify a practical way to make a left-handed metamaterial, a material in which the right-hand rule is not followed. Such a material allows an electromagnetic wave to convey energy (have a group velocity) against its phase velocity. Pendry hypothesized that metallic wires aligned along the direction of a wave could provide negative permittivity (dielectric function ε < 0). Natural materials (such as ferroelectrics) display negative permittivity; the challenge was achieving negative permeability (μ < 0). In 1999, Pendry demonstrated that a split ring (C shape) with its axis placed along the direction of wave propagation could do so. In the same paper, he showed that a periodic array of wires and rings could give rise to a negative refractive index. Pendry also proposed a related negative-permeability design, the Swiss roll.
In 2000, David R. Smith et al. reported the experimental demonstration of functioning electromagnetic metamaterials by horizontally stacking, periodically, split-ring resonators and thin wire structures. A method was provided in 2002 to realize negative-index metamaterials using artificial lumped-element loaded transmission lines in microstrip technology. In 2003, complex (both real and imaginary parts of) negative refractive index and imaging by flat lens using left handed metamaterials were demonstrated. By 2007, experiments that involved negative refractive index had been conducted by many groups. At microwave frequencies, the first, imperfect invisibility cloak was realized in 2006.
From the standpoint of governing equations, contemporary researchers can classify the realm of metamaterials into three primary branches: Electromagnetic/Optical wave metamaterials, other wave metamaterials, and diffusion metamaterials. These branches are characterized by their respective governing equations, which include Maxwell's equations (a wave equation describing transverse waves), other wave equations (for longitudinal and transverse waves), and diffusion equations (pertaining to diffusion processes). Crafted to govern a range of diffusion activities, diffusion metamaterials prioritize diffusion length as their central metric. This crucial parameter experiences temporal fluctuations while remaining immune to frequency variations. In contrast, wave metamaterials, designed to adjust various wave propagation paths, consider the wavelength of incoming waves as their essential metric. This wavelength remains constant over time, though it adjusts with frequency alterations. Fundamentally, the key metrics for diffusion and wave metamaterials present a stark divergence, underscoring a distinct complementary relationship between them. For comprehensive information, refer to Section I.B, "Evolution of metamaterial physics," in Ref.
== Electromagnetic metamaterials ==
An electromagnetic metamaterial affects electromagnetic waves that impinge on or interact with its structural features, which are smaller than the wavelength. To behave as a homogeneous material accurately described by an effective refractive index, its features must be much smaller than the wavelength.
The unusual properties of metamaterials arise from the resonant response of each constituent element rather than their spatial arrangement into a lattice. This allows the material to be described by local effective material parameters (permittivity and permeability). The resonance effect related to the mutual arrangement of elements is responsible for Bragg scattering, which underlies the physics of photonic crystals, another class of electromagnetic materials. Unlike the local resonances, Bragg scattering and the corresponding Bragg stop-band have a low-frequency limit determined by the lattice spacing. The subwavelength approximation ensures that the Bragg stop-bands with the strong spatial dispersion effects are at higher frequencies and can be neglected. The criterion for shifting the local resonance below the lower Bragg stop-band makes it possible to build a photonic phase transition diagram in a parameter space, for example, size and permittivity of the constituent element. Such a diagram displays the domain of structure parameters allowing the observation of metamaterial properties in the electromagnetic material.
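A minimal sketch of the distinction discussed above: it compares a unit-cell size with the operating wavelength and with the lowest Bragg condition (λ ≈ 2a at normal incidence for a simple periodic structure). The factor-of-ten threshold used here is an illustrative rule of thumb, not a value taken from the article's sources.

```python
def regime(cell_size_nm: float, wavelength_nm: float, ratio: float = 10.0) -> str:
    """Classify a periodic structure by comparing its cell size a with the wavelength.

    'ratio' is an illustrative rule of thumb: a < lambda/ratio is treated as the
    effective-medium (metamaterial) regime; near the Bragg condition lambda ~ 2a,
    spatial dispersion and stop-bands dominate (photonic-crystal behaviour).
    """
    if cell_size_nm < wavelength_nm / ratio:
        return "effective medium (metamaterial) regime"
    if wavelength_nm <= 2.0 * cell_size_nm:
        return "Bragg / photonic-crystal regime"
    return "intermediate regime"

print(regime(cell_size_nm=50, wavelength_nm=650))   # effective medium
print(regime(cell_size_nm=280, wavelength_nm=560))  # Bragg / photonic-crystal
```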
For microwave radiation, the features are on the order of millimeters. Microwave frequency metamaterials are usually constructed as arrays of electrically conductive elements (such as loops of wire) that have suitable inductive and capacitive characteristics. Many microwave metamaterials use split-ring resonators.
Photonic metamaterials are structured on the nanometer scale and manipulate light at optical frequencies. Photonic crystals and frequency-selective surfaces such as diffraction gratings, dielectric mirrors and optical coatings exhibit similarities to subwavelength structured metamaterials. However, these are usually considered distinct from metamaterials, as their function arises from diffraction or interference and thus cannot be approximated as a homogeneous material. However, material structures such as photonic crystals are effective in the visible light spectrum. The middle of the visible spectrum has a wavelength of approximately 560 nm (for sunlight). Photonic crystal structures are generally half this size or smaller, that is < 280 nm.
Plasmonic metamaterials utilize surface plasmons, which are packets of electrical charge that collectively oscillate at the surfaces of metals at optical frequencies.
Frequency selective surfaces (FSS) can exhibit subwavelength characteristics and are known variously as artificial magnetic conductors (AMC) or High Impedance Surfaces (HIS). FSS display inductive and capacitive characteristics that are directly related to their subwavelength structure.
Electromagnetic metamaterials can be divided into different classes, as follows:
=== Negative refractive index ===
Negative-index metamaterials (NIM) are characterized by a negative index of refraction. Other terms for NIMs include "left-handed media", "media with a negative refractive index", and "backward-wave media". NIMs where the negative index of refraction arises from simultaneously negative permittivity and negative permeability are also known as double negative metamaterials or double negative materials (DNG).
Assuming a material well-approximated by a real permittivity and permeability, the relationship between permittivity εr, permeability μr and refractive index n is given by n = ±√(εrμr). All known non-metamaterial transparent materials (glass, water, ...) possess positive εr and μr. By convention the positive square root is used for n. However, some engineered metamaterials have εr < 0 and μr < 0. Because the product εrμr is positive, n is real; under such circumstances, it is necessary to take the negative square root for n. When both εr and μr are positive (negative), waves travel in the forward (backward) direction. Electromagnetic waves cannot propagate in materials with εr and μr of opposite sign, as the refractive index becomes imaginary. Such materials are opaque to electromagnetic radiation; examples include plasmonic materials such as metals (gold, silver, ...).
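The sign convention described above can be written out explicitly. The sketch below assumes purely real εr and μr, as in the simplified discussion; the function name and test values are illustrative.

```python
import cmath

def refractive_index(eps_r: float, mu_r: float) -> complex:
    """Refractive index n = +/- sqrt(eps_r * mu_r) for real material parameters.

    Both positive  -> positive real n (ordinary forward-wave medium).
    Both negative  -> negative real n (double-negative / backward-wave medium).
    Opposite signs -> purely imaginary n (evanescent, opaque medium).
    """
    n = cmath.sqrt(eps_r * mu_r)
    if eps_r < 0 and mu_r < 0:
        return -n  # take the negative branch of the square root
    return n

print(refractive_index(2.25, 1.0))   # ~  1.5 (glass-like)
print(refractive_index(-2.0, -1.5))  # ~ -1.73 (double-negative medium)
print(refractive_index(-2.0, 1.0))   # ~  1.41j (opaque)
```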
The foregoing considerations are simplistic for actual materials, which must have complex-valued εr and μr. The real parts of both εr and μr do not have to be negative for a passive material to display negative refraction. Indeed, a negative refractive index for circularly polarized waves can also arise from chirality. Metamaterials with negative n have numerous interesting properties:
Snell's law (n1 sin θ1 = n2 sin θ2) still describes refraction, but as n2 is negative, incident and refracted rays are on the same side of the surface normal at an interface of positive and negative index materials (a short numerical illustration follows this list).
Cherenkov radiation points the other way.
The time-averaged Poynting vector is antiparallel to the phase velocity. However, for waves (energy) to propagate, a –μ must be paired with a –ε in order to satisfy the wave-number dependence on the material parameters, kc = ω√(με).
Negative index of refraction derives mathematically from the vector triplet E, H and k.
For plane waves propagating in electromagnetic metamaterials, the electric field, magnetic field and wave vector follow a left-hand rule, the reverse of the behavior of conventional optical materials.
To date, only metamaterials exhibit a negative index of refraction.
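The refraction behaviour listed above can be checked numerically. In this illustrative sketch the signed refraction angle from Snell's law, n1 sin θ1 = n2 sin θ2, comes out negative for a negative-index second medium, meaning the refracted ray lies on the same side of the surface normal as the incident ray; the index values are assumed for the example.

```python
import math

def refraction_angle_deg(n1: float, n2: float, theta1_deg: float) -> float:
    """Signed refraction angle from Snell's law n1*sin(theta1) = n2*sin(theta2).

    A negative result means the refracted ray lies on the same side of the
    surface normal as the incident ray (negative refraction).
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no transmitted ray")
    return math.degrees(math.asin(s))

print(refraction_angle_deg(1.0, 1.5, 30.0))   # ~ +19.5 deg, ordinary refraction
print(refraction_angle_deg(1.0, -1.0, 30.0))  # ~ -30.0 deg, negative refraction
```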
=== Single negative ===
Single negative (SNG) metamaterials have either negative relative permittivity (εr) or negative relative permeability (μr), but not both. They act as metamaterials when combined with a different, complementary SNG, jointly acting as a DNG.
Epsilon negative media (ENG) display a negative εr while μr is positive. Many plasmas exhibit this characteristic. For example, noble metals such as gold or silver are ENG in the infrared and visible spectrums.
Mu-negative media (MNG) display a positive εr and negative μr. Gyrotropic or gyromagnetic materials exhibit this characteristic. A gyrotropic material is one that has been altered by the presence of a quasistatic magnetic field, enabling a magneto-optic effect. A magneto-optic effect is a phenomenon in which an electromagnetic wave propagates through such a medium. In such a material, left- and right-rotating elliptical polarizations can propagate at different speeds. When light is transmitted through a layer of magneto-optic material, the result is called the Faraday effect: the polarization plane can be rotated, forming a Faraday rotator. The results of such a reflection are known as the magneto-optic Kerr effect (not to be confused with the nonlinear Kerr effect). Two gyrotropic materials with reversed rotation directions of the two principal polarizations are called optical isomers.
Joining a slab of ENG material and a slab of MNG material resulted in properties such as resonances, anomalous tunneling, transparency and zero reflection. Like negative-index materials, SNGs are innately dispersive, so their εr, μr and refraction index n are a function of frequency.
=== Hyperbolic ===
Hyperbolic metamaterials (HMMs) behave as a metal for certain polarization or direction of light propagation and behave as a dielectric for the other due to the negative and positive permittivity tensor components, giving extreme anisotropy. The material's dispersion relation in wavevector space forms a hyperboloid and therefore it is called a hyperbolic metamaterial. The extreme anisotropy of HMMs leads to directional propagation of light within and on the surface. HMMs have shown various potential applications, such as sensing, reflection modulator, all-optical ultra-fast switching for integrated photonics, imaging, super high resolution and single photon source, steering of optical signals, enhanced plasmon resonance effects.
=== Bandgap ===
Electromagnetic bandgap metamaterials (EBG or EBM) control light propagation. This is accomplished either with photonic crystals (PC) or left-handed materials (LHM). PCs can prohibit light propagation altogether. Both classes can allow light to propagate in specific, designed directions and both can be designed with bandgaps at desired frequencies. The period size of EBGs is an appreciable fraction of the wavelength, creating constructive and destructive interference.
PCs are distinguished from sub-wavelength structures, such as tunable metamaterials, because a PC derives its properties from its bandgap characteristics. PCs are sized to match the wavelength of light, versus other metamaterials that expose sub-wavelength structure. Furthermore, PCs function by diffracting light. In contrast, metamaterials do not use diffraction.
PCs have periodic inclusions that inhibit wave propagation due to the inclusions' destructive interference from scattering. The photonic bandgap property of PCs makes them the electromagnetic analog of electronic semi-conductor crystals.
EBGs have the goal of creating high quality, low loss, periodic, dielectric structures. An EBG affects photons in the same way semiconductor materials affect electrons. PCs are the perfect bandgap material, because they allow no light propagation. Each unit of the prescribed periodic structure acts like one atom, albeit of a much larger size.
EBGs are designed to prevent the propagation of an allocated bandwidth of frequencies, for certain arrival angles and polarizations. Various geometries and structures have been proposed to fabricate EBG's special properties. In practice it is impossible to build a flawless EBG device.
EBGs have been manufactured for frequencies ranging from a few gigahertz (GHz) to a few terahertz (THz), radio, microwave and mid-infrared frequency regions. EBG application developments include a transmission line, woodpiles made of square dielectric bars and several different types of low gain antennas.
=== Double positive medium ===
Double positive mediums (DPS) do occur in nature, such as naturally occurring dielectrics. Permittivity and magnetic permeability are both positive and wave propagation is in the forward direction. Artificial materials have been fabricated which combine DPS, ENG and MNG properties.
=== Bi-isotropic and bianisotropic ===
Categorizing metamaterials into double or single negative, or double positive, normally assumes that the metamaterial has independent electric and magnetic responses described by ε and μ. However, in many cases, the electric field causes magnetic polarization, while the magnetic field induces electrical polarization, known as magnetoelectric coupling. Such media are denoted as bi-isotropic. Media that exhibit magnetoelectric coupling and that are anisotropic (which is the case for many metamaterial structures), are referred to as bi-anisotropic.
Four material parameters are intrinsic to magnetoelectric coupling of bi-isotropic media. They are the electric (E) and magnetic (H) field strengths, and electric (D) and magnetic (B) flux densities. These parameters are ε, μ, κ and χ or permittivity, permeability, strength of chirality, and the Tellegen parameter, respectively. In this type of media, material parameters do not vary with changes along a rotated coordinate system of measurements. In this sense they are invariant or scalar.
The intrinsic magnetoelectric parameters, κ and χ, affect the phase of the wave. The effect of the chirality parameter is to split the refractive index. In isotropic media this results in wave propagation only if ε and μ have the same sign. In bi-isotropic media with χ assumed to be zero, and κ a non-zero value, different results appear. Either a backward wave or a forward wave can occur. Alternatively, two forward waves or two backward waves can occur, depending on the strength of the chirality parameter.
In the general case, the constitutive relations for bi-anisotropic materials read
D = εE + ξH,
B = ζE + μH,
where ε and μ are the permittivity and the permeability tensors, respectively, whereas ξ and ζ are the two magneto-electric tensors. If the medium is reciprocal, permittivity and permeability are symmetric tensors, and ξ = −ζᵀ = −iκᵀ, where κ is the chiral tensor describing the chiral electromagnetic and reciprocal magneto-electric response. The chiral tensor can be expressed as κ = (1/3) tr(κ) I + N + J, where tr(κ) is the trace of κ, I is the identity matrix, N is a symmetric trace-free tensor, and J is an antisymmetric tensor. Such a decomposition allows us to classify the reciprocal bianisotropic response, and we can identify the following three main classes: (i) chiral media (tr(κ) ≠ 0, N ≠ 0, J = 0), (ii) pseudochiral media (tr(κ) = 0, N ≠ 0, J = 0), (iii) omega media (tr(κ) = 0, N = 0, J ≠ 0).
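The decomposition of κ described above maps directly onto a few lines of linear algebra. The following sketch is illustrative (the function name, tolerance, and test tensors are assumptions made for the example): it splits a real 3×3 magneto-electric tensor into its trace part, symmetric trace-free part N, and antisymmetric part J, and applies the three-class labelling given above.

```python
import numpy as np

def classify_bianisotropy(kappa, tol=1e-9):
    """Decompose kappa = (1/3) tr(kappa) I + N + J and classify the reciprocal response."""
    kappa = np.asarray(kappa, dtype=float)
    trace_part = np.trace(kappa) / 3.0 * np.eye(3)
    symmetric = 0.5 * (kappa + kappa.T)
    N = symmetric - trace_part            # symmetric, trace-free part
    J = 0.5 * (kappa - kappa.T)           # antisymmetric part
    has_trace = abs(np.trace(kappa)) > tol
    has_N = np.linalg.norm(N) > tol
    has_J = np.linalg.norm(J) > tol
    if has_trace and has_N and not has_J:
        return "chiral medium"
    if not has_trace and has_N and not has_J:
        return "pseudochiral medium"
    if not has_trace and not has_N and has_J:
        return "omega medium"
    return "other / mixed bianisotropic response"

print(classify_bianisotropy(np.diag([0.3, 0.1, 0.1])))                 # chiral medium
print(classify_bianisotropy(np.diag([0.1, -0.1, 0.0])))                # pseudochiral medium
print(classify_bianisotropy([[0, 0.1, 0], [-0.1, 0, 0], [0, 0, 0]]))   # omega medium
```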
=== Chiral ===
Handedness of metamaterials is a potential source of confusion as the metamaterial literature includes two conflicting uses of the terms left- and right-handed. The first refers to one of the two circularly polarized waves that are the propagating modes in chiral media. The second relates to the triplet of electric field, magnetic field and Poynting vector that arise in negative refractive index media, which in most cases are not chiral.
Generally a chiral and/or bianisotropic electromagnetic response is a consequence of 3D geometrical chirality: 3D-chiral metamaterials are composed by embedding 3D-chiral structures in a host medium and they show chirality-related polarization effects such as optical activity and circular dichroism. The concept of 2D chirality also exists and a planar object is said to be chiral if it cannot be superposed onto its mirror image unless it is lifted from the plane. 2D-chiral metamaterials that are anisotropic and lossy have been observed to exhibit directionally asymmetric transmission (reflection, absorption) of circularly polarized waves due to circular conversion dichroism. On the other hand, bianisotropic response can arise from geometrical achiral structures possessing neither 2D nor 3D intrinsic chirality. Plum and colleagues investigated magneto-electric coupling due to extrinsic chirality, where the arrangement of a (achiral) structure together with the radiation wave vector is different from its mirror image, and observed large, tuneable linear optical activity, nonlinear optical activity, specular optical activity and circular conversion dichroism. Rizza et al. suggested 1D chiral metamaterials where the effective chiral tensor is not vanishing if the system is geometrically one-dimensional chiral (the mirror image of the entire structure cannot be superposed onto it by using translations without rotations).
3D-chiral metamaterials are constructed from chiral materials or resonators in which the effective chirality parameter κ is non-zero. Wave propagation properties in such chiral metamaterials demonstrate that negative refraction can be realized in metamaterials with a strong chirality and positive εr and μr. This is because the refractive index n has distinct values for left and right circularly polarized waves, given by n = ±√(εrμr) ± κ. It can be seen that a negative index will occur for one polarization if κ > √(εrμr). In this case, it is not necessary that either or both εr and μr be negative for backward wave propagation. A negative refractive index due to chirality was first observed simultaneously and independently by Plum et al. and Zhang et al. in 2009.
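The role of the chirality parameter in the formula above can be made concrete with a short numerical check; the parameter values here are purely illustrative.

```python
import math

def chiral_indices(eps_r, mu_r, kappa):
    """Refractive indices n = sqrt(eps_r * mu_r) +/- kappa for the two circular
    polarizations, assuming positive eps_r and mu_r as in the strong-chirality
    route to negative refraction described above."""
    n0 = math.sqrt(eps_r * mu_r)
    return n0 + kappa, n0 - kappa

n_plus, n_minus = chiral_indices(eps_r=1.2, mu_r=1.0, kappa=1.5)
print(n_plus, n_minus)  # ~ 2.60 and ~ -0.40: one polarization sees a negative index
```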
=== FSS based ===
Frequency selective surface-based metamaterials block signals in one waveband and pass those at another waveband. They have become an alternative to fixed frequency metamaterials. They allow for optional changes of frequencies in a single medium, rather than the restrictive limitations of a fixed frequency response.
== Mechanical metamaterials (Elastic metamaterials) ==
Mechanical metamaterials are rationally designed artificial materials/structures of precision geometrical arrangements leading to unusual physical and mechanical properties. These unprecedented properties are often derived from their unique internal structures rather than the materials from which they are made. Inspiration for mechanical metamaterials design often comes from biological materials (such as honeycombs and cells), from molecular and crystalline unit cell structures as well as the artistic fields of origami and kirigami. While early mechanical metamaterials had regular repeats of simple unit cell structures, increasingly complex units and architectures are now being explored. Mechanical metamaterials can be seen as a counterpart to the rather well-known family of optical metamaterials and electromagnetic metamaterials. Mechanical properties, including elasticity, viscoelasticity, and thermoelasticity, are central to the design of mechanical metamaterials. They are often also referred to as elastic metamaterials or elastodynamic metamaterials. Their mechanical properties can be designed to have values that cannot be found in nature, such as negative stiffness, negative Poisson’s ratio, negative compressibility, and vanishing shear modulus. In addition to classical mechanical metamaterials, there has been growing attention to active mechanical metamaterials with advanced functionalities. These enable "intelligent mechanical metamaterials", which are programmable material systems capable of sensing, energy harvesting, actuation, communication, and information processing—to interact with their surrounding environments, optimize their response, and create a sense–decide–respond loop.
== Other types ==
=== Acoustic ===
Acoustic metamaterials control, direct and manipulate sound in the form of sonic, infrasonic or ultrasonic waves in gases, liquids and solids. As with electromagnetic waves, sonic waves can exhibit negative refraction.
Control of sound waves is mostly accomplished through the bulk modulus β, mass density ρ and chirality. The bulk modulus and density are analogs of permittivity and permeability in electromagnetic metamaterials. Related to this is the mechanics of sound wave propagation in a lattice structure. Also materials have mass and intrinsic degrees of stiffness. Together, these form a resonant system and the mechanical (sonic) resonance may be excited by appropriate sonic frequencies (for example audible pulses).
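To make the analogy concrete, the sketch below computes the sound speed c = √(β/ρ) and an acoustic refractive index relative to a reference medium from a bulk modulus and mass density; the numerical values are textbook-like figures for air and water and are only illustrative.

```python
import math

def sound_speed(bulk_modulus_pa: float, density_kg_m3: float) -> float:
    """Sound speed c = sqrt(beta / rho); beta and rho play roles analogous to those
    of permittivity and permeability in electromagnetic metamaterials."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

def acoustic_index(c_reference: float, c_medium: float) -> float:
    """Acoustic refractive index relative to a reference medium, n = c_ref / c_medium."""
    return c_reference / c_medium

c_air = sound_speed(1.42e5, 1.2)      # ~ 344 m/s
c_water = sound_speed(2.2e9, 1000.0)  # ~ 1483 m/s
print(c_air, c_water, acoustic_index(c_air, c_water))  # index < 1 for water relative to air
```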
=== Structural ===
Structural metamaterials are a type of mechanical metamaterial that provide properties such as crushability and lightweight characteristics. Using projection micro-stereolithography, microlattices can be created using forms much like trusses and girders. Materials four orders of magnitude stiffer than conventional aerogel, but with the same density, have been created. Such materials can withstand a load of at least 160,000 times their own weight by over-constraining the materials.
A ceramic nanotruss metamaterial can be flattened and revert to its original state.
=== Thermal ===
Typically materials found in nature, when homogeneous, are thermally isotropic. That is to say, heat passes through them at roughly the same rate in all directions. However, thermal metamaterials are anisotropic usually due to their highly organized internal structure. Composite materials with highly aligned internal particles or structures, such as fibers, and carbon nanotubes (CNT), are examples of this.
=== Nonlinear ===
Metamaterials may be fabricated that include some form of nonlinear media, whose properties change with the power of the incident wave. Nonlinear media are essential for nonlinear optics. Most optical materials have a relatively weak response, meaning that their properties change by only a small amount for large changes in the intensity of the electromagnetic field. The local electromagnetic fields of the inclusions in nonlinear metamaterials can be much larger than the average value of the field. Besides, remarkable nonlinear effects have been predicted and observed if the metamaterial effective dielectric permittivity is very small (epsilon-near-zero media). In addition, exotic properties such as a negative refractive index, create opportunities to tailor the phase matching conditions that must be satisfied in any nonlinear optical structure.
=== Liquid ===
Metafluids offer programmable properties such as viscosity, compressibility, and optical response. One approach employed 50-500 micron diameter air-filled elastomer spheres suspended in silicone oil. The spheres compress under pressure and regain their shape when the pressure is relieved, and their properties differ across those two states. Unpressurized, they scatter light, making them opaque. Under pressure, they collapse into half-moon shapes, focusing light and becoming transparent. The pressure response could allow them to act as a sensor or as a dynamic hydraulic fluid. Like a cornstarch suspension, such a metafluid can act as either a Newtonian or a non-Newtonian fluid: under pressure it becomes non-Newtonian, meaning its viscosity changes in response to shear force.
=== Hall metamaterials ===
In 2009, Marc Briane and Graeme Milton proved mathematically that one can in principle invert the sign of the effective Hall coefficient of a three-dimensional composite built from three constituent materials whose Hall coefficients all have the same sign. Later, in 2015, Muamer Kadic et al. showed that a simple perforation of an isotropic material can lead to a change of sign of its Hall coefficient. This theoretical claim was finally demonstrated experimentally by Christian Kern et al.
In 2015, Christian Kern et al. also demonstrated that an anisotropic perforation of a single material can lead to a yet more unusual effect, namely the parallel Hall effect. This means that the induced electric field inside a conducting medium is no longer orthogonal to the current and the magnetic field but is actually parallel to the latter.
=== Meta-biomaterials ===
Meta-biomaterials are a type of mechanical metamaterial purposefully designed to interact with biological systems, integrating principles from both metamaterial science and biological disciplines. Engineered at the nanoscale, these materials adeptly manipulate electromagnetic, acoustic, or thermal properties to facilitate biological processes. Through meticulous adjustment of their structure and composition, meta-biomaterials hold promise in augmenting various biomedical technologies such as medical imaging, drug delivery, and tissue engineering. This underscores the importance of comprehending biological systems through the interdisciplinary lens of materials science.
== Frequency bands ==
=== Terahertz ===
Terahertz metamaterials interact at terahertz frequencies, usually defined as 0.1 to 10 THz. Terahertz radiation lies at the far end of the infrared band, just after the end of the microwave band. This corresponds to millimeter and submillimeter wavelengths between the 3 mm (EHF band) and 0.03 mm (long-wavelength edge of far-infrared light).
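The frequency-to-wavelength correspondence quoted above follows from λ = c/f; a short illustrative check:

```python
c = 299_792_458.0  # speed of light in vacuum, m/s
for f_thz in (0.1, 1.0, 10.0):
    wavelength_mm = c / (f_thz * 1e12) * 1e3
    print(f"{f_thz} THz -> {wavelength_mm:.3f} mm")
# 0.1 THz -> ~3 mm and 10 THz -> ~0.03 mm, matching the band edges above
```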
=== Photonic ===
Photonic metamaterials interact with optical frequencies (mid-infrared). The sub-wavelength period distinguishes them from photonic band gap structures.
=== Tunable ===
Tunable metamaterials allow the refractive index response to be adjusted arbitrarily as the frequency changes. A tunable metamaterial goes beyond the bandwidth limitations of fixed left-handed materials by combining various types of metamaterials.
=== Plasmonic ===
Plasmonic metamaterials exploit surface plasmons, which are produced by the interaction of light with metal–dielectric interfaces. Under specific conditions, the incident light couples with the surface plasmons to create self-sustaining, propagating electromagnetic waves or surface waves known as surface plasmon polaritons. Bulk plasma oscillations make possible the effect of negative mass (density).
== Applications ==
Metamaterials are under consideration for many applications. Metamaterial antennas are commercially available.
In 2007, one researcher stated that for metamaterial applications to be realized, energy loss must be reduced, materials must be extended into three-dimensional isotropic materials and production techniques must be industrialized.
=== Antennas ===
Metamaterial antennas are a class of antennas that use metamaterials to improve performance. Demonstrations showed that metamaterials could enhance an antenna's radiated power. Materials that can attain negative permeability allow for properties such as small antenna size, high directivity and tunable frequency.
=== Absorber ===
A metamaterial absorber manipulates the loss components of metamaterials' permittivity and magnetic permeability, to absorb large amounts of electromagnetic radiation. This is a useful feature for photodetection and solar photovoltaic applications. Loss components are also relevant in applications of negative refractive index (photonic metamaterials, antenna systems) or transformation optics (metamaterial cloaking, celestial mechanics), but often are not used in these applications.
=== Superlens ===
A superlens is a two or three-dimensional device that uses metamaterials, usually with negative refraction properties, to achieve resolution beyond the diffraction limit (ideally, infinite resolution). Such a behavior is enabled by the capability of double-negative materials to yield negative phase velocity. The diffraction limit is inherent in conventional optical devices or lenses.
=== Cloaking devices ===
Metamaterials are a potential basis for a practical cloaking device. The proof of principle was demonstrated on October 19, 2006. No practical cloaks are publicly known to exist.
=== Radar cross-section (RCS-)reducing metamaterials ===
Metamaterials have applications in stealth technology, which reduces RCS in any of various ways (e.g., absorption, diffusion, redirection). Conventionally, the RCS has been reduced either by radar-absorbent material (RAM) or by purpose shaping of the targets such that the scattered energy can be redirected away from the source. While RAMs have narrow frequency band functionality, purpose shaping limits the aerodynamic performance of the target. More recently, metamaterials or metasurfaces have been synthesized that can redirect the scattered energy away from the source using either array theory or generalized Snell's law. This has led to aerodynamically favorable shapes for the targets with the reduced RCS.
=== Seismic protection ===
Seismic metamaterials counteract the adverse effects of seismic waves on man-made structures.
=== Sound filtering ===
Metamaterials textured with nanoscale wrinkles could control sound or light signals, such as changing a material's color or improving ultrasound resolution. Uses include nondestructive material testing, medical diagnostics and sound suppression. The materials can be made through a high-precision, multi-layer deposition process. The thickness of each layer can be controlled within a fraction of a wavelength. The material is then compressed, creating precise wrinkles whose spacing can cause scattering of selected frequencies.
=== Guided mode manipulations ===
Metamaterials can be integrated with optical waveguides to tailor guided electromagnetic waves (meta-waveguide). Subwavelength structures like metamaterials can be integrated with, for instance, silicon waveguides to develop polarization beam splitters and optical couplers, adding new degrees of freedom for controlling light propagation at the nanoscale in integrated photonic devices. Other applications such as integrated mode converters, polarization (de)multiplexers, structured light generation, and on-chip bio-sensors can be developed.
== Theoretical models ==
All materials are made of atoms, which are dipoles. These dipoles modify the light velocity by a factor n (the refractive index). In a split-ring resonator the ring and wire units act as atomic dipoles: the wire acts as a ferroelectric atom, the ring acts as an inductor L, and the open section acts as a capacitor C. The ring as a whole therefore acts as an LC circuit. When the electromagnetic field passes through the ring, an induced current is created, and the generated field is perpendicular to the light's magnetic field. The magnetic resonance results in a negative permeability; the refractive index is negative as well. (The lens is not truly flat, since the structure's capacitance imposes a slope for the electric induction.)
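A minimal lumped-element sketch of the split-ring picture (the symbols L and C are those introduced above; the effective-permeability form is a commonly used model, quoted here as an illustration rather than from the article's sources): the ring-and-gap structure resonates at

```latex
\omega_0 = \frac{1}{\sqrt{LC}}, \qquad
\mu_{\text{eff}}(\omega) = 1 - \frac{F\,\omega^{2}}{\omega^{2} - \omega_0^{2} + i\gamma\omega}
```

where F is a geometric filling factor and γ a damping rate; just above ω_0 the effective permeability can swing negative, which is the resonance referred to above.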
Several (mathematical) material models describe the frequency response in DNG (double-negative) media. One of these is the Lorentz model, which describes electron motion in terms of a driven, damped harmonic oscillator. The Debye relaxation model applies when the acceleration component of the Lorentz model is small compared to the other components of the equation. The Drude model applies when the restoring-force component is negligible, and the coupling coefficient is generally the plasma frequency. Other component distinctions call for the use of one of these models, depending on its polarity or purpose.
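For orientation, the Lorentz permittivity and its Drude limit (standard forms, quoted for illustration rather than drawn from the article's references) can be written as

```latex
\varepsilon(\omega) \;=\; 1 + \frac{\omega_p^{2}}{\omega_0^{2} - \omega^{2} - i\gamma\omega}
\;\;\xrightarrow{\;\omega_0 \to 0\;}\;\;
\varepsilon(\omega) \;=\; 1 - \frac{\omega_p^{2}}{\omega^{2} + i\gamma\omega}
```

where ω_p is the plasma frequency, ω_0 the resonance frequency set by the restoring force, and γ the damping rate; dropping the restoring force (ω_0 → 0) recovers the Drude model, while neglecting the acceleration term instead leads to the Debye relaxation form.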
Three-dimensional composites of metal/non-metallic inclusions periodically/randomly embedded in a low permittivity matrix are usually modeled by analytical methods, including mixing formulas and scattering-matrix based methods. The particle is modeled by either an electric dipole parallel to the electric field or a pair of crossed electric and magnetic dipoles parallel to the electric and magnetic fields, respectively, of the applied wave. These dipoles are the leading terms in the multipole series. They are the only existing ones for a homogeneous sphere, whose polarizability can be easily obtained from the Mie scattering coefficients. In general, this procedure is known as the "point-dipole approximation", which is a good approximation for metamaterials consisting of composites of electrically small spheres. Merits of these methods include low calculation cost and mathematical simplicity.
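As a concrete example of such a mixing formula (the Maxwell Garnett rule is one standard choice; its use here is an illustrative assumption, not a claim about which formula any particular study employs), the effective permittivity of small spheres embedded in a host can be sketched as:

```python
# Illustrative sketch: Maxwell Garnett mixing rule for dilute spherical
# inclusions, a homogenization formula at the "point-dipole" level discussed above.
def maxwell_garnett(eps_incl: complex, eps_host: complex, fill: float) -> complex:
    """Effective permittivity of spheres (eps_incl, volume fraction fill) in a host (eps_host)."""
    num = eps_incl + 2 * eps_host + 2 * fill * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - fill * (eps_incl - eps_host)
    return eps_host * num / den

# Example (hypothetical values): 10% loading of a metal-like inclusion in a dielectric host.
print(maxwell_garnett(-10 + 1j, 2.25, 0.10))
```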
Three conceptions (negative-index medium, non-reflecting crystal, and superlens) are foundations of metamaterial theory. Other first-principles techniques for analyzing triply-periodic electromagnetic media may be found in Computing photonic band structure.
== Institutional networks ==
=== MURI ===
The Multidisciplinary University Research Initiative (MURI) encompasses dozens of universities and a few government organizations. Participating universities include UC Berkeley, UC Los Angeles, UC San Diego, the Massachusetts Institute of Technology, and Imperial College London. The sponsors are the Office of Naval Research and the Defense Advanced Research Projects Agency.
MURI supports research that intersects more than one traditional science and engineering discipline to accelerate both research and translation to applications. As of 2009, 69 academic institutions were expected to participate in 41 research efforts.
=== Metamorphose ===
The Virtual Institute for Artificial Electromagnetic Materials and Metamaterials "Metamorphose VI AISBL" is an international association that promotes artificial electromagnetic materials and metamaterials. It organizes scientific conferences, supports specialized journals, creates and manages research programs, provides training programs (including PhD programs and training for industrial partners), and fosters technology transfer to European industry.
== See also ==
Metamaterials Handbook
Metamaterials: Physics and Engineering Explorations
Metasurface
npj Metamaterials (journal), Nature Portfolio
Artificial dielectrics—macroscopic analogues of naturally occurring dielectrics that came into use with the radar microwave technologies developed between the 1940s and 1970s.
METATOY (Metamaterial for rays)—composed of super-wavelength structures, such as small arrays of prisms and lenses and can operate over a broad band of frequencies
Magnonics
== References ==
== External links ==
Media related to Metamaterials at Wikimedia Commons
UK academic and end-user community funded by UKRI: UK Metamaterials Network
UK Government Rapid Technology Assessment looking at Metamaterials
PwC Tech Translated: Metamaterials
Centre for Metamaterial Research and Innovation, University of Exeter, UK www.metamaterials.center
Institute of Physics, Impact Project Pathway "Commercialising Metamaterials" | Wikipedia/Metamaterials |
Bundle theory, originated by the 18th-century Scottish philosopher David Hume, is an ontological theory of objecthood according to which an object consists only of a collection (bundle) of properties, relations or tropes.
According to bundle theory, an object consists of its properties and nothing more; thus, there cannot be an object without properties and one cannot conceive of such an object. For example, when we think of an apple, we think of its properties: redness, roundness, being a type of fruit, etc. There is nothing above and beyond these properties; the apple is nothing more than the collection of its properties. In particular, there is no substance in which the properties are inherent.
Bundle theory has been contrasted with the ego theory of the self, which views the egoic self as a soul-like substance existing in the same manner as the corporeal self.
== Arguments in favor ==
The difficulty of conceiving of or describing an object without also conceiving of or describing its properties is a common justification for bundle theory, especially among current philosophers in the Anglo-American tradition.
The inability to comprehend any aspect of the thing other than its properties implies, this argument maintains, that one cannot conceive of a bare particular (a substance without properties), an implication that directly opposes substance theory. The conceptual difficulty of bare particulars was illustrated by John Locke when he described a substance by itself, apart from its properties as "something, I know not what. [...] The idea then we have, to which we give the general name substance, being nothing but the supposed, but unknown, support of those qualities we find existing, which we imagine cannot subsist sine re substante, without something to support them, we call that support substantia; which, according to the true import of the word, is, in plain English, standing under or upholding."
Whether a relation of an object is one of its properties may complicate such an argument. However, the argument concludes that the conceptual challenge of bare particulars leaves a bundle of properties and nothing more as the only possible conception of an object, thus justifying bundle theory.
== Objections ==
Bundle theory maintains that properties are bundled together in a collection without describing how they are tied together. For example, bundle theory regards an apple as red, four inches (100 mm) wide, and juicy but lacking an underlying substance. The apple is said to be a bundle of properties including redness, being four inches (100 mm) wide, and juiciness.
Hume used the term "bundle" in this sense, also referring to the personal identity, in his main work: "I may venture to affirm of the rest of mankind, that they are nothing but a bundle or collection of different perceptions, which succeed each other with inconceivable rapidity, and are in a perpetual flux and movement".
Critics question how bundle theory accounts for the properties' compresence (the togetherness relation between those properties) without an underlying substance. Critics also question how any two given properties are determined to be properties of the same object if there is no substance in which they both inhere. This argument is done away with if one considers spatio-temporal location to be a property as well.
Traditional bundle theory explains the compresence of properties by defining an object as a collection of properties bound together. Thus, different combinations of properties and relations produce different objects. Redness and juiciness, for example, may be found together on top of the table because they are part of a bundle of properties located on the table, one of which is the "looks like an apple" property.
By contrast, substance theory explains the compresence of properties by asserting that the properties are found together because it is the substance that has those properties. In substance theory, a substance is the thing in which properties inhere. For example, redness and juiciness are found on top of the table because redness and juiciness inhere in an apple, making the apple red and juicy.
The bundle theory of substance explains compresence. Specifically, it maintains that properties' compresence itself engenders a substance. Thus, it determines substancehood empirically by the togetherness of properties rather than by a bare particular or by any other non-empirical underlying strata. The bundle theory of substance thus rejects the substance theories of Aristotle, Descartes, Leibniz, and more recently, J. P. Moreland, Jia Hou, Joseph Bridgman, Quentin Smith, and others.
== Buddhism ==
The Indian Madhyamaka philosopher, Chandrakirti, used the aggregate nature of objects to demonstrate the lack of essence in what is known as the sevenfold reasoning. In his work, Guide to the Middle Way (Sanskrit: Madhyamakāvatāra), he says:
[The self] is like a cart, which is not other than its parts, not non-other, and does not possess them. It is not within its parts, and its parts are not within it. It is not the mere collection, and it is not the shape.
He goes on to explain what is meant by each of these seven assertions, but briefly in a subsequent commentary he explains that the conventions of the world do not exist essentially when closely analyzed, but exist only through being taken for granted, without being subject to scrutiny that searches for an essence within them.
Another view of the Buddhist theory of the self, especially in early Buddhism, is that the Buddhist theory is essentially an eliminativist theory. According to this understanding, the self can not be reduced to a bundle because there is nothing that answers to the concept of a self. Consequently, the idea of a self must be eliminated.
== See also ==
Anattā
Humeanism § Bundle theory of the self
Platonic realism
Substance theory
== References ==
== Further reading ==
David Hume (1738), A Treatise of Human Nature, Book I, Part IV, Section VI
Derek Parfit (1984), Reasons and Persons
== External links ==
Robinson, Howard. "Substance". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. | Wikipedia/Bundle_theory |
In ontology, the theory of categories concerns itself with the categories of being: the highest genera or kinds of entities. To investigate the categories of being, or simply categories, is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. Various systems of categories have been proposed; they often include categories for substances, properties, relations, states of affairs or events. A representative question within the theory of categories is, for example: "Are universals prior to particulars?"
== Early development ==
The process of abstraction required to discover the number and names of the categories of being has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle's ideas. For example, Gilbert of Poitiers divides Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not:
Primary categories: Substance, Relation, Quantity and Quality
Secondary categories: Place, Time, Situation, Condition, Action, Passion
Furthermore, following Porphyry’s likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses, for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent. An alternative line of development was taken by Plotinus in the third century, who by a process of abstraction reduced Aristotle's list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality, correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation. Many supposed that relations only exist in the mind. Substance and Relation, then, are closely commutative with Matter and Mind; this is expressed most clearly in the dualism of René Descartes.
=== Vaisheshika ===
=== Stoic ===
=== Aristotle ===
One of Aristotle’s early interests lay in the classification of the natural world, how for example the genus "animal" could be first divided into "two-footed animal" and then into "wingless, two-footed animal". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition "this animal is ..." Aristotle stated in his work on the Categories that there were ten kinds of predicate where ...
"... each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon".
He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the "categorical" or inherent type of relation. For Aristotle the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories, for example "this is a horse running". More complex kinds of proposition were only discovered after Aristotle by the Stoic Chrysippus, who developed the "hypothetical" and "disjunctive" types of syllogism, terms which were to be developed through the Middle Ages and were to reappear in Kant's system of categories.
Category came into use with Aristotle's essay Categories, in which he discussed univocal and equivocal terms, predication, and ten categories:
Substance, essence (ousia) – examples of primary substance: this man, this horse; secondary substance (species, genera): man, horse
Quantity (poson, how much), discrete or continuous – examples: two cubits long, number, space, (length of) time.
Quality (poion, of what kind or description) – examples: white, black, grammatical, hot, sweet, curved, straight.
Relation (pros ti, toward something) – examples: double, half, large, master, knowledge.
Place (pou, where) – examples: in a marketplace, in the Lyceum
Time (pote, when) – examples: yesterday, last year
Position, posture, attitude (keisthai, to lie) – examples: sitting, lying, standing
State, condition (echein, to have or be) – examples: shod, armed
Action (poiein, to make or do) – examples: to lance, to heat, to cool (something)
Affection, passion (paschein, to suffer or undergo) – examples: to be lanced, to be heated, to be cooled
=== Plotinus ===
Plotinus in writing his Enneads around AD 250 recorded that "Philosophy at a very early age investigated the number and character of the existents ... some found ten, others less ... to some the genera were the first principles, to others only a generic classification of existents." He realised that some categories were reducible to others saying "Why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories and even the categories of Aristotle were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides and which comprised the following three coupled terms:
Unity/Plurality
Motion/Stability
Identity/Difference
Plotinus called these "the hearth of reality" deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process":
First, there existed the "One", and his view that "the origin of things is a contemplation"
The Second "is certainly an activity ... a secondary phase ... life streaming from life ... energy running through the universe"
The Third is some kind of Intelligence concerning which he wrote "Activity is prior to Intellection ... and self knowledge"
Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies." Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus who summed it up saying "Therefore, Unity, having from all eternity arrived by motion at duality, came to rest in Trinity."
== Modern development ==
Kant and Hegel accused the Aristotelian table of categories of being 'rhapsodic', derived arbitrarily and in bulk from experience, without any systematic necessity.
The early modern dualism, which has been described above, of Mind and Matter or Subject and Relation, as reflected in the writings of Descartes underwent a substantial revision in the late 18th century. The first objections to this stance were formulated in the eighteenth century by Immanuel Kant who realised that we can say nothing about Substance except through the relation of the subject to other things.
For example: In the sentence "This is a house" the substantive subject "house" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant's tables, and under the heading of Relation, Kant lists inter alia the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence. Sets of three continued to play an important part in the nineteenth century development of the categories, most notably in G.W.F. Hegel's extensive tabulation of categories, and in C.S. Peirce's categories set out in his work on the logic of relations. One of Peirce's contributions was to call the three primary categories Firstness, Secondness and Thirdness which both emphasises their general nature, and avoids the confusion of having the same name for both the category itself and for a concept within that category.
In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or "derivative" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, "Community" was an example that Kant gave of such a derivative category; the second, "Modality", introduced by Kant, was a term which Hegel, in developing Kant's dialectical method, showed could also be seen as a derivative category; and the third, "Spirit" or "Will" were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter and the three subjective categories of Intellect, Feeling and Volition, and he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled.
=== Kant ===
In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation). In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows.
Criticism of Kant's system followed, firstly, by Arthur Schopenhauer, who amongst other things was unhappy with the term "Community", and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners", and secondly, by W. T. Stace, who in his book The Philosophy of Hegel suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of concept.
=== Hegel ===
G.W.F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed the first principle of the world, which he called the "absolute", is "a system of categories … the categories must be the reason of which the world is a consequent".
Using his own logical method of sublation, later called the Hegelian dialectic, reasoning from the abstract through the negative to the concrete, he arrived at a hierarchy of some 270 categories, as explained by W. T. Stace. The three very highest categories were "logic", "nature" and "spirit". The three highest categories of "logic", however, he called "being", "essence", and "notion" which he explained as follows:
Being was differentiated from Nothing by containing with it the concept of the "other", an initial internal division that can be compared with Kant's category of disjunction. Stace called the category of Being the sphere of common sense containing concepts such as consciousness, sensation, quantity, quality and measure.
Essence. The "other" separates itself from the "one" by a kind of motion, reflected in Hegel's first synthesis of "becoming". For Stace this category represented the sphere of science containing within it firstly, the thing, its form and properties; secondly, cause, effect and reciprocity, and thirdly, the principles of classification, identity and difference.
Notion. Having passed over into the "Other" there is an almost neoplatonic return into a higher unity that in embracing the "one" and the "other" enables them to be considered together through their inherent qualities. This according to Stace is the sphere of philosophy proper where we find not only the three types of logical proposition: disjunctive, hypothetical, and categorical but also the three transcendental concepts of beauty, goodness and truth.
Schopenhauer's category that corresponded with "notion" was that of "idea", which in his Four-Fold Root of Sufficient Reason he complemented with the category of the "will". The title of his major work was The World as Will and Idea. The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming. At around the same time, Goethe was developing his colour theories in the Farbenlehre of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle.
== Twentieth-century development ==
In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein’s conclusion was that there were no clear definitions which we can give to words and categories but only a "halo" or "corona" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with "a galaxy of ideas" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. "university"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object). With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions "the house is on the creek" where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and "the house is eighteenth century" where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition "the house is impressive or sublime" where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality). Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical representing the furthest we can go in terms of analysis and abstraction and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit.
Apart from these, the categorial scheme of Alfred North Whitehead and his Process Philosophy, alongside Nicolai Hartmann and his Critical Realism, remain one of the most detailed and advanced systems in categorial research in metaphysics.
=== Peirce ===
Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings. Like Hegel, C.S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas.
"It seems that the true categories of consciousness are first, feeling ... second, a sense of resistance ... and third, synthetic consciousness, or thought".
Elsewhere he called the three primary categories: Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, "perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions":
Firstness (Quality): "The first is predominant in feeling ... we must think of a quality without parts, e.g. the colour of magenta ... When I say it is a quality I do not mean that it "inheres" in a subject ... The whole content of consciousness is made up of qualities of feeling, as truly as the whole of space is made up of points, or the whole of time by instants".
Secondness (Reaction): "This is present even in such a rudimentary fragment of experience as a simple feeling ... an action and reaction between our soul and the stimulus ... The idea of second is predominant in the ideas of causation and of statical force ... the real is active; we acknowledge it by calling it the actual".
Thirdness (Meaning): "Thirdness is essentially of a general nature ... ideas in which thirdness predominate [include] the idea of a sign or representation ... Every genuine triadic relation involves meaning ... the idea of meaning is irreducible to those of quality and reaction ... synthetical consciousness is the consciousness of a third or medium".
Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed before Hegel of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories in that although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the US, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge".
=== Others ===
Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology.
For Gilbert Ryle (1949), a category (in particular a "category mistake") is an important semantic concept, but one having only loose affinities to an ontological category.
Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (ontologist) (2003), and Jonathan Lowe (2006).
== See also ==
== References ==
== Selected bibliography ==
Aristotle, 1953. Metaphysics. Ross, W. D., trans. Oxford University Press.
--------, 2004. Categories, Edghill, E. M., trans. Uni. of Adelaide library.
John G. Bennett, 1956–1965. The Dramatic Universe. London, Hodder & Stoughton.
Gustav Bergmann, 1992. New Foundations of Ontology. Madison: Uni. of Wisconsin Press.
Browning, Douglas, 1990. Ontology and the Practical Arena. Pennsylvania State Uni.
Butchvarov, Panayot, 1979. Being qua Being: A Theory of Identity, Existence, and Predication. Indiana Uni. Press.
Roderick Chisholm, 1996. A Realistic Theory of Categories. Cambridge Uni. Press.
Feibleman, James Kern, 1951. Ontology. The Johns Hopkins Press (reprinted 1968, Greenwood Press, Publishers, New York).
Grossmann, Reinhardt, 1983. The Categorial Structure of the World. Indiana Uni. Press.
Grossmann, Reinhardt, 1992. The Existence of the World: An Introduction to Ontology. Routledge.
Haaparanta, Leila and Koskinen, Heikki J., 2012. Categories of Being: Essays on Metaphysics and Logic. New York: Oxford University Press.
Hoffman, J., and Rosenkrantz, G. S.,1994. Substance among other Categories. Cambridge Uni. Press.
Edmund Husserl, 1962. Ideas: General Introduction to Pure Phenomenology. Boyce Gibson, W. R., trans. Collier.
------, 2000. Logical Investigations, 2nd ed. Findlay, J. N., trans. Routledge.
Johansson, Ingvar, 1989. Ontological Investigations. Routledge, 2nd ed. Ontos Verlag 2004.
Kahn, Charles H., 2009. Essays on Being, Oxford University Press.
Immanuel Kant, 1998. Critique of Pure Reason. Guyer, Paul, and Wood, A. W., trans. Cambridge Uni. Press.
Charles Sanders Peirce, 1992, 1998. The Essential Peirce, vols. 1,2. Houser, Nathan et al., eds. Indiana Uni. Press.
Gilbert Ryle, 1949. The Concept of Mind. Uni. of Chicago Press.
Wilfrid Sellars, 1974, "Toward a Theory of the Categories" in Essays in Philosophy and Its History. Reidel.
Barry Smith, 2003. "Ontology" in Blackwell Guide to the Philosophy of Computing and Information. Blackwell.
== External links ==
Aristotle's Categories at MIT.
Thomasson, Amie. "Categories". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
"Ontological Categories and How to Use Them" – Amie Thomasson.
"Recent Advances in Metaphysics" – E. J. Lowe.
Theory and History of Ontology – Raul Corazzon. | Wikipedia/Theory_of_categories |
Physics is the scientific study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force. It is one of the most fundamental scientific disciplines. A scientist who specializes in the field of physics is called a physicist.
Physics is one of the oldest academic disciplines. Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century, these natural sciences branched into separate research endeavors. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy.
Advances in physics often enable new technologies. For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of technologies that have transformed modern society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus.
== History ==
The word physics comes from the Latin physica ('study of nature'), which itself is a borrowing of the Greek φυσική (phusikḗ 'natural science'), a term derived from φύσις (phúsis 'origin, nature, property').
=== Ancient astronomy ===
Astronomy is one of the oldest natural sciences. Early civilizations dating before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had a predictive knowledge and a basic awareness of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy: the stars were found to traverse great circles across the sky, although this regularity could not explain the positions of the planets.
According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere.
=== Natural philosophy ===
Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment; for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus.
=== Aristotle and Hellenistic physics ===
During the classical period in Greece (6th, 5th and 4th centuries BCE) and in Hellenistic times, natural philosophy developed along many lines of inquiry. Aristotle (Greek: Ἀριστοτέλης, Aristotélēs) (384–322 BCE), a student of Plato,
wrote on many subjects in the 4th century BCE, including a substantial treatise on Physics. Aristotelian physics was influential for about two millennia. His approach mixed some limited observation with logical deductive arguments, but did not rely on experimental verification of deduced statements. Aristotle's foundational work in Physics, though very imperfect, formed a framework against which later thinkers further developed the field. His approach is entirely superseded today.
He explained ideas such as motion (and gravity) with the theory of four elements.
Aristotle believed that each of the four classical elements (air, fire, water, earth) had its own natural place. Because of their differing densities, each element reverts to its own specific place in the atmosphere. So, because of their weights, fire would be at the top, air underneath fire, then water, then lastly earth. He also stated that when a small amount of one element enters the natural place of another, the less abundant element will automatically move towards its own natural place. For example, if there is a fire on the ground, the flames go up into the air in an attempt to return to their natural place. His laws of motion included: that heavier objects fall faster, with speed proportional to weight; and that the speed of a falling object depends inversely on the density of the medium it falls through (e.g. the density of air). He also stated that, in violent motion (motion of an object when a force is applied to it by a second object), the object moves only as fast or as far as the measure of force applied to it. The problem of motion and its causes was studied carefully, leading to the philosophical notion of a "prime mover" as the ultimate source of all motion in the world (Book 8 of his treatise Physics).
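In modern notation (an anachronistic paraphrase, offered only to make the stated proportionalities explicit, not a formula found in Aristotle), the rule for natural fall would read roughly:

```latex
v \;\propto\; \frac{W}{\rho}
```

where v is the falling speed, W the body's weight, and ρ the density of the medium; the later work of Galileo and Newton showed this relation to be incorrect.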
=== Medieval European and Islamic ===
The Western Roman Empire fell to invaders and internal decay in the fifth century, resulting in a decline in intellectual pursuits in western Europe. By contrast, the Eastern Roman Empire (usually known as the Byzantine Empire) resisted the attacks from invaders and continued to advance various fields of learning, including physics. In the sixth century, John Philoponus challenged the dominant Aristotelian approach to science although much of his work was focused on Christian theology.
In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest.
Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method.
The most notable innovations under Islamic scholarship were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics (also known as Kitāb al-Manāẓir), written by Ibn al-Haytham, in which he presented the alternative to the ancient Greek idea about vision. His discussed his experiments with camera obscura, showing that light moved in a straight line; he encouraged readers to reproduce his experiments making him one of the originators of the scientific method
=== Scientific Revolution ===
Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics.
Major developments in this period include the replacement of the geocentric model of the Solar System with the heliocentric Copernican model, the laws governing the motion of planetary bodies (determined by Johannes Kepler between 1609 and 1619), Galileo's pioneering work on telescopes and observational astronomy in the 16th and 17th centuries, and Isaac Newton's discovery and unification of the laws of motion and universal gravitation (that would come to bear his name). Newton, and separately Gottfried Wilhelm Leibniz, developed calculus, the mathematical study of continuous change, and Newton applied it to solve physical problems.
=== 19th century ===
The discovery of laws in thermodynamics, chemistry, and electromagnetics resulted from research efforts during the Industrial Revolution as energy needs increased. By the end of the 19th century, theories of thermodynamics, mechanics, and electromagnetics matched a wide variety of observations. Taken together these theories became the basis for what would later be called classical physics.
A few experimental results remained inexplicable. Classical electromagnetism presumed a medium, a luminiferous aether, to support the propagation of waves, but this medium could not be detected. The intensity of light from hot glowing blackbody objects did not match the predictions of thermodynamics and electromagnetism. The character of electron emission from illuminated metals differed from predictions. These failures, seemingly insignificant in the big picture, would upset the physics world in the first two decades of the 20th century.
=== 20th century ===
Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Classical mechanics predicted that the speed of light depends on the motion of the observer, which could not be resolved with the constant speed predicted by Maxwell's equations of electromagnetism. This discrepancy was corrected by Einstein's theory of special relativity, which replaced classical mechanics for fast-moving bodies and allowed for a constant speed of light. Black-body radiation provided another problem for classical physics, which was corrected when Planck proposed that the excitation of material oscillators is possible only in discrete steps proportional to their frequency. This, along with the photoelectric effect and a complete theory predicting discrete energy levels of electron orbitals, led to the theory of quantum mechanics improving on classical physics at very small scales.
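In symbols (standard textbook forms, quoted here for illustration rather than taken from the article's sources), Planck's quantization condition for a material oscillator of frequency ν and Einstein's photoelectric relation read:

```latex
E_n = n h \nu, \qquad E_{\text{kin}}^{\max} = h\nu - \phi
```

where h is Planck's constant and φ the work function of the illuminated metal.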
Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived. Following the discovery of a particle with properties consistent with the Higgs boson at CERN in 2012, all fundamental particles predicted by the standard model, and no others, appear to exist; however, physics beyond the Standard Model, with theories such as supersymmetry, is an active area of research. Areas of mathematics in general are important to this field, such as the study of probabilities and groups.
== Core theories ==
Physics deals with a wide variety of systems, although certain theories are used by all physicists. Each of these theories was experimentally tested numerous times and found to be an adequate approximation of nature.
These central theories are important tools for research into more specialized topics, and any physicist, regardless of their specialization, is expected to be literate in them. These include classical mechanics, quantum mechanics, thermodynamics and statistical mechanics, electromagnetism, and special relativity.
=== Distinction between classical and modern physics ===
In the first decades of the 20th century physics was revolutionized by the discoveries of quantum mechanics and relativity. The changes were so fundamental that these new concepts became the foundation of "modern physics", with other topics becoming "classical physics". The majority of applications of physics are essentially classical.
The laws of classical physics accurately describe systems whose important length scales are greater than the atomic scale and whose motions are much slower than the speed of light. Outside of this domain, observations do not match predictions provided by classical mechanics.
=== Classical theory ===
Classical physics includes the traditional branches and topics that were recognized and well-developed before the beginning of the 20th century—classical mechanics, thermodynamics, and electromagnetism. Classical mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies not subject to an acceleration), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics (known together as continuum mechanics), the latter of which includes such branches as hydrostatics, hydrodynamics and pneumatics. Acoustics is the study of how sound is produced, controlled, transmitted and received. Important modern branches of acoustics include ultrasonics, the study of sound waves of very high frequency beyond the range of human hearing; bioacoustics, the physics of animal calls and hearing; and electroacoustics, the manipulation of audible sound waves using electronics.
Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion, and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy. Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th century; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.
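The two induction statements in the paragraph above ("an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current") correspond to the Ampere-Maxwell and Faraday laws; in differential form (standard results, quoted for illustration):

```latex
\nabla \times \mathbf{B} \;=\; \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\qquad
\nabla \times \mathbf{E} \;=\; -\,\frac{\partial \mathbf{B}}{\partial t}
```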
=== Modern theory ===
The discovery of relativity and of quantum mechanics in the first decades of the 20th century transformed the conceptual basis of physics without reducing the practical value of most of the physical theories developed up to that time. Consequently, the topics of physics have come to be divided into "classical physics" and "modern physics", with the latter category including effects related to quantum mechanics and relativity.
Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid.
The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with motion in the absence of gravitational fields and the general theory of relativity with motion and its connection with gravitation. Both quantum theory and the theory of relativity find applications in many areas of modern physics.
Fundamental concepts in modern physics include:
Action
Causality
Covariance
Particle
Physical field
Physical interaction
Quantum
Statistical ensemble
Symmetry
Wave
== Research ==
=== Scientific method ===
Physicists use the scientific method to test the validity of a physical theory. By using a methodical approach to compare the implications of a theory with the conclusions drawn from its related experiments and observations, physicists are better able to test the validity of a theory in a logical, unbiased, and repeatable way. To that end, experiments are performed and observations are made in order to determine the validity or invalidity of a theory.
A scientific law is a concise verbal or mathematical statement of a relation that expresses a fundamental principle of some theory, such as Newton's law of universal gravitation.
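For instance, the law just cited takes the compact form (standard notation, with G the gravitational constant):

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2}
```

giving the attractive force F between two masses m_1 and m_2 separated by a distance r.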
=== Theory and experiment ===
Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future experimental results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena. Although theory and experiment are developed separately, they strongly affect and depend upon each other. Progress in physics frequently comes about when experimental results defy explanation by existing theories, prompting intense focus on applicable modelling, and when new theories generate experimentally testable predictions, which inspire the development of new experiments (and often related equipment).
Physicists who work at the interplay of theory and experiment are called phenomenologists, who study complex phenomena observed in experiment and work to relate them to a fundamental theory.
Theoretical physics has historically taken inspiration from philosophy; electromagnetism was unified this way. Beyond the known universe, the field of theoretical physics also deals with hypothetical issues, such as parallel universes, a multiverse, and higher dimensions. Theorists invoke these ideas in hopes of solving particular problems with existing theories; they then explore the consequences of these ideas and work toward making testable predictions.
Experimental physics expands, and is expanded by, engineering and technology. Experimental physicists who are involved in basic research design and perform experiments with equipment such as particle accelerators and lasers, whereas those involved in applied research often work in industry, developing technologies such as magnetic resonance imaging (MRI) and transistors. Feynman has noted that experimentalists may seek areas that have not been explored well by theorists.
=== Scope and aims ===
Physics covers a wide range of phenomena, from elementary particles (such as quarks, neutrinos, and electrons) to the largest superclusters of galaxies. Included in these phenomena are the most basic objects composing all other things. Therefore, physics is sometimes called the "fundamental science". Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. Thus, physics aims to both connect the things observable to humans to root causes, and then connect these causes together.
For example, the ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. This process of "unifying" forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics hopes to find an ultimate reason (theory of everything) for why nature is as it is (see section Current research below for more information).
=== Current research ===
Research in physics is continually progressing on a large number of fronts.
In condensed matter physics, an important unsolved theoretical problem is that of high-temperature superconductivity. Many condensed matter experiments are aiming to fabricate workable spintronics and quantum computers.
In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear. Foremost among these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem, and the physics of massive neutrinos remains an area of active theoretical and experimental research. The Large Hadron Collider has already found the Higgs boson, but future research aims to prove or disprove the supersymmetry, which extends the Standard Model of particle physics. Research on the nature of the major mysteries of dark matter and dark energy is also currently ongoing.
Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity, chaos, or turbulence are still poorly understood. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics remain unsolved; examples include the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, and self-sorting in shaken heterogeneous collections.
These complex phenomena have received growing attention since the 1970s for several reasons, including the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. Complex physics has become part of increasingly interdisciplinary research, as exemplified by the study of turbulence in aerodynamics and the observation of pattern formation in biological systems. Speaking in 1932, as recounted in the Annual Review of Fluid Mechanics, Horace Lamb said:
I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.
== Branches and fields ==
=== Fields ===
The major fields of physics, along with their subfields and the theories and concepts they employ, are shown in the following table.
Since the 20th century, the individual fields of physics have become increasingly specialised, and today most physicists work in a single field for their entire careers. "Universalists" such as Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare.
Contemporary research in physics can be broadly divided into nuclear and particle physics; condensed matter physics; atomic, molecular, and optical physics; astrophysics; and applied physics. Some physics departments also support physics education research and physics outreach.
==== Nuclear and particle ====
Particle physics is the study of the elementary constituents of matter and energy and the interactions between them. In addition, particle physicists design and develop the high-energy accelerators, detectors, and computer programs necessary for this research. The field is also called "high-energy physics" because many elementary particles do not occur naturally but are created only during high-energy collisions of other particles.
Currently, the interactions of elementary particles and fields are described by the Standard Model. The model accounts for the 12 known particles of matter (quarks and leptons) that interact via the strong, weak, and electromagnetic fundamental forces. Dynamics are described in terms of matter particles exchanging gauge bosons (gluons, W and Z bosons, and photons, respectively). The Standard Model also predicts a particle known as the Higgs boson. In July 2012 CERN, the European laboratory for particle physics, announced the detection of a particle consistent with the Higgs boson, an integral part of the Higgs mechanism.
Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided application in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology.
==== Atomic, molecular, and optical ====
Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions on the scale of single atoms and molecules. The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of their relevant energy scales. All three areas include classical, semi-classical, and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view).
Atomic physics studies the electron shells of atoms. Current research focuses on activities in quantum control, cooling and trapping of atoms and ions, low-temperature collision dynamics and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of nuclear physics.
Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm.
==== Condensed matter ====
Condensed matter physics is the field of physics that deals with the macroscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of particles in a system is extremely large and the interactions between them are strong.
The most familiar examples of condensed phases are solids and liquids, which arise from the bonding by way of the electromagnetic force between atoms. More exotic condensed phases include the superfluid and the Bose–Einstein condensate found in certain atomic systems at very low temperature, the superconducting phase exhibited by conduction electrons in certain materials, and the ferromagnetic and antiferromagnetic phases of spins on atomic lattices.
Condensed matter physics is the largest field of contemporary physics. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields. The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group—previously solid-state theory—in 1967. In 1978, the Division of Solid State Physics of the American Physical Society was renamed as the Division of Condensed Matter Physics. Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology and engineering.
==== Astrophysics ====
Astrophysics and astronomy are the application of the theories and methods of physics to the study of stellar structure, stellar evolution, the origin of the Solar System, and related problems of cosmology. Because astrophysics is a broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the Earth's atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy.
Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein's theory of relativity plays a central role in all modern cosmological theories. In the early 20th century, Hubble's discovery that the universe is expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang.
The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle. Cosmologists have recently established the ΛCDM model of the evolution of the universe, which includes cosmic inflation, dark energy, and dark matter.
== Other aspects ==
=== Education ===
=== Careers ===
=== Philosophy ===
Physics, as with the rest of science, relies on the philosophy of science and its "scientific method" to advance knowledge of the physical world. The scientific method employs a priori and a posteriori reasoning as well as the use of Bayesian inference to measure the validity of a given theory.
Study of the philosophical issues surrounding physics, the philosophy of physics, involves issues such as the nature of space and time, determinism, and metaphysical outlooks such as empiricism, naturalism, and realism.
Many physicists have written about the philosophical implications of their work, for instance Laplace, who championed causal determinism, and Erwin Schrödinger, who wrote on quantum mechanics. The mathematical physicist Roger Penrose has been called a Platonist by Stephen Hawking, a view Penrose discusses in his book, The Road to Reality. Hawking referred to himself as an "unashamed reductionist" and took issue with Penrose's views.
Mathematics provides a compact and exact language used to describe the order in nature. This was noted and advocated by Pythagoras, Plato, Galileo, and Newton. Some theorists, like Hilary Putnam and Penelope Maddy, hold that logical truths, and therefore mathematical reasoning, depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world, which may explain the peculiar relation between these fields.
Physics uses mathematics to organise and formulate experimental results. From those results, precise or estimated solutions are obtained, or quantitative results, from which new predictions can be made and experimentally confirmed or negated. The results from physics experiments are numerical data, with their units of measure and estimates of the errors in the measurements. Technologies based on mathematics, like computation have made computational physics an active area of research.
Ontology is a prerequisite for physics, but not for mathematics: physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus physics statements are synthetic, while mathematical statements are analytic. Mathematics contains hypotheses, while physics contains theories. Mathematical statements need only be logically true, while predictions of physics statements must match observed and experimental data.
The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical. The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning. The final mathematical solution has an easier-to-find meaning, because it is what the solver is looking for.
=== Fundamental vs. applied physics ===
Physics is a branch of fundamental science (also called basic science). Physics is also called "the fundamental science" because all branches of natural science including chemistry, astronomy, geology, and biology are constrained by laws of physics. Similarly, chemistry is often called the central science because of its role in linking the physical sciences. For example, chemistry studies properties, structures, and reactions of matter (chemistry's focus on the molecular and atomic scale distinguishes it from physics). Structures are formed because particles exert electrical forces on each other, properties include physical characteristics of given substances, and reactions are bound by laws of physics, like conservation of energy, mass, and charge. Fundamental physics seeks to better explain and understand phenomena in all spheres, without a specific practical application as a goal, other than the deeper insight into the phenomena themselves.
Applied physics is a general term for physics research and development that is intended for a particular use. An applied physics curriculum usually contains a few classes in an applied discipline, like geology or electrical engineering. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem.
The approach is similar to that of applied mathematics. Applied physicists use physics in scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics.
Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges and other static structures. The understanding and use of acoustics results in sound control and better concert halls; similarly, the use of optics creates better optical devices. An understanding of physics makes for more realistic flight simulators, video games, and movies, and is often critical in forensic investigations.
With the standard consensus that the laws of physics are universal and do not change with time, physics can be used to study things that would ordinarily be mired in uncertainty. For example, in the study of the origin of the Earth, a physicist can reasonably model Earth's mass, temperature, and rate of rotation, as a function of time allowing the extrapolation forward or backward in time and so predict future or prior events. It also allows for simulations in engineering that speed up the development of a new technology.
There is also considerable interdisciplinarity, so many other important fields are influenced by physics (e.g., the fields of econophysics and sociophysics).
== See also ==
Earth science – Fields of natural science related to Earth
Neurophysics – Branch of biophysics dealing with the development and use of physical methods to gain information about the nervous system
Psychophysics – Branch of knowledge relating physical stimuli and psychological perception
Relationship between mathematics and physics
Science tourism – Travel to notable science locations
=== Lists ===
List of important publications in physics
List of physicists
Lists of physics equations
== Notes ==
== References ==
== Sources ==
== External links ==
Physics at Quanta Magazine
Usenet Physics FAQ – FAQ compiled by sci.physics and other physics newsgroups
Website of the Nobel Prize in physics – Award for outstanding contributions to the subject
World of Physics – Online encyclopedic dictionary of physics
Nature Physics – Academic journal
Physics – Online magazine by the American Physical Society
– Directory of physics related media
The Vega Science Trust – Science videos, including physics
HyperPhysics website – Physics and astronomy mind-map from Georgia State University
Physics at MIT OpenCourseWare – Online course material from Massachusetts Institute of Technology
The Feynman Lectures on Physics | Wikipedia/physics |
In cosmology, the equation of state of a perfect fluid is characterized by a dimensionless number $w$, equal to the ratio of its pressure $p$ to its energy density $\rho$:
$$w \equiv \frac{p}{\rho}.$$
It is closely related to the thermodynamic equation of state and ideal gas law.
== The equation ==
The perfect gas equation of state may be written as
$$p = \rho_m R T = \rho_m C^2,$$
where $\rho_m$ is the mass density, $R$ is the particular gas constant, $T$ is the temperature and $C = \sqrt{RT}$ is a characteristic thermal speed of the molecules. Thus
$$w \equiv \frac{p}{\rho} = \frac{\rho_m C^2}{\rho_m c^2} = \frac{C^2}{c^2} \approx 0,$$
where $c$ is the speed of light, $\rho = \rho_m c^2$ and $C \ll c$ for a "cold" gas.
=== FLRW equations and the equation of state ===
The equation of state may be used in Friedmann–Lemaître–Robertson–Walker (FLRW) equations to describe the evolution of an isotropic universe filled with a perfect fluid. If $a$ is the scale factor then
$$\rho \propto a^{-3(1+w)}.$$
If the fluid is the dominant form of matter in a flat universe, then
$$a \propto t^{\frac{2}{3(1+w)}},$$
where $t$ is the proper time.
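The following sketch (illustrative values only, not from the source) shows how this scaling law plays out for matter ($w = 0$), radiation ($w = 1/3$), and a cosmological constant ($w = -1$).

```python
# Energy density as a function of the scale factor a, using rho ∝ a^(-3(1+w)).
def density_scaling(a, w, rho0=1.0):
    """Energy density relative to rho0 at a = 1."""
    return rho0 * a ** (-3.0 * (1.0 + w))

for name, w in [("matter", 0.0), ("radiation", 1.0 / 3.0), ("cosmological constant", -1.0)]:
    print(name, [round(density_scaling(a, w), 4) for a in (1, 2, 4)])
# matter:                1, 0.125, 0.0156    (falls as a^-3)
# radiation:             1, 0.0625, 0.0039   (falls as a^-4)
# cosmological constant: 1, 1, 1             (constant density)
```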
In general the Friedmann acceleration equation is
$$3\frac{\ddot{a}}{a} = \Lambda - 4\pi G(\rho + 3p),$$
where $\Lambda$ is the cosmological constant and $G$ is Newton's constant, and $\ddot{a}$ is the second proper time derivative of the scale factor.
If we define (what might be called "effective") energy density and pressure as
$$\rho' \equiv \rho + \frac{\Lambda}{8\pi G},$$
$$p' \equiv p - \frac{\Lambda}{8\pi G},$$
and $p' = w'\rho'$, the acceleration equation may be written as
$$\frac{\ddot{a}}{a} = -\frac{4}{3}\pi G\left(\rho' + 3p'\right) = -\frac{4}{3}\pi G(1 + 3w')\rho'.$$
=== Non-relativistic particles ===
The equation of state for ordinary non-relativistic 'matter' (e.g. cold dust) is $w = 0$, which means that its energy density decreases as $\rho \propto a^{-3} = V^{-1}$, where $V$ is a volume. In an expanding universe, the total energy of non-relativistic matter remains constant, with its density decreasing as the volume increases.
=== Ultra-relativistic particles ===
The equation of state for ultra-relativistic 'radiation' (including neutrinos, and in the very early universe other particles that later became non-relativistic) is $w = 1/3$, which means that its energy density decreases as $\rho \propto a^{-4}$. In an expanding universe, the energy density of radiation decreases more quickly than the volume expansion, because its wavelength is red-shifted.
=== Acceleration of cosmic inflation ===
Cosmic inflation and the accelerated expansion of the universe can be characterized by the equation of state of dark energy. In the simplest case, the equation of state of the cosmological constant is $w = -1$. In this case, the above expression for the scale factor is not valid and $a \propto e^{Ht}$, where the constant $H$ is the Hubble parameter. More generally, the expansion of the universe is accelerating for any equation of state $w < -1/3$. The accelerated expansion of the Universe was indeed observed. The observed value of the equation of state of the cosmological constant is near −1, according to three different major studies.
Hypothetical phantom dark energy would have an equation of state $w < -1$, and would cause a Big Rip. Using the existing data, it is still impossible to distinguish between phantom ($w < -1$) and non-phantom ($w \geq -1$) dark energy.
=== Fluids ===
In an expanding universe, fluids with larger equations of state disappear more quickly than those with smaller equations of state. This is the origin of the flatness and monopole problems of the Big Bang: curvature has $w = -1/3$ and monopoles have $w = 0$, so if they were around at the time of the early Big Bang, they should still be visible today. These problems are solved by cosmic inflation which has $w \approx -1$. Measuring the equation of state of dark energy is one of the largest efforts of observational cosmology. By accurately measuring $w$, it is hoped that the cosmological constant could be distinguished from quintessence which has $w \neq -1$.
=== Scalar modeling ===
A scalar field $\phi$ can be viewed as a sort of perfect fluid with equation of state
$$w = \frac{\tfrac{1}{2}\dot{\phi}^2 - V(\phi)}{\tfrac{1}{2}\dot{\phi}^2 + V(\phi)},$$
where $\dot{\phi}$ is the time-derivative of $\phi$ and $V(\phi)$ is the potential energy. A free ($V = 0$) scalar field has $w = 1$, and one with vanishing kinetic energy is equivalent to a cosmological constant: $w = -1$. Any equation of state in between, but not crossing the $w = -1$ barrier known as the Phantom Divide Line (PDL), is achievable, which makes scalar fields useful models for many phenomena in cosmology.
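A brief illustrative sketch (with assumed field values, not from the source) of this scalar-field equation of state, showing the kinetic-dominated limit ($w \to 1$) and the potential-dominated limit ($w \to -1$):

```python
def scalar_field_w(phi_dot, V):
    """Equation-of-state parameter of a homogeneous scalar field."""
    kinetic = 0.5 * phi_dot ** 2
    return (kinetic - V) / (kinetic + V)

print(scalar_field_w(phi_dot=1.0, V=0.0))   #  1.0: free field (V = 0)
print(scalar_field_w(phi_dot=0.0, V=1.0))   # -1.0: vanishing kinetic energy (cosmological constant)
print(scalar_field_w(phi_dot=0.5, V=1.0))   # about -0.78: an intermediate value between -1 and 1
```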
== Table ==
Different kinds of energy have different scaling properties.
== Notes == | Wikipedia/Equation_of_state_(cosmology) |
Astronomy & Astrophysics (A&A) is a monthly peer-reviewed scientific journal covering theoretical, observational, and instrumental astronomy and astrophysics. It is operated by an editorial team under the supervision of a board of directors representing 27 sponsoring countries plus a representative of the European Southern Observatory. The journal is published by EDP Sciences and the current editors-in-chief are Thierry Forveille and João Alves.
== History ==
=== Origins ===
Astronomy & Astrophysics was created as an answer to the publishing situation found in Europe in the 1960s. At that time, multiple journals were being published in several countries around the continent. These journals usually had a limited number of subscribers, and articles were written in languages other than English. They were less widely read than American and British journals and the research they reported had therefore less impact in the community.
Starting in 1963, conversations between astronomers from European countries assessed the need for a common astronomical journal. On 8 April 1968, leading astronomers from Belgium, Denmark, France, Germany, the Netherlands, and Scandinavian countries met at Leiden University to prepare a possible merging of some of the principal existing journals. It was proposed that the new journal be called Astronomy and Astrophysics, A European Journal.
The main policy-making body of the new journal was to be the "Board of Directors", consisting of senior astronomers or government representatives of the sponsoring countries. The board appoints the editors-in chief, who are responsible for the scientific contents of the journal. The European Southern Observatory was chosen as an additional body that acts on behalf of the board and handles the administrative, financial, and legal matters of the journal.
A second meeting held in July 1968 in Brussels cemented the agreement discussed in Leiden. Each nation established an annual monetary contribution and appointed its delegates for the board of directors. Also at this meeting, the first editors-in-chief were appointed: Stuart Pottasch and Jean-Louis Steinberg.
The next meeting took place in Paris on 11 October 1968 and is officially regarded as the first meeting of the board of directors. At this meeting, the first chairman of the board, Adriaan Blaauw, was appointed, and the contract with the publisher Springer Science+Business Media was formalized.
=== Early years ===
The first issue of A&A was published in January 1969, merging several national journals of individual European countries into one comprehensive publication. These journals, with their ISSN and date of first publication, are as follows:
Annales d'Astrophysique ISSN 0365-0499 (France), established in 1938
Bulletin of the Astronomical Institutes of the Netherlands ISSN 0365-8910 (Netherlands), established in 1921
Bulletin Astronomique ISSN 0245-9787 (France), established in 1884
Journal des Observateurs ISSN 0368-3389 (France), established in 1915
Zeitschrift für Astrophysik ISSN 0372-8331 (Germany), established in 1930
Arkiv för Astronomi (ISSN 0004-2048), established in 1948 in Sweden, was also incorporated in 1973. The publishing of Astronomy & Astrophysics was further extended in 1992 by the incorporation of Bulletin of the Astronomical Institutes of Czechoslovakia (ISSN 0004-6248), established in 1947.
There were only four issues of the journal in 1969, but it soon became a monthly publication and one of the four major generalist astronomical journals in the world. Initially, papers were submitted in English, French or German, but it soon became clear that, for a given author, the papers in English were cited twice as often as those in other languages.
In addition to regular research papers in several different fields of astrophysics, A&A featured Letters and Research Notes for short manuscripts on a significant result or idea. A Supplement Series for the journal was created in 1970 for publishing extensive tabular material and catalogs.
=== 21st century ===
The turn of the century brought important changes to the journal. In 2001, a new contract was signed with EDP Sciences, which replaced Springer as the publishing house. Special Issues featuring results of astronomical surveys and space missions such as XMM-Newton, Planck, Rosetta, and Gaia were introduced.
The editorial structure of the journal was profoundly changed in 2003 and 2005 to involve more countries in the editorial process and to better handle the increasing number of submissions. Precise criteria for publishing in Astronomy & Astrophysics were made explicit in 2004. English language editing was introduced in 2001 as a service to the diverse authorship of the journal. An extensive survey of authors conducted in 2007 showed widespread satisfaction with the new directions of the journal, although the use of structured abstracts proved more controversial.
The evolution of electronic publishing resulted in the extinction of the Supplement Series, which was incorporated in the main journal in 2001, and of the printed edition in 2016. The Research Notes section was also discontinued in 2016.
In 2023, A&A announced the introduction of links between articles and corresponding ESO datasets.
The journal editorial office is located at the Paris Observatory and is supervised by the managing editor. It handles over 2000 papers per year.
An archive of the published articles and related material is maintained by the Centre de données astronomiques de Strasbourg.
== Sponsoring countries ==
The original sponsoring countries were the four countries whose journals merged to form Astronomy & Astrophysics (France, Germany, the Netherlands and Sweden), together with Belgium, Denmark, Finland, and Norway. Norway later withdrew, but Austria, Greece, Italy, Spain, and Switzerland joined during the 1970s and 1980s. The Czech Republic, Estonia, Hungary, Poland, and Slovakia all joined as new members in the 1990s.
In 2001 the words "A European Journal" were removed from the front cover in recognition of the fact that the journal was becoming increasingly global in scope. In effect, Argentina was admitted as an "observer" in 2002. In 2004 the board of directors decided that the journal "will henceforth consider applications for sponsoring membership from any country in the world with well-documented active and excellent astronomical research". Argentina became the first non-European country to gain full membership in 2005, followed by Brazil and Chile in 2006 (Brazil withdrew in 2016). Other European countries also joined during the 21st century: Portugal, Croatia, and Bulgaria during the 2010s, and Armenia, Lithuania, Norway, Serbia and Ukraine in the 2010s. The current list of member countries is listed here.
== Chairs of the Board of Directors ==
The following persons are or have been chairs of the Board of Directors:
2023 - : A. Kučinskas
2022-2023: W. J. Duschl
2016-2022: A. Moitinho
2014-2016: J. Lub
2011-2013: B. Nordstroem
2010: K.S. de Boer
2005-2009: G. Meynet
1999-2004: Aa. Sandqvist
1993-1998: A. Maeder
1979-1992: G. Contopoulos
1969-1978: A. Blaauw
== Editors-in-Chief ==
2012 - : Main Journal: Thierry Forveille, Letters: João Alves (replaced Malcolm Walmsley in 2013)
2006 - 2011: Main Journal: Claude Bertout; Letters: Malcolm Walmsley
2004 - 2005: Main Journal: Claude Bertout; Letters: Peter Schneider
1999 - 2003: Main Journal: Claude Bertout, Harm Habing; Letters: Peter Schneider
1996 - 1998: Main Journal: James Lequeux, Harm Habing; Letters: Peter Schneider (replaced Stuart Pottasch in 1997)
1988 - 1995: Main Journal: James Lequeux, Michael Grewing; Letters: Stuart Pottasch
1986 - 1988: Main Journal: Françoise Praderie, Michael Grewing; Letters: Stuart Pottasch
1983 - 1985: Main Journal: Catherine Cesarsky, Michael Grewing; Letters: Stuart Pottasch
1981 - 1982: Main Journal: James Lequeux, Michael Grewing; Letters: Stuart Pottasch
1979 - 1980: Main Journal: James Lequeux, Hans-Heinrich Voigt; Letters: Stuart Pottasch
1975 - 1978: Main Journal: Jean Heidmann, Hans-Heinrich Voigt; Letters: Stuart Pottasch (from 1976 on)
1973 - 1974: Jean Heidmann, Stuart Pottasch
1969 - 1972: Jean-Louis Steinberg, Stuart Pottasch
== Open access ==
Before 2022, the most recent issue of A&A was available free of charge for readers. Authors had the option to pay article processing charges (APC) for immediate and permanent open access. Furthermore, all Letters to the Editor and all articles published in Sections 12 to 15 were in free access at no cost to the authors. Articles in the other sections of the journal were made freely available 12 months after publication (delayed open-access), through the publisher's site and via the Astrophysics Data System.
Since the beginning of 2022, Astronomy & Astrophysics is published in full open access under the Subscribe to Open (S2O) model.
== Scientific Writing School ==
A&A organises Scientific Writing Schools aimed at postgraduate students and young researchers. The purpose of these schools is to teach young authors how to express their scientific results through adequate and efficient science writing. As of 2025, six of these schools were organised in Belgium (2008 and 2009), Hungary (2014), Chile (2016), China (2019), and Portugal (2025).
== Abstracting and indexing ==
This journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 6.5.
== See also ==
The Astronomical Journal
The Astrophysical Journal
Monthly Notices of the Royal Astronomical Society
== References ==
== External links ==
Official website | Wikipedia/Astronomy_&_Astrophysics |
This list compares various energies in joules (J), organized by order of magnitude.
== Below 1 J ==
== 1 to 10⁵ J ==
== 10⁶ to 10¹¹ J ==
== 10¹² to 10¹⁷ J ==
== 10¹⁸ to 10²³ J ==
== Over 10²⁴ J ==
== SI multiples ==
The joule is named after James Prescott Joule. As with every SI unit named after a person, its symbol starts with an upper case letter (J), but when written in full, it follows the rules for capitalisation of a common noun; i.e., joule becomes capitalised at the beginning of a sentence and in titles but is otherwise in lower case.
== See also ==
Conversion of units of energy
Energy conversion efficiency
Energy density
Metric system
Outline of energy
Scientific notation
TNT equivalent
== Notes == | Wikipedia/Energy_scale |
In cosmology, phantom dark energy is a hypothetical form of dark energy. It possesses negative kinetic energy, and predicts expansion of the universe in excess of that predicted by a cosmological constant, which leads to a Big Rip. The idea of phantom dark energy is often dismissed, as it would suggest that the vacuum is unstable with negative mass particles bursting into existence. The concept is hence tied to emerging theories of a continuously created negative mass dark fluid, in which the cosmological constant can vary as a function of time. It is a special type of quintessence.
The term was coined by Robert R. Caldwell in 1999.
== Equation of state ==
In cosmology, the equation of state of a perfect fluid is given by
$$p = w\rho,$$
where $p$ is the pressure, $\rho$ is the energy density and $w$ is the ratio between the two. For normal baryonic matter $w = 0$ and for a cosmological constant $w = -1$. Phantom dark energy is defined as having $w < -1$.
== Big Rip mechanism ==
The existence of phantom dark energy could cause the expansion of the universe to accelerate so quickly that a scenario known as the Big Rip, a possible end to the universe, occurs. The expansion of the universe reaches an infinite degree in finite time, causing expansion to accelerate without bounds. This acceleration necessarily passes the speed of light (since it involves expansion of the universe itself, not particles moving within it), causing more and more objects to leave our observable universe faster than its expansion, as light and information emitted from distant stars and other cosmic sources cannot "catch up" with the expansion. As the observable universe expands, objects will be unable to interact with each other via fundamental forces, and eventually, the expansion will prevent any action of forces between any particles, even within atoms, "ripping apart" the universe, making distances between individual particles infinite.
One application of phantom dark energy in 2007 was to a cyclic model of the universe, which reverses its expansion extremely shortly before the would-be Big Rip. This cyclic model can be more complicated if the mass–energy of every point in the universe is dense enough to collapse into black hole core substance that will bounce after reaching a maximum threshold of compression causing the next big bang (the overall scenario is highly unlikely).
== Possible evidence ==
In 2025, the Dark Energy Spectroscopic Instrument (DESI) collaboration published a survey of baryon acoustic oscillations. They found deviations from the standard model of cosmology, the Lambda-CDM model, at a significance of about 4 standard deviations. They reported that the acceleration of the universe was stronger in the past, suggesting the presence of phantom dark energy in the early universe.
== See also ==
Quintom scenario
== References ==
== Further reading ==
Robert R. Caldwell et al.: Phantom Energy and Cosmic Doomsday | Wikipedia/Phantom_dark_energy |
Petrography is a branch of petrology that focuses on detailed descriptions of rocks. Someone who studies petrography is called a petrographer. The mineral content and the textural relationships within the rock are described in detail. The classification of rocks is based on the information acquired during the petrographic analysis. Petrographic descriptions start with the field notes at the outcrop and include macroscopic description of hand-sized specimens. The most important petrographer's tool is the petrographic microscope. The detailed analysis of minerals by optical mineralogy in thin section and the micro-texture and structure are critical to understanding the origin of the rock.
Electron microprobe or atom probe tomography analysis of individual grains as well as whole rock chemical analysis by atomic absorption, X-ray fluorescence, and laser-induced breakdown spectroscopy are used in a modern petrographic lab. Individual mineral grains from a rock sample may also be analyzed by X-ray diffraction when optical means are insufficient. Analysis of microscopic fluid inclusions within mineral grains with a heating stage on a petrographic microscope provides clues to the temperature and pressure conditions existent during the mineral formation.
== History ==
Petrography as a science began in 1828 when Scottish physicist William Nicol invented the technique for producing polarized light by cutting a crystal of Iceland spar, a variety of calcite, into a special prism which became known as the Nicol prism. The addition of two such prisms to the ordinary microscope converted the instrument into a polarizing, or petrographic microscope. Using transmitted light and Nicol prisms, it was possible to determine the internal crystallographic character of very tiny mineral grains, greatly advancing the knowledge of a rock's constituents.
During the 1840s, a development by Henry C. Sorby and others firmly laid the foundation of petrography. This was a technique to study very thin slices of rock. A slice of rock was affixed to a microscope slide and then ground so thin that light could be transmitted through mineral grains that otherwise appeared opaque. The position of adjoining grains was not disturbed, thus permitting analysis of rock texture. Thin section petrography became the standard method of rock study. Since textural details contribute greatly to knowledge of the sequence of crystallization of the various mineral constituents in a rock, petrography progressed into petrogenesis and ultimately into petrology.
Petrography principally advanced in Germany in the latter 19th century.
== Methods of investigation ==
=== Macroscopic characters ===
The macroscopic characters of rocks, those visible in hand-specimens without the aid of the microscope, are very varied and difficult to describe accurately and fully. The geologist in the field depends principally on them and on a few rough chemical and physical tests; and to the practical engineer, architect and quarry-master they are all-important. Although frequently insufficient in themselves to determine the true nature of a rock, they usually serve for a preliminary classification, and often give all the information needed.
With a small bottle of acid to test for carbonate of lime, a knife to ascertain the hardness of rocks and minerals, and a pocket lens to magnify their structure, the field geologist is rarely at a loss to tell to what group a rock belongs. The fine-grained species are often indeterminable in this way, and the minute mineral components of all rocks can usually be ascertained only by microscopic examination. But it is easy to see that a sandstone or grit consists of more or less rounded, water-worn sand grains, and if it contains dull, weathered particles of feldspar, shining scales of mica, or small crystals of calcite, these also rarely escape observation. Shales and clay rocks generally are soft, fine grained, often laminated, and not infrequently contain minute organisms or fragments of plants. Limestones are easily marked with a knife-blade, effervesce readily with weak cold acid, and often contain entire or broken shells or other fossils. The crystalline nature of a granite or basalt is obvious at a glance, and while the former contains white or pink feldspar, clear vitreous quartz and glancing flakes of mica, the other shows yellow-green olivine, black augite, and gray striated plagioclase.
Other simple tools include the blowpipe (to test the fusibility of detached crystals), the goniometer, the magnet, the magnifying glass and the specific gravity balance.
=== Microscopic characteristics ===
When dealing with unfamiliar types or with rocks so fine grained that their component minerals cannot be determined with the aid of a hand lens, a microscope is used. Characteristics observed under the microscope include colour, colour variation under plane polarised light (pleochroism, produced by the lower Nicol prism, or more recently polarising films), fracture characteristics of the grains, refractive index (in comparison to the mounting adhesive, typically Canada balsam), and optical symmetry (birefringent or isotropic). In toto, these characteristics are sufficient to identify the mineral, and often to quite tightly estimate its major element composition.
The process of identifying minerals under the microscope is fairly subtle, but also mechanistic – it would be possible to develop an identification key that would allow a computer to do it. The more difficult and skilful part of optical petrography is identifying the interrelationships between grains and relating them to features seen in hand-sized specimen, at outcrop, or in mapping.
=== Separation of components ===
Separation of the ingredients of a crushed rock powder to obtain pure samples for analysis is a common approach. It may be performed with a powerful, adjustable-strength electromagnet. A weak magnetic field attracts magnetite, then haematite and other iron ores. Silicates that contain iron follow in definite order: biotite, enstatite, augite, hornblende, garnet, and similar ferro-magnesian minerals are successively abstracted. Finally, only the colorless, non-magnetic compounds, such as muscovite, calcite, quartz, and feldspar remain. Chemical methods also are useful.
A weak acid dissolves calcite from crushed limestone, leaving only dolomite, silicates, or quartz. Hydrofluoric acid attacks feldspar before quartz and, if used cautiously, dissolves these and any glassy material in a rock powder before it dissolves augite or hypersthene.
Methods of separation by specific gravity have a still wider application. The simplest of these is levigation, which is extensively employed in mechanical analysis of soils and treatment of ores, but is not so successful with rocks, as their components do not, as a rule, differ greatly in specific gravity. Fluids are used that do not attack most rock-forming minerals, but have a high specific gravity. Solutions of potassium mercuric iodide (sp. gr. 3.196), cadmium borotungstate (sp. gr. 3.30), methylene iodide (sp. gr. 3.32), bromoform (sp. gr. 2.86), or acetylene bromide (sp. gr. 3.00) are the principal fluids employed. They may be diluted (with water, benzene, etc.) or concentrated by evaporation.
If the rock is granite consisting of biotite (sp. gr. 3.1), muscovite (sp. gr. 2.85), quartz (sp. gr. 2.65), oligoclase (sp. gr. 2.64), and orthoclase (sp. gr. 2.56), the crushed minerals float in methylene iodide. On gradual dilution with benzene they precipitate in the order above. Simple in theory, these methods are tedious in practice, especially as it is common for one rock-making mineral to enclose another. Expert handling of fresh and suitable rocks yields excellent results.
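The float-and-sink logic behind this procedure can be sketched as follows (a simple illustration using the specific gravities quoted above; the dilution steps are assumed, not from the source): minerals denser than the fluid sink, lighter ones float, and gradually diluting the fluid drops the minerals out one by one.

```python
# Heavy-liquid separation: which minerals sink and which float at a given fluid density.
minerals = {"biotite": 3.1, "muscovite": 2.85, "quartz": 2.65, "oligoclase": 2.64, "orthoclase": 2.56}

def separate(fluid_sg):
    sinks = [m for m, sg in minerals.items() if sg > fluid_sg]
    floats = [m for m, sg in minerals.items() if sg < fluid_sg]
    return sinks, floats

print(separate(3.32))   # undiluted methylene iodide: every mineral floats
print(separate(3.0))    # after some dilution: biotite sinks first
print(separate(2.7))    # further dilution: muscovite also sinks; quartz and the feldspars still float
```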
=== Chemical analysis ===
In addition to naked-eye and microscopic investigation, chemical research methods are of great practical importance to the petrographer. Crushed and separated powders, obtained by the processes above, may be analyzed to determine the chemical composition of minerals in the rock qualitatively or quantitatively. Chemical testing and microscopic examination of minute grains is an elegant and valuable means of discriminating between mineral components of fine-grained rocks.
Thus, the presence of apatite in rock-sections is established by covering a bare rock-section with ammonium molybdate solution. A turbid yellow precipitate forms over the crystals of the mineral in question (indicating the presence of phosphates). Many silicates are insoluble in acids and cannot be tested in this way, but others are partly dissolved, leaving a film of gelatinous silica that can be stained with coloring matters, such as the aniline dyes (nepheline, analcite, zeolites, etc.).
Complete chemical analyses of rocks are also widely used and important, especially in describing new species. Rock analysis has of late years (largely under the influence of the chemical laboratory of the United States Geological Survey) reached a high pitch of refinement and complexity. As many as twenty or twenty-five components may be determined, but for practical purposes a knowledge of the relative proportions of silica, alumina, ferrous and ferric oxides, magnesia, lime, potash, soda, and water carries us a long way in determining a rock's position in the conventional classifications.
A chemical analysis is usually sufficient to indicate whether a rock is igneous or sedimentary, and in either case to show accurately what subdivision of these classes it belongs to. In the case of metamorphic rocks it often establishes whether the original mass was a sediment or of volcanic origin.
=== Specific gravity ===
Specific gravity of rocks is determined by use of a balance and pycnometer. It is greatest in rocks containing the most magnesia, iron, and heavy metal while least in rocks rich in alkalis, silica, and water.
It diminishes with weathering. Generally, the specific gravity of rocks with the same chemical composition is higher if highly crystalline and lower if wholly or partly vitreous. The specific gravity of the more common rocks range from about 2.5 to 3.2.
== Archaeological applications ==
Archaeologists use petrography to identify mineral components in pottery. This information ties the artifacts to geological areas where the raw materials for the pottery were obtained. In addition to clay, potters often used rock fragments, usually called "temper" or "aplastics", to modify the clay's properties. The geological information obtained from the pottery components provides insight into how potters selected and used local and non-local resources. Archaeologists are able to determine whether pottery found in a particular location was locally produced or traded from elsewhere. This kind of information, along with other evidence, can support conclusions about settlement patterns, group and individual mobility, social contacts, and trade networks. In addition, an understanding of how certain minerals are altered at specific temperatures can allow archaeological petrographers to infer aspects of the ceramic production process itself, such as minimum and maximum temperatures reached during the original firing of the pot.
== See also ==
Ceramic petrography
== References ==
== External links ==
Atlas of Rocks, Minerals, and Textures Petrographical description of rocks and minerals.
Uncommon igneous, metamorphic and metasomatic rocks in thin section, in unpolarized light and under crossed polarizers.
Name that Mineral Datatable for comparing observable properties of minerals in thin sections, under transmitted or reflected light. | Wikipedia/Petrography |
Exploration geophysics is an applied branch of geophysics and economic geology, which uses physical methods at the surface of the Earth, such as seismic, gravitational, magnetic, electrical and electromagnetic, to measure the physical properties of the subsurface, along with the anomalies in those properties. It is most often used to detect or infer the presence and position of economically useful geological deposits, such as ore minerals; fossil fuels and other hydrocarbons; geothermal reservoirs; and groundwater reservoirs. It can also be used to detect the presence of unexploded ordnance.
Exploration geophysics can be used to directly detect the target style of mineralization by measuring its physical properties directly. For example, one may measure the density contrasts between the dense iron ore and the lighter silicate host rock, or one may measure the electrical conductivity contrast between conductive sulfide minerals and the resistive silicate host rock.
== Geophysical methods ==
The main techniques used are:
Seismic tomography to locate earthquakes and assist in seismology.
Reflection seismology and seismic refraction to map the surface structure of a region.
Geodesy and gravity techniques, including gravity gradiometry.
Magnetic techniques, including aeromagnetic surveys to map magnetic anomalies.
Electrical techniques, including electrical resistivity tomography and induced polarization.
Electromagnetic methods, such as magnetotellurics, ground penetrating radar, transient/time-domain electromagnetics, and SNMR.
Borehole geophysics, also called well logging.
Remote sensing techniques, including hyperspectral imaging and airborne geophysics.
Many other techniques, or methods of integration of the above techniques, have been developed and are currently used. However these are not as common due to cost-effectiveness, wide applicability, and/or uncertainty in the results produced.
== Uses ==
Exploration geophysics is also used to map the subsurface structure of a region, to elucidate the underlying structures, to recognize spatial distribution of rock units, and to detect structures such as faults, folds and intrusive rocks. This is an indirect method for assessing the likelihood of ore deposits or hydrocarbon accumulations.
Methods devised for finding mineral or hydrocarbon deposits can also be used in other areas such as monitoring environmental impact, imaging subsurface archaeological sites, ground water investigations, subsurface salinity mapping, civil engineering site investigations, and interplanetary imaging.
=== Mineral exploration ===
Magnetometric surveys can be useful in defining magnetic anomalies which represent ore (direct detection), or in some cases gangue minerals associated with ore deposits (indirect or inferential detection).
The most direct method of detection of ore via magnetism involves detecting iron ore mineralization via mapping magnetic anomalies associated with banded iron formations which usually contain magnetite in some proportion. Skarn mineralization, which often contains magnetite, can also be detected though the ore minerals themselves would be non-magnetic. Similarly, magnetite, hematite, and often pyrrhotite are common minerals associated with hydrothermal alteration, which can be detected to provide an inference that some mineralizing hydrothermal event has affected the rocks.
Gravity surveying can be used to detect dense bodies of rocks within host formations of less dense wall rocks. This can be used to directly detect Mississippi Valley Type ore deposits, IOCG ore deposits, iron ore deposits, skarn deposits, and salt diapirs which can form oil and gas traps.
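A minimal sketch of the kind of forward model used to interpret such gravity data is given below (the geometry and density contrast are assumed, illustrative values, not from the source): the vertical gravity anomaly measured at the surface above a buried dense sphere.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def sphere_gravity_anomaly(x, depth, radius, delta_rho):
    """Vertical gravity anomaly (m/s^2) at horizontal offset x from a buried sphere."""
    excess_mass = (4.0 / 3.0) * math.pi * radius ** 3 * delta_rho
    return G * excess_mass * depth / (x ** 2 + depth ** 2) ** 1.5

# Assumed example: a 50 m radius body, 100 m deep, with a +1000 kg/m^3 density contrast.
for x in (0.0, 50.0, 100.0, 200.0):
    gz = sphere_gravity_anomaly(x, depth=100.0, radius=50.0, delta_rho=1000.0)
    print(f"x = {x:5.0f} m   anomaly = {gz * 1e5:.3f} mGal")   # 1 mGal = 1e-5 m/s^2
```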
Electromagnetic (EM) surveys can be used to help detect a wide variety of mineral deposits, especially base metal sulphides via detection of conductivity anomalies which can be generated around sulphide bodies in the subsurface. EM surveys are also used in diamond exploration (where the kimberlite pipes tend to have lower resistance than enclosing rocks), graphite exploration, palaeochannel-hosted uranium deposits (which are associated with shallow aquifers, which often respond to EM surveys in a conductive overburden). These are indirect inferential methods of detecting mineralization, as the commodity being sought is not directly conductive, or not sufficiently conductive to be measurable. EM surveys are also used in unexploded ordnance, archaeological, and geotechnical investigations.
Regional EM surveys are conducted via airborne methods, using either fixed-wing aircraft or helicopter-borne EM rigs. Surface EM methods are based mostly on Transient EM methods using surface loops with a surface receiver, or a downhole tool lowered into a borehole which transects a body of mineralization. These methods can map out sulphide bodies within the earth in three dimensions, and provide information to geologists to direct further exploratory drilling on known mineralization. Surface loop surveys are rarely used for regional exploration, however in some cases such surveys can be used with success (e.g.; SQUID surveys for nickel ore bodies).
Electric-resistance methods such as induced polarization methods can be useful for directly detecting sulfide bodies, coal, and resistive rocks such as salt and carbonates.
Seismic methods can also be used for mineral exploration, since they can provide high-resolution images of geologic structures hosting mineral deposits. It is not just surface seismic surveys which are used, but also borehole seismic methods. All in all, the usage of seismic methods for mineral exploration is steadily increasing.
=== Hydrocarbon exploration ===
Seismic reflection and refraction techniques are the most widely used geophysical technique in hydrocarbon exploration. They are used to map the subsurface distribution of stratigraphy and its structure which can be used to delineate potential hydrocarbon accumulations, both stratigraphic and structural deposits or "traps". Well logging is another widely used technique as it provides necessary high resolution information about rock and fluid properties in a vertical section, although they are limited in areal extent. This limitation in areal extent is the reason why seismic reflection techniques are so popular; they provide a method for interpolating and extrapolating well log information over a much larger area.
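The basic time-to-depth conversion that underlies reflection interpretation can be illustrated with a toy example (the velocity and travel time below are assumed values, not from the source): a reflector's depth is half the two-way travel time multiplied by the average velocity.

```python
def reflector_depth(two_way_time_s, avg_velocity_m_s):
    """Depth to a seismic reflector from its two-way travel time."""
    return 0.5 * two_way_time_s * avg_velocity_m_s

# Assumed values: 2.0 s two-way time through rock with an average velocity of 3000 m/s.
print(reflector_depth(2.0, 3000.0))   # 3000.0 m
```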
Gravity and magnetics are also used, with considerable frequency, in oil and gas exploration. These can be used to determine the geometry and depth of covered geological structures including uplifts, subsiding basins, faults, folds, igneous intrusions, and salt diapirs due to their unique density and magnetic susceptibility signatures compared to the surrounding rocks; the latter is particularly useful for metallic ores.
Remote sensing techniques, specifically hyperspectral imaging, have been used to detect hydrocarbon microseepages using the spectral signature of geochemically altered soils and vegetation.
Specifically at sea, two methods are used: marine seismic reflection and electromagnetic seabed logging (SBL). Marine magnetotellurics (mMT), or marine Controlled Source Electro-Magnetics (mCSEM), can provide pseudo-direct detection of hydrocarbons by detecting resistivity changes over geological traps (signalled by seismic surveys).
=== Civil engineering ===
==== Ground penetrating radar ====
Ground penetrating radar is a non-invasive technique, and is used within civil construction and engineering for a variety of uses, including detection of utilities (buried water, gas, sewerage, electrical and telecommunication cables), mapping of soft soils, overburden for geotechnical characterization, and other similar uses.
==== Spectral-Analysis-of-Surface-Waves ====
The Spectral-Analysis-of-Surface-Waves (SASW) method is another non-invasive technique, which is widely used in practice to detect the shear wave velocity profile of the soil. The SASW method relies on the dispersive nature of Rayleigh waves in layered media, i.e., the wave velocity depends on the loading frequency. A material profile based on the SASW method is thus obtained by: a) constructing an experimental dispersion curve, by performing field experiments, each time using a different loading frequency, and measuring the surface wave speed for each frequency; b) constructing a theoretical dispersion curve, by assuming a trial distribution for the material properties of a layered profile; c) varying the material properties of the layered profile, and repeating the previous step, until a match between the experimental dispersion curve and the theoretical dispersion curve is attained. The SASW method renders a layered (one-dimensional) shear wave velocity profile for the soil.
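The matching loop in step (c) can be sketched as below. This is only a schematic: the forward model is a deliberately simplified stand-in (not a real Rayleigh-wave solver), and all profile values are assumed for illustration.

```python
import numpy as np

# Placeholder forward model mimicking a two-layer dispersive profile:
# low frequencies sample the deeper, stiffer layer; high frequencies the shallow layer.
def toy_dispersion(freqs, vs_shallow, vs_deep, transition_hz=10.0):
    weight = 1.0 / (1.0 + (freqs / transition_hz) ** 2)
    return weight * vs_deep + (1.0 - weight) * vs_shallow

freqs = np.linspace(2.0, 50.0, 25)
measured = toy_dispersion(freqs, vs_shallow=180.0, vs_deep=420.0)   # stand-in for field data

# Grid search over trial profiles; keep the one whose theoretical curve best matches the data.
best = None
for vs_shallow in range(100, 301, 20):
    for vs_deep in range(300, 601, 20):
        predicted = toy_dispersion(freqs, vs_shallow, vs_deep)
        misfit = float(np.sum((predicted - measured) ** 2))
        if best is None or misfit < best[0]:
            best = (misfit, vs_shallow, vs_deep)

print("best-fitting trial profile:", best)   # recovers the assumed (180, 420) profile
```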
===== Full waveform inversion =====
Full-waveform-inversion (FWI) methods are among the most recent techniques for geotechnical site characterization, and are still under continuous development. The method is fairly general, and is capable of imaging the arbitrarily heterogeneous compressional and shear wave velocity profiles of the soil.
Elastic waves are used to probe the site under investigation, by placing seismic vibrators on the ground surface. These waves propagate through the soil, and due to the heterogeneous geological structure of the site under investigation, multiple reflections and refractions occur. The response of the site to the seismic vibrator is measured by sensors (geophones), also placed on the ground surface. Two key-components are required for the profiling based on full-waveform inversion. These components are: a) a computer model for the simulation of elastic waves in semi-infinite domains; and b) an optimization framework, through which the computed response is matched to the measured response by iteratively updating an initially assumed material distribution for the soil.
===== Other techniques =====
Civil engineering can also use remote sensing information for topographical mapping, planning, and environmental impact assessment. Airborne electromagnetic surveys are also used to characterize soft sediments in planning and engineering roads, dams, and other structures.
Magnetotellurics has proven useful for delineating groundwater reservoirs, mapping faults around areas where hazardous substances are stored (e.g. nuclear power stations and nuclear waste storage facilities), and earthquake precursor monitoring in areas with major structures such as hydro-electric dams subject to high levels of seismic activity.
BS 5930 is the standard used in the UK as a code of practice for site investigations.
=== Archaeology ===
Ground penetrating radar can be used to map buried artifacts, such as graves, mortuaries, wreck sites, and other shallowly buried archaeological sites.
Ground magnetometric surveys can be used for detecting buried ferrous metals, useful in surveying shipwrecks, modern battlefields strewn with metal debris, and even subtle disturbances such as large-scale ancient ruins.
Sonar systems can be used to detect shipwrecks. Active sonar systems emit sound pulses into the water which then bounce off of objects and are returned to the sonar transducer. The sonar transducer is able to determine both the range and orientation of an underwater object by measuring the amount of time between the release of the sound pulse and its returned reception. Passive sonar systems are used to detect noises from marine objects or animals. This system does not emit sound pulses itself but instead focuses on sound detection from marine sources. This system simply 'listens' to the ocean, rather than measuring the range or orientation of an object.
=== Forensics ===
Ground penetrating radar can be used to detect grave sites. This detection is of both legal and cultural importance, providing an opportunity for affected families to pursue justice through legal punishment of those responsible and to experience closure over the loss of a loved one.
=== Unexploded ordnance detection ===
Unexploded ordnance (or UXO) refers to military explosives that failed to detonate as intended and remain a hazard. Examples of these include, but are not limited to: bombs, flares, and grenades. It is important to be able to locate and contain unexploded ordnance to avoid injuries, and even possible death, to those who may come into contact with them.
The issue of unexploded ordnance originated as a result of the Crimean War (1853-1856). Before this, most unexploded ordnance was locally contained in smaller volumes, and was thus not a major public issue. However, with the introduction of more widespread warfare, these quantities increased and became harder to track and contain. According to Hooper & Hambric in their piece Unexploded Ordnance (UXO): The Problem, if warfare remains the dominant means of conflict resolution, this problem will only continue to get worse and will likely take more than a century to resolve.
Because warfare remains widespread, specific geophysical practices are relied upon to detect unexploded ordnance, such as magnetic and electromagnetic surveys. By measuring contrasts in magnetic susceptibility and/or electrical conductivity between the unexploded ordnance and the surrounding geology (soil, rock, etc.), the ordnance can be detected and contained.
== See also ==
Archaeological geophysics
Hydrocarbon exploration
Kola Superdeep Borehole
Leibniz Institute for Applied Geophysics
Mineral exploration
Ore genesis
Petroleum geology
Society of Exploration Geophysicists
== References == | Wikipedia/Exploration_geophysics |
Earth's energy budget (or Earth's energy balance) is the balance between the energy that Earth receives from the Sun and the energy the Earth loses back into outer space. Smaller energy sources, such as Earth's internal heat, are taken into consideration, but make a tiny contribution compared to solar energy. The energy budget also takes into account how energy moves through the climate system.: 2227 The Sun heats the equatorial tropics more than the polar regions. Therefore, the amount of solar irradiance received by a certain region is unevenly distributed. As the energy seeks equilibrium across the planet, it drives interactions in Earth's climate system, i.e., Earth's water, ice, atmosphere, rocky crust, and all living things.: 2224 The result is Earth's climate.
Earth's energy budget depends on many factors, such as atmospheric aerosols, greenhouse gases, surface albedo, clouds, and land use patterns. When the incoming and outgoing energy fluxes are in balance, Earth is in radiative equilibrium and the climate system will be relatively stable. Global warming occurs when earth receives more energy than it gives back to space, and global cooling takes place when the outgoing energy is greater.
Multiple types of measurements and observations show a warming imbalance since at least year 1970. The rate of heating from this human-caused event is without precedent.: 54 The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere. During 2005 to 2019 the Earth's energy imbalance (EEI) averaged about 460 TW or globally 0.90±0.15 W/m2.
It takes time for any changes in the energy budget to result in any significant changes in the global surface temperature. This is due to the thermal inertia of the oceans, land and cryosphere. Most climate models make accurate calculations of this inertia, energy flows and storage amounts.
== Definition ==
Earth's energy budget includes the "major energy flows of relevance for the climate system". These are "the top-of-atmosphere energy budget; the surface energy budget; changes in the global energy inventory and internal flows of energy within the climate system".: 2227
== Earth's energy flows ==
In spite of the enormous transfers of energy into and from the Earth, it maintains a relatively constant temperature because, as a whole, there is little net gain or loss: Earth emits via atmospheric and terrestrial radiation (shifted to longer electromagnetic wavelengths) to space about the same amount of energy as it receives via solar insolation (all forms of electromagnetic radiation).
The main origin of changes in the Earth's energy is from human-induced changes in the composition of the atmosphere, amounting to about 460 TW or globally 0.90±0.15 W/m2.
=== Incoming solar energy (shortwave radiation) ===
The total amount of energy received per second at the top of Earth's atmosphere (TOA) is measured in watts and is given by the solar constant times the cross-sectional area of the Earth presented to the incoming radiation. Because the surface area of a sphere is four times its cross-sectional area (i.e. the area of a circle), the globally and yearly averaged TOA flux is one quarter of the solar constant and so is approximately 340 watts per square meter (W/m2). Since the absorption varies with location as well as with diurnal, seasonal and annual variations, the numbers quoted are multi-year averages obtained from multiple satellite measurements.
Of the ~340 W/m2 of solar radiation received by the Earth, an average of ~77 W/m2 is reflected back to space by clouds and the atmosphere and ~23 W/m2 is reflected by the surface albedo, leaving ~240 W/m2 of solar energy input to the Earth's energy budget. This amount is called the absorbed solar radiation (ASR). It implies a value of about 0.3 for the mean net albedo of Earth, also called its Bond albedo (A):
{\displaystyle ASR=(1-A)\times 340~\mathrm {W} ~\mathrm {m} ^{-2}\simeq 240~\mathrm {W} ~\mathrm {m} ^{-2}.}
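These numbers can be checked directly. The short calculation below uses a solar constant of about 1361 W/m2 (an approximate standard value assumed here, not stated in this section) together with the reflected fluxes quoted above.

```python
# Check of the globally averaged TOA flux and of ASR = (1 - A) * 340 W/m2.
solar_constant = 1361.0               # W/m2, approximate total solar irradiance
toa_flux = solar_constant / 4.0       # sphere area is 4 x cross-section, so ~340 W/m2

reflected = 77.0 + 23.0               # clouds/atmosphere + surface albedo, W/m2
bond_albedo = reflected / 340.0       # roughly 0.29, i.e. "about 0.3"
asr = (1.0 - bond_albedo) * 340.0     # roughly 240 W/m2 of absorbed solar radiation

print(round(toa_flux), round(bond_albedo, 2), round(asr))
```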
=== Outgoing longwave radiation ===
Thermal energy leaves the planet in the form of outgoing longwave radiation (OLR). Longwave radiation is electromagnetic thermal radiation emitted by Earth's surface and atmosphere. Longwave radiation falls in the infrared band, but the terms are not synonymous, as infrared radiation can be either shortwave or longwave. Sunlight contains significant amounts of shortwave infrared radiation. A threshold wavelength of 4 microns is sometimes used to distinguish longwave and shortwave radiation.
Generally, absorbed solar energy is converted to different forms of heat energy. Some of the solar energy absorbed by the surface is converted to thermal radiation at wavelengths in the "atmospheric window"; this radiation is able to pass through the atmosphere unimpeded and directly escape to space, contributing to OLR. The remainder of absorbed solar energy is transported upwards through the atmosphere through a variety of heat transfer mechanisms, until the atmosphere emits that energy as thermal energy which is able to escape to space, again contributing to OLR. For example, heat is transported into the atmosphere via evapotranspiration and latent heat fluxes or conduction/convection processes, as well as via radiative heat transport. Ultimately, all outgoing energy is radiated into space in the form of longwave radiation.
The transport of longwave radiation from Earth's surface through its multi-layered atmosphere is governed by radiative transfer equations such as Schwarzschild's equation for radiative transfer (or more complex equations if scattering is present) and obeys Kirchhoff's law of thermal radiation.
A one-layer model produces an approximate description of OLR which yields temperatures at the surface (Ts=288 Kelvin) and at the middle of the troposphere (Ta=242 K) that are close to observed average values:
{\displaystyle OLR\simeq \epsilon \sigma T_{\text{a}}^{4}+(1-\epsilon )\sigma T_{\text{s}}^{4}.}
In this expression σ is the Stefan–Boltzmann constant and ε represents the emissivity of the atmosphere, which is less than 1 because the atmosphere does not emit within the wavelength range known as the atmospheric window.
Aerosols, clouds, water vapor, and trace greenhouse gases contribute to an effective value of about ε = 0.78. The strong (fourth-power) temperature sensitivity maintains a near-balance of the outgoing energy flow to the incoming flow via small changes in the planet's absolute temperatures.
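Plugging the quoted values into the one-layer expression reproduces an OLR close to the ~240 W/m2 of absorbed solar radiation; the following is a simple numerical check.

```python
# One-layer model check: OLR = eps * sigma * Ta^4 + (1 - eps) * sigma * Ts^4
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
epsilon = 0.78            # effective atmospheric emissivity quoted above
T_s, T_a = 288.0, 242.0   # surface and mid-troposphere temperatures, K

olr = epsilon * sigma * T_a**4 + (1.0 - epsilon) * sigma * T_s**4
print(round(olr, 1))      # about 237.5 W/m2, close to the ~240 W/m2 of ASR
```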
As viewed from Earth's surrounding space, greenhouse gases influence the planet's atmospheric emissivity (ε). Changes in atmospheric composition can thus shift the overall radiation balance. For example, an increase in heat trapping by a growing concentration of greenhouse gases (i.e. an enhanced greenhouse effect) forces a decrease in OLR and a warming (restorative) energy imbalance. Ultimately when the amount of greenhouse gases increases or decreases, in-situ surface temperatures rise or fall until the absorbed solar radiation equals the outgoing longwave radiation, or ASR equals OLR.
=== Earth's internal heat sources and other minor effects ===
The geothermal heat flow from the Earth's interior is estimated to be 47 terawatts (TW) and split approximately equally between radiogenic heat and heat left over from the Earth's formation. This corresponds to an average flux of 0.087 W/m2 and represents only 0.027% of Earth's total energy budget at the surface, being dwarfed by the 173,000 TW of incoming solar radiation.
Human production of energy is even lower, averaging about 18 TW over the year 2019 (an estimated 160,000 TW-hr in total). However, consumption is growing rapidly, and energy production with fossil fuels also increases atmospheric greenhouse gases, leading to an imbalance in the incoming/outgoing flows of solar origin that is more than 20 times larger.
Photosynthesis also has a significant effect: An estimated 140 TW (or around 0.08%) of incident energy gets captured by photosynthesis, giving energy to plants to produce biomass. A similar flow of thermal energy is released over the course of a year when plants are used as food or fuel.
Other minor sources of energy are usually ignored in the calculations, including accretion of interplanetary dust and solar wind, light from stars other than the Sun and the thermal radiation from space. Earlier, Joseph Fourier had claimed that deep space radiation was significant in a paper often cited as the first on the greenhouse effect.
== Budget analysis ==
In simplest terms, Earth's energy budget is balanced when the incoming flow equals the outgoing flow. Since a portion of incoming energy is directly reflected, the balance can also be stated as absorbed incoming solar (shortwave) radiation equal to outgoing longwave radiation:
{\displaystyle ASR=OLR.}
=== Internal flow analysis ===
To describe some of the internal flows within the budget, let the insolation received at the top of the atmosphere be 100 units (= 340 W/m2), as shown in the accompanying Sankey diagram. Around 35 units in this example, a fraction known as the albedo of Earth, are directly reflected back to space: 27 from the top of clouds, 2 from snow and ice-covered areas, and 6 by other parts of the atmosphere. The 65 remaining units (ASR = 220 W/m2) are absorbed: 14 within the atmosphere and 51 by the Earth's surface.
The 51 units reaching and absorbed by the surface are emitted back to space through various forms of terrestrial energy: 17 directly radiated to space and 34 absorbed by the atmosphere (19 through latent heat of vaporisation, 9 via convection and turbulence, and 6 as absorbed infrared by greenhouse gases). The 48 units absorbed by the atmosphere (34 units from terrestrial energy and 14 from insolation) are then finally radiated back to space. This simplified example neglects some details of mechanisms that recirculate, store, and thus lead to further buildup of heat near the surface.
Ultimately the 65 units (17 from the ground and 48 from the atmosphere) are emitted as OLR. They approximately balance the 65 units (ASR) absorbed from the sun in order to maintain a net-zero gain of energy by Earth.
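The unit bookkeeping of this simplified example can be verified by adding up the flows quoted above:

```python
# Bookkeeping check for the simplified 100-unit budget described above.
insolation = 100                            # units (= 340 W/m2)

reflected = 27 + 2 + 6                      # clouds + snow/ice + atmosphere = 35 (albedo)
absorbed_atmosphere = 14
absorbed_surface = 51
assert reflected + absorbed_atmosphere + absorbed_surface == insolation

surface_to_space = 17                       # radiated directly to space
surface_to_atmosphere = 19 + 9 + 6          # latent heat + convection + absorbed IR = 34
assert surface_to_space + surface_to_atmosphere == absorbed_surface

atmosphere_to_space = absorbed_atmosphere + surface_to_atmosphere   # 48 units
olr = surface_to_space + atmosphere_to_space                        # 65 units
asr = absorbed_atmosphere + absorbed_surface                        # 65 units
assert olr == asr                           # net-zero gain in this example
print("budget closes: ASR = OLR =", asr, "units")
```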
=== Heat storage reservoirs ===
Land, ice, and oceans are active material constituents of Earth's climate system along with the atmosphere. They have far greater mass and heat capacity, and thus much more thermal inertia. When radiation is directly absorbed or the surface temperature changes, thermal energy will flow as sensible heat either into or out of the bulk mass of these components via conduction/convection heat transfer processes. The transformation of water between its solid/liquid/vapor states also acts as a source or sink of potential energy in the form of latent heat. These processes buffer the surface conditions against some of the rapid radiative changes in the atmosphere. As a result, the daytime versus nighttime difference in surface temperatures is relatively small. Likewise, Earth's climate system as a whole shows a slow response to shifts in the atmospheric radiation balance.
The top few meters of Earth's oceans harbor more thermal energy than its entire atmosphere. Like atmospheric gases, fluidic ocean waters transport vast amounts of such energy over the planet's surface. Sensible heat also moves into and out of great depths under conditions that favor downwelling or upwelling.
Over 90 percent of the extra energy that has accumulated on Earth from ongoing global warming since 1970 has been stored in the ocean. About one-third has propagated to depths below 700 meters. The overall rate of growth has also risen during recent decades, reaching close to 500 TW (1 W/m2) as of 2020. That led to about 14 zettajoules (ZJ) of heat gain for the year, exceeding the 570 exajoules (=160,000 TW-hr) of total primary energy consumed by humans by a factor of at least 20.
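The equivalence between the quoted figures (about 14 ZJ gained in a year, a rate near 500 TW, roughly 1 W/m2) follows from unit conversion over Earth's surface area; the area value used below (about 5.1 × 10^14 m2) is a standard round figure assumed for the calculation, not stated in this section.

```python
# Converting the quoted annual ocean heat gain into a rate and an areal flux.
heat_gain = 14e21                 # J gained over one year (about 14 ZJ)
seconds_per_year = 365.25 * 24 * 3600
earth_surface_area = 5.1e14       # m2, assumed round value

rate_tw = heat_gain / seconds_per_year / 1e12               # about 440 TW
flux = heat_gain / seconds_per_year / earth_surface_area    # about 0.87 W/m2
print(round(rate_tw), round(flux, 2))
```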
=== Heating/cooling rate analysis ===
Generally speaking, changes to Earth's energy flux balance can be thought of as being the result of external forcings (both natural and anthropogenic, radiative and non-radiative), system feedbacks, and internal system variability. Such changes are primarily expressed as observable shifts in temperature (T), clouds (C), water vapor (W), aerosols (A), trace greenhouse gases (G), land/ocean/ice surface reflectance (S), and as minor shifts in insolation (I), among other possible factors. Earth's heating/cooling rate can then be analyzed over selected timeframes (Δt) as the net change in energy (ΔE) associated with these attributes:
{\displaystyle {\begin{aligned}\Delta E/\Delta t&=(\ \Delta E_{T}+\Delta E_{C}+\Delta E_{W}+\Delta E_{A}+\Delta E_{G}+\Delta E_{S}+\Delta E_{I}+...\ )/\Delta t\\\\&=ASR-OLR.\end{aligned}}}
Here the term ΔET, corresponding to the Planck response, is negative-valued when temperature rises due to its strong direct influence on OLR.
The recent increase in trace greenhouse gases produces an enhanced greenhouse effect, and thus a positive ΔEG forcing term. By contrast, a large volcanic eruption (e.g. Mount Pinatubo 1991, El Chichón 1982) can inject sulfur-containing compounds into the upper atmosphere. High concentrations of stratospheric sulfur aerosols may persist for up to a few years, yielding a negative forcing contribution to ΔEA. Various other types of anthropogenic aerosol emissions make both positive and negative contributions to ΔEA. Solar cycles produce ΔEI smaller in magnitude than those of recent ΔEG trends from human activity.
Climate forcings are complex since they can produce direct and indirect feedbacks that intensify (positive feedback) or weaken (negative feedback) the original forcing. These often follow the temperature response. Water vapor trends as a positive feedback with respect to temperature changes due to evaporation shifts and the Clausius-Clapeyron relation. An increase in water vapor results in positive ΔEW due to further enhancement of the greenhouse effect. A slower positive feedback is the ice-albedo feedback. For example, the loss of Arctic ice due to rising temperatures makes the region less reflective, leading to greater absorption of energy and even faster ice melt rates, thus positive influence on ΔES. Collectively, feedbacks tend to amplify global warming or cooling.: 94
Clouds are responsible for about half of Earth's albedo and are powerful expressions of internal variability of the climate system. They may also act as feedbacks to forcings, and could be forcings themselves if for example a result of cloud seeding activity. Contributions to ΔEC vary regionally and depending upon cloud type. Measurements from satellites are gathered in concert with simulations from models in an effort to improve understanding and reduce uncertainty.
== Earth's energy imbalance (EEI) ==
The Earth's energy imbalance (EEI) is defined as "the persistent and positive (downward) net top of atmosphere energy flux associated with greenhouse gas forcing of the climate system".: 2227
If Earth's incoming energy flux (ASR) is larger or smaller than the outgoing energy flux (OLR), then the planet will gain (warm) or lose (cool) net heat energy in accordance with the law of energy conservation:
{\displaystyle EEI\equiv ASR-OLR.}
Positive EEI thus defines the overall rate of planetary heating and is typically expressed as watts per square meter (W/m2). During 2005 to 2019 the Earth's energy imbalance averaged about 460 TW or globally 0.90 ± 0.15 W per m2.
When Earth's energy imbalance (EEI) shifts by a sufficiently large amount, the shift is measurable by orbiting satellite-based instruments. Imbalances that fail to reverse over time will also drive long-term temperature changes in the atmospheric, oceanic, land, and ice components of the climate system. Temperature, sea level, ice mass and related shifts thus also provide measures of EEI.
The biggest changes in EEI arise from changes in the composition of the atmosphere through human activities, thereby interfering with the natural flow of energy through the climate system. The main changes are from increases in carbon dioxide and other greenhouse gases, which produce heating (positive EEI), and from pollution. The latter refers to atmospheric aerosols of various kinds, some of which absorb energy while others reflect energy and produce cooling (or lower EEI).
It is not (yet) possible to measure the absolute magnitude of EEI directly at top of atmosphere, although changes over time as observed by satellite-based instruments are thought to be accurate. The only practical way to estimate the absolute magnitude of EEI is through an inventory of the changes in energy in the climate system. The biggest of these energy reservoirs is the ocean.
=== Energy inventory assessments ===
The planetary heat content that resides in the climate system can be compiled given the heat capacity, density and temperature distributions of each of its components. Most regions are now reasonably well sampled and monitored, with the most significant exception being the deep ocean.
Estimates of the absolute magnitude of EEI have likewise been calculated using the measured temperature changes during recent multi-decadal time intervals. For the 2006 to 2020 period EEI was about +0.76±0.2 W/m2 and showed a significant increase above the mean of +0.48±0.1 W/m2 for the 1971 to 2020 period.
EEI has been positive because temperatures have increased almost everywhere for over 50 years. Global surface temperature (GST) is calculated by averaging temperatures measured at the surface of the sea along with air temperatures measured over land. Reliable data extending to at least 1880 shows that GST has undergone a steady increase of about 0.18 °C per decade since about year 1970.
Ocean waters are especially effective absorbents of solar energy and have a far greater total heat capacity than the atmosphere. Research vessels and stations have sampled sea temperatures at depth and around the globe since before 1960. Additionally, after the year 2000, an expanding network of nearly 4000 Argo robotic floats has measured the temperature anomaly, or equivalently the ocean heat content change (ΔOHC). Since at least 1990, OHC has increased at a steady or accelerating rate. ΔOHC represents the largest portion of EEI since oceans have thus far taken up over 90% of the net excess energy entering the system over time (Δt):
{\displaystyle EEI\gtrsim \Delta OHC/\Delta t.}
Earth's outer crust and thick ice-covered regions have taken up relatively little of the excess energy. This is because excess heat at their surfaces flows inward only by means of thermal conduction, and thus penetrates only several tens of centimeters on the daily cycle and only several tens of meters on the annual cycle. Much of the heat uptake goes either into melting ice and permafrost or into evaporating more water from soils.
=== Measurements at top of atmosphere (TOA) ===
Several satellites measure the energy absorbed and radiated by Earth, and thus by inference the energy imbalance. These are located top of atmosphere (TOA) and provide data covering the globe. The NASA Earth Radiation Budget Experiment (ERBE) project involved three such satellites: the Earth Radiation Budget Satellite (ERBS), launched October 1984; NOAA-9, launched December 1984; and NOAA-10, launched September 1986.
NASA's Clouds and the Earth's Radiant Energy System (CERES) instruments have been part of its Earth Observing System (EOS) since March 2000. CERES is designed to measure both solar-reflected (short wavelength) and Earth-emitted (long wavelength) radiation. The CERES data showed an increase in EEI from +0.42±0.48 W/m2 in 2005 to +1.12±0.48 W/m2 in 2019. Contributing factors included more water vapor, fewer clouds, increasing greenhouse gases, and declining ice, which were partially offset by rising temperatures. Subsequent investigation of the behavior using the GFDL CM4/AM4 climate model concluded there was a less than 1% chance that internal climate variability alone caused the trend.
Other researchers have used data from CERES, AIRS, CloudSat, and other EOS instruments to look for trends of radiative forcing embedded within the EEI data. Their analysis showed a forcing rise of +0.53±0.11 W/m2 from years 2003 to 2018. About 80% of the increase was associated with the rising concentration of greenhouse gases which reduced the outgoing longwave radiation.
Further satellite measurements including TRMM and CALIPSO data have indicated additional precipitation, which is sustained by increased energy leaving the surface through evaporation (the latent heat flux), offsetting some of the increase in the longwave greenhouse flux to the surface.
Radiometric calibration uncertainties limit the capability of the current generation of satellite-based instruments, which are otherwise stable and precise. As a result, relative changes in EEI can be quantified with an accuracy that is not achievable for any single measurement of the absolute imbalance.
=== Geodetic and hydrographic surveys ===
Observations since 1994 show that ice has retreated from every part of Earth at an accelerating rate. Mean global sea level has likewise risen as a consequence of the ice melt in combination with the overall rise in ocean temperatures.
These shifts have contributed measurable changes to the geometric shape and gravity of the planet.
Changes to the mass distribution of water within the hydrosphere and cryosphere have been deduced using gravimetric observations by the GRACE satellite instruments. These data have been compared against ocean surface topography and further hydrographic observations using computational models that account for thermal expansion, salinity changes, and other factors. Estimates thereby obtained for ΔOHC and EEI have agreed with the other (mostly) independent assessments within uncertainties.
=== Importance as a climate change metric ===
Climate scientists Kevin Trenberth, James Hansen, and colleagues have identified the monitoring of Earth's energy imbalance as an important metric to help policymakers guide the pace for mitigation and adaptation measures. Because of climate system inertia, longer-term EEI (Earth's energy imbalance) trends can forecast further changes that are "in the pipeline".
Scientists found that the EEI is the most important metric related to climate change. It is the net result of all the processes and feedbacks in play in the climate system. Knowing how much of this extra energy affects weather systems and rainfall is vital for understanding the increase in weather extremes.
In 2012, NASA scientists reported that to stop global warming atmospheric CO2 concentration would have to be reduced to 350 ppm or less, assuming all other climate forcings were fixed. As of 2020, atmospheric CO2 reached 415 ppm and all long-lived greenhouse gases exceeded a 500 ppm CO2-equivalent concentration due to continued growth in human emissions.
== See also ==
Lorenz energy cycle
Planetary equilibrium temperature
Climate sensitivity
Tipping points in the climate system
Climate change portal
== References ==
== External links ==
NASA: The Atmosphere's Energy Budget
Clouds and Earth's Radiant Energy System (CERES)
NASA/GEWEX Surface Radiation Budget (SRB) Project | Wikipedia/Earth's_energy_budget |
Nanogeoscience is the study of nanoscale phenomena related to geological systems. Predominantly, this is investigated by studying environmental nanoparticles between 1–100 nanometers in size. Other applicable fields of study include studying materials with at least one dimension restricted to the nanoscale (e.g. thin films, confined fluids) and the transfer of energy, electrons, protons, and matter across environmental interfaces.
== The atmosphere ==
As more dust enters the atmosphere due to the consequences of human activity (from direct effects, such as clearing of land and desertification, versus indirect effects, such as global warming), it becomes more important to understand the effects of mineral dust on the gaseous composition of the atmosphere, cloud formation conditions, and global-mean radiative forcing (i.e., heating or cooling effects).
== The ocean ==
Oceanographers generally study particles that measure 0.2 micrometres and larger, which means a lot of nanoscale particles are not examined, particularly with respect to formation mechanisms.
== The soils ==
Water–rock–bacteria nanoscience
Although the field is by no means fully developed, nearly all aspects (both geo- and bioprocesses) of weathering, soil, and water–rock interaction science are inexorably linked to nanoscience. Within the Earth's near-surface, materials that are broken down, as well as materials that are produced, are often in the nanoscale regime. Further, as organic molecules, simple and complex, as well as bacteria and all flora and fauna in soils and rocks interact with the mineral components present, nanodimensions and nanoscale processes are the order of the day.
Metal transport nanoscience
On land, researchers study how nanosized minerals capture toxins such as arsenic, copper, and lead from the soil. Facilitating this process, called soil remediation, is a tricky business.
Nanogeoscience is in a relatively early stage of development. The future directions of nanoscience in the geosciences will include a determination of the identity, distribution, and unusual chemical properties of nanosized particles and/or films in the oceans, on the continents, and in the atmosphere, and how they drive Earth processes in unexpected ways. Further, nanotechnology will be the key to developing the next generation of Earth and environmental sensing systems.
== Size-dependent stability and reactivity of nanoparticles ==
Nanogeoscience deals with structures, properties and behaviors of nanoparticles in soils, aquatic systems and atmospheres. One of the key features of nanoparticles is the size-dependence of the nanoparticle stability and reactivity. This arises from the large specific surface area and differences in surface atomic structure of nanoparticles at small particle sizes. In general, the free energy of nanoparticles is inversely proportional to their particle size. For materials that can adopt two or more structures, size-dependent free energy may result in phase stability crossover at certain sizes. Free energy reduction drives crystal growth (atom-by-atom or by oriented attachment), which may again drive the phase transformation due to the change of the relative phase stability at increasing sizes. These processes impact the surface reactivity and mobility of nanoparticles in natural systems.
Well-identified size-dependent phenomena of nanoparticles include:
Phase stability reversal of bulk (macroscopic) particles at small sizes. Usually, a less stable bulk-phase at low temperature (and/or low pressure) becomes more stable than the bulk-stable phase as the particle size decreases below a certain critical size. For instance, bulk anatase (TiO2) is metastable with respect to bulk rutile (TiO2). However, in air, anatase becomes more stable than rutile at particle sizes below 14 nm. Similarly, below 1293 K, wurtzite (ZnS) is less stable than sphalerite (ZnS). In vacuum, wurtzite becomes more stable than sphalerite when the particle size is less than 7 nm at 300 K. At very small particle sizes, the addition of water to the surface of ZnS nanoparticles can induce a change in nanoparticle structure and surface-surface interactions can drive a reversible structural transformation upon aggregation/disaggregation. Other examples of size-dependent phase stability include systems of Al2O3, ZrO2, C, CdS, BaTiO3, Fe2O3, Cr2O3, Mn2O3, Nb2O3, Y2O3, and Au-Sb.
Phase transformation kinetics is size-dependent and transformations usually occur at low temperatures (less than several hundred degrees). Under such conditions, rates of surface nucleation and bulk nucleation are low due to their high activation energies. Thus, phase transformation occurs predominantly via interface nucleation that depends on contact between nanoparticles. As a consequence, the transformation rate is particle number (size)-dependent and it proceeds faster in densely packed (or highly aggregated) than in loosely packed nanoparticles. Complex concurrent phase transformation and particle coarsening often occur in nanoparticles.
Size-dependent adsorption on nanoparticles and oxidation of nanominerals.
These size-dependent properties highlight the importance of the particle size in nanoparticle stability and reactivity.
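The stability crossover behaviour (for example the anatase case above) can be illustrated with a simple model in which the molar free energy of spherical particles of radius r is a bulk term plus a surface term, G(r) = G_bulk + γ·(3Vm/r). All numbers below are hypothetical and were chosen only so that the crossover lands near the 14 nm figure quoted above; they are not measured values for TiO2 or any other system.

```python
import numpy as np

# Illustrative (not measured) parameters for two polymorphs of one compound.
# The phase that is metastable in bulk is assumed to have the lower surface
# energy, which is the usual explanation for stability crossovers at small sizes.
bulk_dG = 3.0e3               # J/mol by which phase B is less stable than A in bulk
gamma_A, gamma_B = 1.5, 0.8   # surface energies, J/m2 (hypothetical)
molar_volume = 2.0e-5         # m3/mol (hypothetical)

def molar_free_energy(gamma, bulk_offset, radius):
    """G(r) = G_bulk + gamma * (molar surface area), with A_molar = 3 Vm / r
    for an assembly of spherical particles of radius r."""
    return bulk_offset + gamma * 3.0 * molar_volume / radius

radii = np.logspace(-9, -7, 200)          # 1 nm to 100 nm
crossover = radii[np.argmin(np.abs(
    molar_free_energy(gamma_A, 0.0, radii) -
    molar_free_energy(gamma_B, bulk_dG, radii)))]
print(f"stability crossover near r = {crossover*1e9:.1f} nm")
```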
== References ==
== Further reading ==
== External links ==
Table of contents of special issue on nanogeoscience (Elements magazine)
Nanogeoscience research groups:
Berkeley Nanogeoscience Group
University of California-Berkeley
Virginia Tech
University of Florida
University of Wisconsin–Madison
University of Minnesota
University of Copenhagen
University of Vienna | Wikipedia/Nanogeoscience |
Marine chemistry, also known as ocean chemistry or chemical oceanography, is the study of the chemical composition and processes of the world’s oceans, including the interactions between seawater, the atmosphere, the seafloor, and marine organisms. This field encompasses a wide range of topics, such as the cycling of elements like carbon, nitrogen, and phosphorus, the behavior of trace metals, and the study of gases and nutrients in marine environments. Marine chemistry plays a crucial role in understanding global biogeochemical cycles, ocean circulation, and the effects of human activities, such as pollution and climate change, on oceanic systems. It is influenced by plate tectonics and seafloor spreading, turbidity, currents, sediments, pH levels, atmospheric constituents, metamorphic activity, and ecology.
The impact of human activity on the chemistry of the Earth's oceans has increased over time, with pollution from industry and various land-use practices significantly affecting the oceans. Moreover, increasing levels of carbon dioxide in the Earth's atmosphere have led to ocean acidification, which has negative effects on marine ecosystems. The international community has agreed that restoring the chemistry of the oceans is a priority, and efforts toward this goal are tracked as part of Sustainable Development Goal 14.
Due to the interrelatedness of the ocean, chemical oceanographers frequently work on problems relevant to physical oceanography, geology and geochemistry, biology and biochemistry, and atmospheric science. Many of them are investigating biogeochemical cycles, and the marine carbon cycle in particular attracts significant interest due to its role in carbon sequestration and ocean acidification. Other major topics of interest include analytical chemistry of the oceans, marine pollution, and anthropogenic climate change.
== Organic compounds in the oceans ==
=== Dissolved Organic Matter (DOM) ===
DOM is a critical component of the ocean's carbon pool and includes many molecules such as amino acids, sugars, and lipids. It represents about 90% of the total organic carbon in marine environments. Colored dissolved organic matter (CDOM) is estimated to range from 20-70% of the carbon content of the oceans, being higher near river outlets and lower in the open ocean. DOM can be recycled and put back into the food web through a process called the microbial loop, which is essential for nutrient cycling and supporting primary productivity. It also plays a vital role in the global regulation of oceanic carbon storage, as some forms resist microbial degradation and may persist within the ocean for centuries. Marine life is broadly similar in its biochemistry to terrestrial organisms, and it is the most prolific source of halogenated organic compounds.
=== Particulate Organic Matter (POM) ===
POM consists of large organic particles, such as organisms, fecal pellets, and detritus, which settle through the water column. It is a major component of the biological pump, a process by which carbon is transferred from the surface ocean to the deep sea. As POM sinks, it is decomposed by bacterial activity, releasing nutrients and carbon dioxide. The refractory POM fraction can settle on the ocean floor and make relevant contributions to carbon sequestration over very long periods of time.
== Chemical ecology of extremophiles ==
The ocean is home to a variety of marine organisms known as extremophiles – organisms that thrive in extreme conditions of temperature, pressure, and light availability. Extremophiles inhabit many unique habitats in the ocean, such as hydrothermal vents, black smokers, cold seeps, hypersaline regions, and sea ice brine pockets. Some scientists have speculated that life may have evolved from hydrothermal vents in the ocean. In hydrothermal vents and similar environments, many extremophiles acquire energy through chemoautotrophy, using chemical compounds as energy sources, rather than light as in photoautotrophy. Hydrothermal vents enrich the nearby environment in chemicals such as elemental sulfur, H2, H2S, Fe2+, and methane. Chemoautotrophic organisms, primarily prokaryotes, derive energy from these chemicals through redox reactions. These organisms then serve as food sources for higher trophic levels, forming the basis of unique ecosystems.
Several different metabolisms are present in hydrothermal vent ecosystems. Many marine microorganisms, including Thiomicrospira, Halothiobacillus, and Beggiatoa, are capable of oxidizing sulfur compounds, including elemental sulfur and the often toxic compound H2S. H2S is abundant in hydrothermal vents, formed through interactions between seawater and rock at the high temperatures found within vents. This compound is a major energy source, forming the basis of the sulfur cycle in hydrothermal vent ecosystems. In the colder waters surrounding vents, sulfur-oxidation can occur using oxygen as an electron acceptor; closer to the vents, organisms must use alternate metabolic pathways or utilize another electron acceptor, such as nitrate. Some species of Thiomicrospira can utilize thiosulfate as an electron donor, producing elemental sulfur. Additionally, many marine microorganisms are capable of iron-oxidation, such as Mariprofundus ferrooxydans. Iron-oxidation can be oxic, occurring in oxygen-rich parts of the ocean, or anoxic, requiring either an electron acceptor such as nitrate or light energy. In iron-oxidation, Fe(II) is used as an electron donor; conversely, iron-reducers utilize Fe(III) as an electron acceptor. These two metabolisms form the basis of the iron-redox cycle and may have contributed to banded iron formations.
At another extreme, some marine extremophiles inhabit sea ice brine pockets where the temperature is very low and the salinity is very high. Organisms trapped within freezing sea ice must adapt to a rapid change in salinity of up to 3 times that of regular seawater, as well as the rapid return to regular seawater salinity when the ice melts. Most brine-pocket dwelling organisms are photosynthetic; therefore, these microenvironments can become hyperoxic, which can be toxic to their inhabitants. Thus, these extremophiles often produce high levels of antioxidants.
== Plate tectonics ==
Seafloor spreading on mid-ocean ridges is a global scale ion-exchange system. Hydrothermal vents at spreading centers introduce various amounts of iron, sulfur, manganese, silicon and other elements into the ocean, some of which are recycled into the ocean crust. Helium-3, an isotope that accompanies volcanism from the mantle, is emitted by hydrothermal vents and can be detected in plumes within the ocean.
Spreading rates on mid-ocean ridges vary between 10 and 200 mm/yr. Rapid spreading rates cause increased basalt reactions with seawater. The magnesium/calcium ratio will be lower because more magnesium ions are being removed from seawater and consumed by the rock, and more calcium ions are being removed from the rock and released to seawater. Hydrothermal activity at ridge crest is efficient in removing magnesium. A lower Mg/Ca ratio favors the precipitation of low-Mg calcite polymorphs of calcium carbonate (calcite seas).
Slow spreading at mid-ocean ridges has the opposite effect and will result in a higher Mg/Ca ratio favoring the precipitation of aragonite and high-Mg calcite polymorphs of calcium carbonate (aragonite seas).
Experiments show that most modern high-Mg calcite organisms would have been low-Mg calcite in past calcite seas, meaning that the Mg/Ca ratio in an organism's skeleton varies with the Mg/Ca ratio of the seawater in which it was grown.
The mineralogy of reef-building and sediment-producing organisms is thus regulated by chemical reactions occurring along the mid-ocean ridge, the rate of which is controlled by the rate of sea-floor spreading.
== Human impacts ==
=== Marine pollution ===
=== Climate change ===
Increased carbon dioxide levels, mostly from burning fossil fuels, are changing ocean chemistry. Global warming and changes in salinity have significant implications for the ecology of marine environments.
==== Acidification ====
==== Deoxygenation ====
== History ==
Early inquiries about marine chemistry usually concerned the origin of salinity in the ocean, including work by Robert Boyle. Modern chemical oceanography began as a field with the 1872–1876 Challenger expedition, led by the British Royal Navy, which made the first systematic measurements of ocean chemistry. The chemical analysis of these samples, which provided the first systematic study of the composition of seawater, was conducted by John Murray and George Forchhammer, leading to a better understanding of elements such as chloride, sodium, and sulfate in ocean waters.
The early 20th century saw significant advancements in marine chemistry, particularly with more accurate analytical techniques. Scientists like Martin Knudsen created the Knudsen bottle, an instrument used to collect water samples from different ocean depths. Over the past three decades (the 1970s, 1980s, and 1990s), a comprehensive evaluation of advancements in chemical oceanography was compiled through a National Science Foundation initiative known as Futures of Ocean Chemistry in the United States (FOCUS). This project brought together numerous prominent chemical oceanographers, marine chemists, and geochemists to contribute to the FOCUS report.
After World War II, advancements in geochemical techniques propelled marine chemistry into a new era. Researchers began using isotopic analysis to study ocean circulation and the carbon cycle. Roger Revelle and Hans Suess pioneered using radiocarbon dating to investigate oceanic carbon reservoirs and their exchange with the atmosphere.
Since the 1970s, the development of highly sophisticated instruments and computational models has revolutionized marine chemistry. Scientists can now measure trace metals, organic compounds, and isotopic ratios with unprecedented precision. Studies of marine biogeochemical cycles, including the carbon, nitrogen, and sulfur cycles, have become an area of interest to understand global climate change. The use of remote sensing technology and global ocean observation programs, such as the International Geosphere-Biosphere Programme (IGBP), has provided large-scale data on ocean chemistry, allowing scientists to monitor ocean acidification, deoxygenation, and other critical issues affecting the marine environment.
== Tools used for analysis ==
Chemical oceanographers collect and measure chemicals in seawater, using the standard toolset of analytical chemistry as well as instruments like pH meters, electrical conductivity meters, fluorometers, and dissolved CO₂ meters. Most data are collected through shipboard measurements and from autonomous floats or buoys, but remote sensing is used as well. On an oceanographic research vessel, a CTD is used to measure electrical conductivity, temperature, and pressure, and is often mounted on a rosette of Nansen bottles to collect seawater for analysis. Sediments are commonly studied with a box corer or a sediment trap, and older sediments may be recovered by scientific drilling.
Advanced analytical equipment such as mass spectrometers and chromatographs are applied to detect trace elements, isotopes, and organic compounds. This allows for precisely measuring nutrients, gases, and pollutants in marine environments. In recent years, autonomous underwater vehicles (AUVs) and remote sensing technology have enabled continuous, large-scale ocean chemistry monitoring, particularly for tracking changes in ocean acidification and nutrient cycles.
== Marine chemistry on other planets and their moons ==
The chemistry of the subsurface ocean of Europa may be Earthlike. The subsurface ocean of Enceladus vents hydrogen and carbon dioxide to space.
== See also ==
Global Ocean Data Analysis Project
Oceanography
Physical oceanography
World Ocean Atlas
Seawater
RISE project
== References == | Wikipedia/Chemical_oceanography |
Petrophysics (from the Greek πέτρα, petra, "rock" and φύσις, physis, "nature") is the study of physical and chemical rock properties and their interactions with fluids.
A major application of petrophysics is in studying reservoirs for the hydrocarbon industry. Petrophysicists work together with reservoir engineers and geoscientists to understand the porous media properties of the reservoir, particularly how the pores are interconnected in the subsurface, which controls the accumulation and migration of hydrocarbons. Some fundamental petrophysical properties determined are lithology, porosity, water saturation, permeability, and capillary pressure.
The petrophysicist's workflow measures and evaluates these petrophysical properties through well-log interpretation (i.e. at in-situ reservoir conditions) and core analysis in the laboratory. After a well is drilled, different well-log tools are used to measure the petrophysical and mineralogical properties in the borehole, using radioactivity-based and acoustic technologies. In addition, core plugs are taken from the well as sidewall core or whole core samples. These studies are combined with geological, geophysical, and reservoir engineering studies to model the reservoir and determine its economic feasibility.
While most petrophysicists work in the hydrocarbon industry, some also work in the mining, water resources, geothermal energy, and carbon capture and storage industries. Petrophysics is part of the geosciences, and its studies are used by petroleum engineering, geology, geochemistry, exploration geophysics and others.
== Fundamental petrophysical properties ==
The following are the fundamental petrophysical properties used to characterize a reservoir:
Lithology: A description of the rock's physical characteristics, such as grain size, composition and texture. By studying the lithology of local geological outcrops and core samples, geoscientists can use a combination of log measurements, such as natural gamma, neutron, density and resistivity, to determine the lithology down the well.
Porosity: The pore space volume portion related to the bulk rock volume, symbolized as ϕ. It is typically calculated using data from an instrument that measures the reaction of the rock to bombardment by neutrons or gamma rays but can also be derived from sonic and NMR logging. A helium porosimeter is the main technique to measure grain volume and porosity in the laboratory.
Water saturation: The fraction of the pore space occupied by water, known by the symbol Sw. This is typically calculated using data from an instrument that measures the resistivity of the rock and applying empirical or theoretical water saturation models; the most widely used is Archie's (1942) model (a sketch of such a calculation follows this list).
Permeability: The quantity of fluid (water or hydrocarbon) that can flow through a rock as a function of time and pressure, related to how interconnected the pores are, and known by the symbol k. Formation testing is the only tool that can directly measure a rock formation's permeability down a well. In its absence, which is common, an estimate for permeability can be derived from empirical relationships with other measurements such as porosity, NMR and sonic logging. Darcy's law is applied in the laboratory to measure the core plug permeability with an inert gas or liquid (i.e. one that does not react with the rock).
Formation thickness (h) of rock with enough permeability to deliver fluids to a well bore, this property is often called “net reservoir rock.” In the oil and gas industry, another quantity “net pay” is computed which is the thickness of rock that can deliver hydrocarbons to the well bore at a profitable rate.
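As referenced in the water saturation item above, a resistivity-based saturation estimate can be sketched with Archie's (1942) relation. The parameter values used below (a, m, n, Rw and the log readings) are illustrative defaults rather than measured ones; in practice they are calibrated against core data for the formation at hand.

```python
# Sketch of an Archie-type water-saturation estimate from a resistivity log.
def archie_water_saturation(phi, rt, rw=0.05, a=1.0, m=2.0, n=2.0):
    """Archie (1942): Sw = [ (a * Rw) / (phi**m * Rt) ] ** (1/n)

    phi : porosity (fraction of bulk volume)
    rt  : true formation resistivity from the deep resistivity log (ohm-m)
    rw  : formation-water resistivity (ohm-m)
    a, m, n : tortuosity factor, cementation exponent, saturation exponent
    """
    sw = ((a * rw) / (phi ** m * rt)) ** (1.0 / n)
    return min(sw, 1.0)          # saturation cannot exceed 100% of pore space

# Hypothetical log reading in a clean (clay-free) sandstone interval:
sw = archie_water_saturation(phi=0.25, rt=20.0)
print(f"Sw = {sw:.2f}, hydrocarbon saturation = {1 - sw:.2f}")
```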
== Rock mechanical properties ==
The rock's mechanical or geomechanical properties are also used within petrophysics to determine the reservoir strength, elastic properties, hardness, ultrasonic behaviour, index characteristics and in situ stresses.
Petrophysicists use acoustic and density measurements of rocks to compute their mechanical properties and strength. They measure the compressional (P) wave velocity of sound through the rock and the shear (S) wave velocity and use these with the density of the rock to compute the rock's compressive strength, which is the compressive stress that causes a rock to fail, and the rock's flexibility, which is the relationship between stress and deformation for a rock. Converted-wave analysis is also used to determine the subsurface lithology and porosity.
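For example, the dynamic elastic moduli follow from the standard isotropic relations between density, P-wave velocity and S-wave velocity; the input values below are hypothetical but typical of a sandstone.

```python
# Dynamic elastic moduli from sonic and density measurements (isotropic relations).
rho = 2400.0       # bulk density, kg/m3
vp = 3800.0        # compressional (P) wave velocity, m/s
vs = 2200.0        # shear (S) wave velocity, m/s

shear_modulus = rho * vs**2                          # Pa
bulk_modulus = rho * (vp**2 - (4.0 / 3.0) * vs**2)   # Pa
poisson_ratio = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))
youngs_modulus = 2 * shear_modulus * (1 + poisson_ratio)

print(f"G = {shear_modulus/1e9:.1f} GPa, K = {bulk_modulus/1e9:.1f} GPa, "
      f"E = {youngs_modulus/1e9:.1f} GPa, nu = {poisson_ratio:.2f}")
```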
Geomechanics measurements are useful for drillability assessment, wellbore and open-hole stability design, log strength and stress correlations, and formation and strength characterization. These measurements are also used to design dams, roads, foundations for buildings, and many other large construction projects. They can also help interpret seismic signals from the Earth, either manufactured seismic signals or those from earthquakes.
== Methods of petrophysical analysis ==
=== Core analysis ===
Core samples are pieces of rock collected from a subsurface formation during drilling operations to study the physical and mechanical properties of the formation in detail. They provide the only direct evidence of the reservoir's rock structure. Core analysis is the "ground truth" data measured in a laboratory to determine the key petrophysical features of the in-situ rock. In the petroleum industry, rock samples are retrieved from the subsurface and measured by the core laboratories of oil or service companies. This process is time-consuming and expensive; thus, it can only be applied to some of the wells drilled in a field. Proper design, planning and supervision decrease data redundancy and uncertainty, and client and laboratory teams must work in close alignment to optimise the core analysis process.
=== Well-logging ===
Well logging is a relatively inexpensive method of obtaining petrophysical properties downhole. Measurement tools are conveyed downhole using either the wireline or the logging-while-drilling (LWD) method.
An example of wireline logs is shown in Figure 1. The first “track” shows the natural gamma radiation level of the rock. The gamma radiation level “log” shows increasing radiation to the right and decreasing radiation to the left. The rocks emitting less radiation have more yellow shading. The detector is very sensitive, and the amount of radiation is very low. In clastic rock formations, rocks with smaller amounts of radiation are more likely to be coarser-grained and have more pore space, while rocks with higher amounts of radiation are more likely to have finer grains and less pore space.
The second track in the plot records the depth below the reference point, usually the Kelly bush or rotary table in feet, so these rock formations are 11,900 feet below the Earth's surface.
In the third track, the electrical resistivity of the rock is presented. The water in this rock is salty. The electrolytes in the water flowing within the pore space conduct electricity, resulting in a lower resistivity of the rock. This also indicates an increased water saturation and decreased hydrocarbon saturation.
The fourth track shows the computed water saturation, both as “total” water (including the water bound to the rock) in magenta and the “effective water” or water that is free to flow in black. Both quantities are given as a fraction of the total pore space.
The fifth track shows the fraction of the total rock that is pore space filled with fluids (i.e. porosity). The display of the pore space is divided into green for oil and blue for movable water. The black line shows the fraction of the pore space that contains either water or oil that can move or be "produced" (i.e. effective porosity), while the magenta line indicates the total porosity, which includes the water that is permanently bound to the rock.
The last track represents the rock lithology divided into sandstone and shale portions. The yellow pattern represents the fraction of the rock (excluding fluids) composed of coarser-grained sandstone. The gray pattern represents the fraction of rock composed of finer-grained, i.e. "shale." The sandstone is the part of the rock that contains the producible hydrocarbons and water.
=== Modelling ===
Reservoir models are built by reservoir engineering in specialised software with the petrophysical dataset elaborated by the petrophysicist to estimate the amount of hydrocarbon present in the reservoir, the rate at which that hydrocarbon can be produced to the Earth's surface through wellbores and the fluid flow in rocks. Similar models in the water resource industry compute how much water can be produced to the surface over long periods without depleting the aquifer.
== Rock volumetric model for shaly sand formation ==
Shaly sand is a term referring to a mixture of shale or clay and sandstone. A significant portion of clay minerals and silt-size particles results in a fine-grained sandstone with higher density and rock complexity.
The shale/clay volume is an essential petrophysical parameter to estimate, since it contributes to the rock bulk volume and must be defined correctly for porosity and water saturation to be evaluated accurately. As shown in Figure 2, for modelling a clastic rock formation there are four components, whose definitions are typical for shaly or clayey sands: the rock matrix (grains), the clay portion that surrounds the grains, water, and hydrocarbons. The two fluids are stored only in the pore space of the rock matrix.
Due to the complex microstructure, for a water-wet rock the following terms make up a clastic reservoir formation:
Vma = volume of matrix grains.
Vdcl = volume of dry clay.
Vcbw = volume of clay bound water.
Vcl = volume of wet clay (Vdcl +Vcbw).
Vcap = volume of capillary bound water.
Vfw = volume of free water.
Vhyd = volume of hydrocarbon.
ΦT = Total porosity (PHIT), which includes the connected and unconnected pore throats.
Φe = Effective porosity, which includes only the interconnected pore throats.
Vb = bulk volume of the rock.
Key equations:
Vma + Vcl + Vfw + Vhyd = 1
Rock matrix volume + wet clay volume + free water volume + hydrocarbon volume = bulk rock volume (with each term expressed as a fraction of the bulk volume, so that the fractions sum to 1)
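A minimal bookkeeping sketch of this simplified four-component balance is shown below; the volume fractions are hypothetical, and the water-saturation step applies only within this simplified model.

```python
# Bookkeeping for the simplified four-component model above
# (rock matrix + wet clay + free water + hydrocarbon = bulk volume),
# with every term expressed as a fraction of the bulk rock volume.
v_matrix = 0.60      # Vma
v_wet_clay = 0.15    # Vcl (dry clay + clay-bound water)
v_free_water = 0.10  # Vfw

# Closure: whatever volume is left must be hydrocarbon.
v_hydrocarbon = 1.0 - (v_matrix + v_wet_clay + v_free_water)   # Vhyd = 0.15

# Within this simplified model the free pore space is Vfw + Vhyd, and the
# water saturation of that pore space is:
phi_free = v_free_water + v_hydrocarbon
s_w = v_free_water / phi_free

print(f"Vhyd = {v_hydrocarbon:.2f}, free pore space = {phi_free:.2f}, Sw = {s_w:.2f}")
```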
== Scholarly societies ==
The Society of Petrophysicists and Well Log Analysts (SPWLA) is an organisation whose mission is to increase the awareness of petrophysics, formation evaluation, and well logging best practices in the oil and gas industry and the scientific community at large.
== See also ==
Archie's law – Relationship between the electrical conductivity of a rock to its porosity
Formation evaluation – Assessing if boreholes drilled for oil or gas are able to deliver a profitable production
Gardner's relation – Equation that relates seismic P-wave velocity to the bulk density of the lithology
Petrology – Branch of geology that studies the formation, composition, distribution and structure of rocks
== References ==
=== Further reading ===
Guéguen, Yves; Palciauskas, Victor (1994), Introduction to the Physics of Rocks, Princeton University Press, ISBN 978-0-691-03452-2
Mavko, Gary; Mukerji, Tapan; Dvorkin, Jack (2003), The Rock Physics Handbook, Cambridge University Press, ISBN 978-0-521-54344-6
Santamarina, J. Carlos; Klein, Katherine A.; Fam, Moheb A. (2001), Soils and Waves: Particulate Materials Behavior, Characterization and Process Monitoring, John Wiley & Sons, Ltd., ISBN 978-0-471-49058-6
Tiab, Djebbar; Donaldson, Erle C. (2012). Petrophysics Theory and Practice of Measuring Reservoir Rock and Fluid Transport Properties (3rd ed.). Oxford: Gulf Professional Pub. ISBN 978-0-12-383848-3.
Raquel, S.; Benítez, G.; Molina, L.; Pedroza, C. (2016). "Neural networks for defining spatial variation of rock properties in sparsely instrumented media" (PDF). Boletín de la Sociedad Geológica Mexicana. Vol. 553. Retrieved 12 October 2018.
== External links ==
Petrophysics Forum
Crains Petrophysical Handbook
RockPhysicists
Society of Petrophysicists and Well Log Analysts (SPWLA) | Wikipedia/Petrophysics |
The geology of solar terrestrial planets mainly deals with the geological aspects of the four terrestrial planets of the Solar System – Mercury, Venus, Earth, and Mars – and one terrestrial dwarf planet: Ceres. Earth is the only terrestrial planet known to have an active hydrosphere.
Terrestrial planets are substantially different from the giant planets, which might not have solid surfaces and are composed mostly of some combination of hydrogen, helium, and water existing in various physical states. Terrestrial planets have compact, rocky surfaces, and Venus, Earth, and Mars each also has an atmosphere. Their size, radius, and density are all similar.
Terrestrial planets have numerous similarities to dwarf planets (objects like Pluto), which also have solid surfaces but are primarily composed of icy materials. During the formation of the Solar System, there were probably many more such bodies (planetesimals), but they all merged with, or were destroyed by, the four terrestrial worlds that remain in the solar nebula.
The terrestrial planets all have roughly the same structure: a central metallic core, mostly iron, with a surrounding silicate mantle. The Moon is similar, but lacks a substantial iron core. Three of the four solar terrestrial planets (Venus, Earth, and Mars) have substantial atmospheres; all have impact craters and tectonic surface features such as rift valleys and volcanoes.
The term inner planet should not be confused with inferior planet, which refers to any planet that is closer to the Sun than the observer's planet is, but usually refers to Mercury and Venus.
== Formation of solar planets ==
The Solar System is believed to have formed according to the nebular hypothesis, first proposed in 1755 by Immanuel Kant and independently formulated by Pierre-Simon Laplace. This theory holds that 4.6 billion years ago the Solar System formed from the gravitational collapse of a giant molecular cloud. This initial cloud was likely several light-years across and probably birthed several stars.
The first solid particles were microscopic in size. These particles orbited the Sun in nearly circular orbits alongside the gas from which they had condensed. Gradually, gentle collisions allowed the flakes to stick together and make larger particles which, in turn, attracted more solid particles towards them. This process is known as accretion. The objects formed by accretion are called planetesimals—they act as seeds for planet formation. Initially, planetesimals were closely packed. They coalesced into larger objects, forming clumps of up to a few kilometers across in a few million years, a short time compared with the age of the Solar System. As the planetesimals grew bigger, collisions became highly destructive, making further growth more difficult. Only the biggest planetesimals survived the fragmentation process and continued to slowly grow into protoplanets by accretion of planetesimals of similar composition. After a protoplanet formed, the accumulation of heat from radioactive decay of short-lived elements melted the planet, allowing materials to differentiate (i.e. to separate according to their density).
=== Terrestrial planets ===
In the warmer inner Solar System, planetesimals formed from rocks and metals cooked billions of years ago in the cores of massive stars.
These elements constituted only 0.6% of the material in the solar nebula. That is why the terrestrial planets could not grow very large and could not exert a strong pull on hydrogen and helium gas. Also, the faster collisions among particles close to the Sun were more destructive on average. Even if the terrestrial planets had had hydrogen and helium, the Sun would have heated the gases and caused them to escape. Hence, solar terrestrial planets such as Mercury, Venus, Earth, and Mars are dense, small planets composed mostly of the roughly 2% of heavier elements contained in the solar nebula.
== Surface geology of inner solar planets ==
The four inner or terrestrial planets have dense, rocky compositions, few or no moons, and no ring systems. They are composed largely of minerals with high melting points, such as the silicates which form their solid crusts and semi-liquid mantles, and metals such as iron and nickel, which form their cores.
=== Mercury ===
The Mariner 10 mission (1974) mapped about half the surface of Mercury. On the basis of that data, scientists have a first-order understanding of the geology and history of the planet. Mercury's surface shows intercrater plains, basins, smooth plains, craters, and tectonic features.
Mercury's oldest surface is its intercrater plains, which are present (but much less extensive) on the Moon. The intercrater plains are level to gently rolling terrain that occur between and around large craters. The plains predate the heavily cratered terrain, and have obliterated many of the early craters and basins of Mercury; they probably formed by widespread volcanism early in Mercurian history.
Mercurian craters have the morphological elements of lunar craters—the smaller craters are bowl-shaped, and with increasing size they develop scalloped rims, central peaks, and terraces on the inner walls. The ejecta sheets have a hilly, lineated texture and swarms of secondary impact craters. Fresh craters of all sizes have dark or bright halos and well-developed ray systems. Although Mercurian and lunar craters are superficially similar, they show subtle differences, especially in deposit extent. The continuous ejecta and fields of secondary craters on Mercury are far less extensive (by a factor of about 0.65) for a given rim diameter than those of comparable lunar craters. This difference results from the 2.5 times higher gravitational field on Mercury compared with the Moon. As on the Moon, impact craters on Mercury are progressively degraded by subsequent impacts. The freshest craters have ray systems and a crisp morphology. With further degradation, the craters lose their crisp morphology and rays and features on the continuous ejecta become more blurred until only the raised rim near the crater remains recognizable. Because craters become progressively degraded with time, the degree of degradation gives a rough indication of the crater's relative age. On the assumption that craters of similar size and morphology are roughly the same age, it is possible to place constraints on the ages of other underlying or overlying units and thus to globally map the relative age of craters.
At least 15 ancient basins have been identified on Mercury. Tolstoj is a true multi-ring basin, displaying at least two, and possibly as many as four, concentric rings. It has a well-preserved ejecta blanket extending outward as much as 500 kilometres (311 mi) from its rim. The basin interior is flooded with plains that clearly postdate the ejecta deposits. Beethoven has only one, subdued massif-like rim 625 kilometres (388 mi) in diameter, but displays an impressive, well lineated ejecta blanket that extends as far as 500 kilometres (311 mi). As at Tolstoj, Beethoven ejecta is asymmetric. The Caloris basin is defined by a ring of mountains 1,300 kilometres (808 mi) in diameter. Individual massifs are typically 30 kilometres (19 mi) to 50 kilometres (31 mi) long; the inner edge of the unit is marked by basin-facing scarps. Lineated terrain extends for about 1,000 kilometres (621 mi) out from the foot of a weak discontinuous scarp on the outer edge of the Caloris mountains; this terrain is similar to the sculpture surrounding the Imbrium basin on the Moon. Hummocky material forms a broad annulus about 800 kilometres (497 mi) from the Caloris mountains. It consists of low, closely spaced to scattered hills about 0.3 to 1 kilometre (1 mi) across and from tens of meters to a few hundred meters high. The outer boundary of this unit is gradational with the (younger) smooth plains that occur in the same region. A hilly and furrowed terrain is found antipodal to the Caloris basin, probably created by antipodal convergence of intense seismic waves generated by the Caloris impact.
The floor of the Caloris basin is deformed by sinuous ridges and fractures, giving the basin fill a grossly polygonal pattern. These plains may be volcanic, formed by the release of magma as part of the impact event, or a thick sheet of impact melt. Widespread areas of Mercury are covered by relatively flat, sparsely cratered plains materials. They fill depressions that range in size from regional troughs to crater floors. The smooth plains are similar to the maria of the Moon, an obvious difference being that the smooth plains have the same albedo as the intercrater plains. Smooth plains are most strikingly exposed in a broad annulus around the Caloris basin. No unequivocal volcanic features, such as flow lobes, leveed channels, domes, or cones are visible. Crater densities indicate that the smooth plains are significantly younger than ejecta from the Caloris basin. In addition, distinct color units, some of lobate shape, are observed in newly processed color data. Such relations strongly support a volcanic origin for the mercurian smooth plains, even in the absence of diagnostic landforms.
Lobate scarps are widely distributed over Mercury and consist of sinuous to arcuate scarps that transect preexisting plains and craters. They are most convincingly interpreted as thrust faults, indicating a period of global compression. The lobate scarps typically transect smooth plains materials (early Calorian age) on the floors of craters, but post-Caloris craters are superposed on them. These observations suggest that lobate-scarp formation was confined to a relatively narrow interval of time, beginning in the late pre-Tolstojan period and ending in the middle to late Calorian Period. In addition to scarps, wrinkle ridges occur in the smooth plains materials. These ridges probably were formed by local to regional surface compression caused by lithospheric loading by dense stacks of volcanic lavas, as suggested for those of the lunar maria.
=== Venus ===
The surface of Venus is comparatively very flat. When 93% of the topography was mapped by Pioneer Venus, scientists found that the total distance from the lowest point to the highest point on the entire surface was about 13 kilometres (8 mi), while on the Earth the distance from the basins to the Himalayas is about 20 kilometres (12.4 mi).
According to data from the Pioneer altimeters, nearly 51% of the surface is located within 500 metres (1,640 ft) of the median radius of 6,052 km (3,760 mi); only 2% of the surface is located at elevations greater than 2 kilometres (1 mi) above the median radius.
Venus shows no evidence of active plate tectonics. There is debatable evidence of active tectonics in the planet's distant past; however, events taking place since then (such as the plausible and generally accepted hypothesis that the Venusian lithosphere has thickened greatly over the course of several hundred million years) have made constraining the course of its geologic record difficult. However, the numerous well-preserved impact craters have been used as a dating method to approximately date the Venusian surface (since there are thus far no known samples of Venusian rock to be dated by more reliable methods). Dates derived are primarily in the range ~500–750 Mya, although ages of up to ~1.2 Gya have been calculated. This research has led to the fairly well accepted hypothesis that Venus has undergone an essentially complete volcanic resurfacing at least once in its distant past, with the last event taking place approximately within the range of estimated surface ages. While the mechanism of such a large-scale thermal event remains a debated issue in Venusian geosciences, some scientists are advocates of processes involving plate motion to some extent. There are almost 1,000 impact craters on Venus, more or less evenly distributed across its surface.
Earth-based radar surveys made it possible to identify some topographic patterns related to craters, and the Venera 15 and Venera 16 probes identified almost 150 such features of probable impact origin. Global coverage from Magellan subsequently made it possible to identify nearly 900 impact craters.
Crater counts give an important estimate for the age of the surface of a planet. Over time, bodies in the Solar System are randomly impacted, so the more craters a surface has, the older it is. Compared to Mercury, the Moon and other such bodies, Venus has very few craters. In part, this is because Venus's dense atmosphere burns up smaller meteorites before they hit the surface. The Venera and Magellan data agree: there are very few impact craters with a diameter less than 30 kilometres (19 mi), and data from Magellan show an absence of any craters less than 2 kilometres (1 mi) in diameter. However, there are also fewer of the large craters, and those appear relatively young; they are rarely filled with lava, showing that they happened after volcanic activity in the area, and radar shows that they are rough and have not had time to be eroded down.
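As a toy illustration of relative dating by crater counting (hypothetical counts and areas, not the calibrated crater chronology models planetary scientists actually use), the Python sketch below ranks surface units from oldest to youngest by crater density:
 # Relative dating by crater density: denser cratering implies an older surface.
 # Counts and areas below are made-up illustrative values, not survey data.
 surfaces = {
     "unit_A": {"craters": 120, "area_km2": 50_000},
     "unit_B": {"craters": 15, "area_km2": 40_000},
     "unit_C": {"craters": 300, "area_km2": 60_000},
 }
 densities = {name: s["craters"] / s["area_km2"] for name, s in surfaces.items()}
 for name in sorted(densities, key=densities.get, reverse=True):  # oldest first
     print(f"{name}: {densities[name]:.4f} craters per km^2")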
Much of Venus' surface appears to have been shaped by volcanic activity. Overall, Venus has several times as many volcanoes as Earth, and it possesses some 167 giant volcanoes that are over 100 kilometres (62 mi) across. The only volcanic complex of this size on Earth is the Big Island of Hawaii. However, this is not because Venus is more volcanically active than Earth, but because its crust is older. Earth's crust is continually recycled by subduction at the boundaries of tectonic plates, and has an average age of about 100 million years, while Venus' surface is estimated to be about 500 million years old.
Venusian craters range from 3 kilometres (2 mi) to 280 kilometres (174 mi) in diameter. There are no craters smaller than 3 km, because of the effects of the dense atmosphere on incoming objects. Objects with less than a certain kinetic energy are slowed down so much by the atmosphere that they do not create an impact crater.
=== Earth ===
The Earth's terrain varies greatly from place to place. About 70.8% of the surface is covered by water. The sea floor has mountainous features, including a globe-spanning mid-ocean ridge system, as well as undersea volcanoes, oceanic trenches, submarine canyons, oceanic plateaus, and abyssal plains. The remaining 29.2% not covered by water consists of mountains, deserts, plains, plateaus, and other geomorphologies.
The planetary surface undergoes reshaping over geological time periods due to the effects of tectonics and erosion. Surface features built up or deformed through plate tectonics are subject to steady weathering from precipitation, thermal cycles, and chemical effects. Glaciation, coastal erosion, the build-up of coral reefs, and large meteorite impacts also act to reshape the landscape.
As the continental plates migrate across the planet, the ocean floor is subducted under the leading edges. At the same time, upwellings of mantle material create a divergent boundary along mid-ocean ridges. The combination of these processes continually recycles the ocean plate material. Most of the ocean floor is less than 100 million years in age. The oldest ocean plate is located in the Western Pacific, and has an estimated age of about 200 million years. By comparison, the oldest fossils found on land have an age of about 3 billion years.
The continental plates consist of lower density material such as the igneous rocks granite and andesite. Less common is basalt, a denser volcanic rock that is the primary constituent of the ocean floors. Sedimentary rock is formed from the accumulation of sediment that becomes compacted together. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form only about 5% of the crust. The third form of rock material found on Earth is metamorphic rock, which is created from the transformation of pre-existing rock types through high pressures, high temperatures, or both. The most abundant silicate minerals on the Earth's surface include quartz, the feldspars, amphibole, mica, pyroxene, and olivine. Common carbonate minerals include calcite (found in limestone), aragonite, and dolomite.
The pedosphere is the outermost layer of the Earth that is composed of soil and subject to soil formation processes. It exists at the interface of the lithosphere, atmosphere, hydrosphere, and biosphere. Currently the total arable land is 13.31% of the land surface, with only 4.71% supporting permanent crops. Close to 40% of the Earth's land surface is presently used for cropland and pasture, or an estimated 13 million square kilometres (5.0 million square miles) of cropland and 34 million square kilometres (13 million square miles) of pastureland.
The physical features of land are remarkably varied. The largest mountain ranges—the Himalayas in Asia and the Andes in South America—extend for thousands of kilometres. The longest rivers are the river Nile in Africa (6,695 kilometres or 4,160 miles) and the Amazon river in South America (6,437 kilometres or 4,000 miles). Deserts cover about 20% of the total land area. The largest is the Sahara, which covers nearly one-third of Africa.
The elevation of the land surface of the Earth varies from the low point of −418 m (−1,371 ft) at the Dead Sea, to a 2005-estimated maximum altitude of 8,848 m (29,028 ft) at the top of Mount Everest. The mean height of land above sea level is 686 m (2,250 ft).
The geological history of Earth can be broadly classified into two periods, namely:
Precambrian: extends for approximately 90% of geologic time, from 4.6 billion years ago to the beginning of the Cambrian Period (539 Ma). It is generally believed that small proto-continents existed prior to 3000 Ma, and that most of the Earth's landmasses collected into a single supercontinent around 1000 Ma.
Phanerozoic: the current eon in the geologic timescale. It covers 539 million years. During this time, continents drifted about, eventually collecting into a single landmass known as Pangaea and then splitting up into the current continental landmasses.
=== Mars ===
The surface of Mars is thought to be primarily composed of basalt, based upon the observed lava flows from volcanoes, the Martian meteorite collection, and data from landers and orbital observations. The lava flows from Martian volcanoes show that the lava has a very low viscosity, typical of basalt.
Analysis of the soil samples collected by the Viking landers in 1976 indicates iron-rich clays consistent with weathering of basaltic rocks. There is some evidence that some portion of the Martian surface might be more silica-rich than typical basalt, perhaps similar to andesitic rocks on Earth, though these observations may also be explained by silica glass, phyllosilicates, or opal. Much of the surface is deeply covered by dust as fine as talcum powder. The red/orange appearance of Mars' surface is caused by iron(III) oxide (rust). Mars has twice as much iron oxide in its outer layer as Earth does, despite their supposed similar origin. It is thought that Earth, being hotter, transported much of the iron downwards in the 1,800 kilometres (1,118 mi) deep, 3,200 °C (5,792 °F), lava seas of the early planet, while Mars, with a lower lava temperature of 2,200 °C (3,992 °F), was too cool for this to happen.
The core is surrounded by a silicate mantle that formed many of the tectonic and volcanic features on the planet. The average thickness of the planet's crust is about 50 km, and it is no thicker than 125 kilometres (78 mi), which is much thicker than Earth's crust which varies between 5 kilometres (3 mi) and 70 kilometres (43 mi). As a result, Mars' crust does not easily deform, as was shown by the recent radar map of the south polar ice cap which does not deform the crust despite being about 3 km thick.
Crater morphology provides information about the physical structure and composition of the surface. Impact craters allow us to look deep below the surface and into Mars's geological past. Lobate ejecta blankets and central pit craters are common on Mars but uncommon on the Moon, which may indicate the presence of near-surface volatiles (ice and water) on Mars. Degraded impact structures record variations in volcanic, fluvial, and aeolian activity.
The Yuty crater is an example of a rampart crater, so called because of the rampart-like edge of its ejecta. In the Yuty crater the ejecta completely covers an older crater at its side, showing that the ejected material is just a thin layer.
The geological history of Mars can be broadly classified into many epochs, but the following are the three major ones:
Noachian epoch (named after Noachis Terra): Formation of the oldest extant surfaces of Mars, 3.8 billion years ago to 3.5 billion years ago. Noachian age surfaces are scarred by many large impact craters. The Tharsis bulge volcanic upland is thought to have formed during this period, with extensive flooding by liquid water late in the epoch.
Hesperian epoch (named after Hesperia Planum): 3.5 billion years ago to 1.8 billion years ago. The Hesperian epoch is marked by the formation of extensive lava plains.
Amazonian epoch (named after Amazonis Planitia): 1.8 billion years ago to present. Amazonian regions have few meteorite impact craters but are otherwise quite varied. Olympus Mons, the largest known volcano in the Solar System, formed during this period, along with lava flows elsewhere on Mars.
=== Ceres ===
The geology of the dwarf planet Ceres was largely unknown until the Dawn spacecraft explored it in early 2015. However, certain surface features such as "Piazzi", named after the dwarf planet's discoverer, had previously been resolved. Ceres's oblateness is consistent with a differentiated body, a rocky core overlain with an icy mantle. This 100-kilometer-thick mantle (23%–28% of Ceres by mass; 50% by volume) contains 200 million cubic kilometers of water, which is more than the amount of fresh water on Earth. This result is supported by the observations made by the Keck telescope in 2002 and by evolutionary modeling. Also, some characteristics of its surface and history (such as its distance from the Sun, which weakened solar radiation enough to allow some fairly low-freezing-point components to be incorporated during its formation) point to the presence of volatile materials in the interior of Ceres. It has been suggested that a remnant layer of liquid water may have survived to the present under a layer of ice.
The surface composition of Ceres is broadly similar to that of C-type asteroids. Some differences do exist. The ubiquitous features of the Cererian IR spectra are those of hydrated materials, which indicate the presence of significant amounts of water in the interior. Other possible surface constituents include iron-rich clay minerals (cronstedtite) and carbonate minerals (dolomite and siderite), which are common minerals in carbonaceous chondrite meteorites. The spectral features of carbonates and clay minerals are usually absent in the spectra of other C-type asteroids. Sometimes Ceres is classified as a G-type asteroid.
The Cererian surface is relatively warm. The maximum temperature with the Sun overhead was estimated from measurements to be 235 K (about −38 °C, −36 °F) on 5 May 1991.
Prior to the Dawn mission, only a few Cererian surface features had been unambiguously detected. High-resolution ultraviolet Hubble Space Telescope images taken in 1995 showed a dark spot on its surface, which was nicknamed "Piazzi" in honor of the discoverer of Ceres. This was thought to be a crater. Later near-infrared images with a higher resolution taken over a whole rotation with the Keck telescope using adaptive optics showed several bright and dark features moving with Ceres's rotation. Two dark features had circular shapes and are presumably craters; one of them was observed to have a bright central region, whereas another was identified as the "Piazzi" feature. More recent visible-light Hubble Space Telescope images of a full rotation taken in 2003 and 2004 showed 11 recognizable surface features, the natures of which are currently unknown. One of these features corresponds to the "Piazzi" feature observed earlier.
These last observations also determined that the north pole of Ceres points in the direction of right ascension 19 h 24 min (291°), declination +59°, in the constellation Draco. This means that Ceres's axial tilt is very small—about 3°.
==== Atmosphere ====
There are indications that Ceres may have a tenuous atmosphere and water frost on the surface. Surface water ice is unstable at distances less than 5 AU from the Sun, so it is expected to vaporize if it is exposed directly to solar radiation. Water ice can migrate from the deep layers of Ceres to the surface, but escapes in a very short time. As a result, it is difficult to detect water vaporization. Water escaping from polar regions of Ceres was possibly observed in the early 1990s but this has not been unambiguously demonstrated. It may be possible to detect escaping water from the surroundings of a fresh impact crater or from cracks in the subsurface layers of Ceres. Ultraviolet observations by the IUE spacecraft detected statistically significant amounts of hydroxide ions near the Cererean north pole, which is a product of water-vapor dissociation by ultraviolet solar radiation.
In early 2014, using data from the Herschel Space Observatory, it was discovered that there are several localized (not more than 60 km in diameter) mid-latitude sources of water vapor on Ceres, which each give off about 10^26 molecules (or 3 kg) of water per second. Two potential source regions, designated Piazzi (123°E, 21°N) and Region A (231°E, 23°N), have been visualized in the near infrared as dark areas (Region A also has a bright center) by the W. M. Keck Observatory. Possible mechanisms for the vapor release are sublimation from about 0.6 km^2 of exposed surface ice, or cryovolcanic eruptions resulting from radiogenic internal heat or from pressurization of a subsurface ocean due to growth of an overlying layer of ice. Surface sublimation would be expected to decline as Ceres recedes from the Sun in its eccentric orbit, whereas internally powered emissions should not be affected by orbital position. The limited data available are more consistent with cometary-style sublimation. The spacecraft Dawn is approaching Ceres at aphelion, which may constrain Dawn's ability to observe this phenomenon.
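As a quick consistency check on those figures (using only standard physical constants, not data from the Herschel study), converting 10^26 water molecules per second into a mass rate reproduces the quoted ~3 kg per second:
 # Convert the quoted molecular emission rate into a mass rate for water.
 AVOGADRO = 6.022e23      # molecules per mole
 MOLAR_MASS_H2O = 18.015  # grams per mole
 rate_molecules = 1e26    # H2O molecules per second (figure quoted above)
 rate_kg = rate_molecules / AVOGADRO * MOLAR_MASS_H2O / 1000.0
 print(f"{rate_kg:.1f} kg of water per second")  # about 3.0 kg/s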
Note: This info was taken directly from the main article, sources for the material are included there.
== Small Solar System bodies ==
Asteroids, comets, and meteoroids are all debris remaining from the nebula in which the Solar System formed 4.6 billion years ago.
=== Asteroid belt ===
The asteroid belt is located between Mars and Jupiter. It is made of thousands of rocky planetesimals from 1,000 kilometres (621 mi) to a few meters across. These are thought to be debris of the formation of the Solar System that could not form a planet due to Jupiter's gravity. When asteroids collide they produce small fragments that occasionally fall on Earth. These rocks are called meteorites and provide information about the primordial solar nebula. Most of these fragments have the size of sand grains. They burn up in the Earth's atmosphere, causing them to glow like meteors.
=== Comets ===
A comet is a small Solar System body that orbits the Sun and (at least occasionally) exhibits a coma (or atmosphere) and/or a tail—both primarily from the effects of solar radiation upon the comet's nucleus, which itself is a minor body composed of rock, dust, and ice.
=== Kuiper belt ===
The Kuiper belt, sometimes called the Edgeworth–Kuiper belt, is a region of the Solar System beyond the planets extending from the orbit of Neptune (at 30 AU) to approximately 55 AU from the Sun. It is similar to the asteroid belt, although it is far larger; 20 times as wide and 20–200 times as massive. Like the asteroid belt, it consists mainly of small bodies (remnants from the Solar System's formation) and at least one dwarf planet—Pluto, which may be geologically active. But while the asteroid belt is composed primarily of rock and metal, the Kuiper belt is composed largely of ices, such as methane, ammonia, and water. The objects within the Kuiper belt, together with the members of the scattered disc and any potential Hills cloud or Oort cloud objects, are collectively referred to as trans-Neptunian objects (TNOs). Two TNOs have been visited and studied at close range, Pluto and 486958 Arrokoth.
== See also ==
Lunar soil
Martian soil
Water on terrestrial planets
== References ==
== External links ==
International Astronomical Union
Solar System Live (an interactive orrery)
Solar System Viewer (animation)
Pictures of the Solar System Archived 2008-02-16 at the Wayback Machine
Renderings of the planets
NASA Planet Quest
Illustration comparing the sizes of the planets with each other, the sun, and other stars
Q&A: The IAU's Proposed Planet Definition
Q&A New planets proposal
Solar system – About Space
Atlas of Mercury – NASA
Nine Planets Information
NASA's fact sheet
Planetary Science Research Discoveries | Wikipedia/Geology_of_solar_terrestrial_planets |
Earth sciences graphics software is a plotting and image processing software used in atmospheric sciences, meteorology, climatology, oceanography and other Earth science disciplines.
Earth Sciences graphics software includes the capability to read specialized data formats such as netCDF, HDF and GRIB. Such software is sometimes able to access data from remote data centers. Examples of applications include satellite data processing, analysis of output from complex meteorological models and display of time series of data. Graphics capabilities range from simple line plots to complex three-dimensional visualizations.
This type of graphics software is often used to display results from earth sciences numerical models.
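A minimal sketch of the kind of task such software performs is shown below; it assumes a hypothetical local NetCDF file named air_temperature.nc containing a variable t2m on a latitude/longitude grid, and uses the widely used netCDF4 and matplotlib Python libraries:
 from netCDF4 import Dataset
 import matplotlib.pyplot as plt
 # Open a hypothetical NetCDF file and read one 2-D field from it.
 ds = Dataset("air_temperature.nc")    # file name is an assumption
 t2m = ds.variables["t2m"][0, :, :]    # first time step of an assumed lat/lon grid
 lat = ds.variables["lat"][:]
 lon = ds.variables["lon"][:]
 # Simple filled-contour map of the field.
 plt.contourf(lon, lat, t2m)
 plt.colorbar(label="2 m air temperature")
 plt.xlabel("Longitude")
 plt.ylabel("Latitude")
 plt.show()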
== External links ==
List of many graphical packages which use NetCDF to provide a glimpse of graphical packages used in Earth Sciences. | Wikipedia/Earth_sciences_graphics_software |
Nature-based solutions (or nature-based systems, and abbreviated as NBS or NbS) describe the development and use of nature (biodiversity) and natural processes to address diverse socio-environmental issues. These issues include climate change mitigation and adaptation, human security issues such as water security and food security, and disaster risk reduction. The aim is that resilient ecosystems (whether natural, managed, or newly created) provide solutions for the benefit of both societies and biodiversity. The 2019 UN Climate Action Summit highlighted nature-based solutions as an effective method to combat climate change. For example, nature-based systems for climate change adaptation can include natural flood management, restoring natural coastal defences, and providing local cooling.: 310
The concept of NBS is related to the concepts of ecological engineering and ecosystem-based adaptation.: 284 NBS are also conceptually related to the practice of ecological restoration. The sustainable management approach is a key aspect of NBS development and implementation.
Mangrove restoration efforts along coastlines provide an example of a nature-based solution that can achieve multiple goals. Mangroves moderate the impact of waves and wind on coastal settlements or cities, and they sequester carbon. They also provide nursery zones for marine life which is important for sustaining fisheries. Additionally, mangrove forests can help to control coastal erosion resulting from sea level rise.
Green roofs, blue roofs and green walls (as part of green infrastructure) are also nature-based solutions that can be implemented in urban areas. They can reduce the effects of urban heat islands, capture stormwater, abate pollution, and act as carbon sinks. At the same time, they can enhance local biodiversity.
NBS systems and solutions are forming an increasing part of national and international policies on climate change. They are included in climate change policy, infrastructure investment, and climate finance mechanisms. The European Commission has paid increasing attention to NBS since 2013. This is reflected in the majority of global NBS case studies reviewed by Debele et al. (2023) being located in Europe. While there is much scope for scaling up nature-based systems and solutions globally, they frequently encounter numerous challenges during planning and implementation.
The IPCC pointed out that the term is "the subject of ongoing debate, with concerns that it may lead to the misunderstanding that NbS on its own can provide a global solution to climate change".: 24 To clarify this point further, the IPCC also stated that "nature-based systems cannot be regarded as an alternative to, or a reason to delay, deep cuts in GHG emissions".: 203
== Definition ==
The International Union for Conservation of Nature (IUCN) defines NBS as "actions to protect, sustainably manage, and restore natural or modified ecosystems, that address societal challenges effectively and adaptively, simultaneously providing human well-being and biodiversity benefits". Societal challenges of relevance here include climate change, food security, disaster risk reduction, water security.
In other words: "Nature-based solutions are interventions that use the natural functions of healthy ecosystems to protect the environment but also provide numerous economic and social benefits.": 1403 They are used both in the context of climate change mitigation as well as adaptation.: 469
The European Commission's definition of NBS states that these solutions are "inspired and supported by nature, which are cost-effective, simultaneously provide environmental, social and economic benefits and help build resilience. Such solutions bring more, and more diverse, nature and natural features and processes into cities, landscapes, and seascapes, through locally adapted, resource-efficient and systemic interventions". In 2020, the EC definition was updated to further emphasise that "Nature-based solutions must benefit biodiversity and support the delivery of a range of ecosystem services."
The IPCC Sixth Assessment Report pointed out that the term nature-based solutions is "widely but not universally used in the scientific literature".: 24 As of 2017, the term NBS was still regarded as "poorly defined and vague".
The term ecosystem-based adaptation (EbA) is a subset of nature-based solutions and "aims to maintain and increase the resilience and reduce the vulnerability of ecosystems and people in the face of the adverse effects of climate change".: 284
=== History of the term ===
The term nature-based solutions was put forward by practitioners in the late 2000s. At that time it was used by international organisations such as the International Union for Conservation of Nature and the World Bank in the context of finding new solutions to mitigate and adapt to climate change effects by working with natural ecosystems rather than relying purely on engineering interventions.: 3
Many indigenous peoples have recognised the natural environment as playing an important role in human well-being as part of their traditional knowledge systems, but this idea did not enter into modern scientific literature until the 1970s with the concept of ecosystem services.: 2
The IUCN referred to NBS in a position paper for the United Nations Framework Convention on Climate Change. The term was also adopted by European policymakers, in particular by the European Commission, in a report stressing that NBS can offer innovative means to create jobs and growth as part of a green economy. The term started to make appearances in the mainstream media around the time of the Global Climate Action Summit in California in September 2018.
== Objectives and framing ==
Nature-based solutions stress the sustainable use of nature in solving coupled environmental-social-economic challenges. NBS go beyond traditional biodiversity conservation and management principles by "re-focusing" the debate on humans and specifically integrating societal factors such as human well-being and poverty reduction, socio-economic development, and governance principles.
The general objective of NBS is clear, namely the sustainable management and use of Nature for tackling societal challenges. However, different stakeholders view NBS from a variety of perspectives. For instance, the IUCN puts the need for well-managed and restored ecosystems at the heart of NBS, with the overarching goal of "Supporting the achievement of society's development goals and safeguard human well-being in ways that reflect cultural and societal values and enhance the resilience of ecosystems, their capacity for renewal and the provision of services".
The European Commission underlines that NBS can transform environmental and societal challenges into innovation opportunities, by turning natural capital into a source for green growth and sustainable development. Within this viewpoint, nature-based solutions to societal challenges "bring more, and more diverse, nature and natural features and processes into cities, landscapes and seascapes, through locally adapted, resource-efficient and systemic interventions". As a result, NBS has been suggested as a means of implementing the nature-positive goal to halt and reverse nature loss by 2030, and achieve full nature recovery by 2050.
== Categories ==
The IUCN proposes to consider NBS as an umbrella concept. Categories and examples of NBS approaches according to the IUCN include:
== Types ==
Scientists have proposed a typology to characterise NBS along two gradients:
"How much engineering of biodiversity and ecosystems is involved in NBS", and
"How many ecosystem services and stakeholder groups are targeted by a given NBS".
The typology highlights that NBS can involve very different actions on ecosystems (from protection, to management, or even the creation of new ecosystems) and is based on the assumption that the higher the number of services and stakeholder groups targeted, the lower the capacity to maximise the delivery of each service and simultaneously fulfil the specific needs of all stakeholder groups.
As such, three types of NBS are distinguished (hybrid solutions exist along this gradient in both space and time; for instance, at a landscape scale, mixing protected and managed areas could be required to fulfill multi-functionality and sustainability goals):
=== Type 1 – Minimal intervention in ecosystems ===
Type 1 consists of no or minimal intervention in ecosystems, with the objectives of maintaining or improving the delivery of a range of ecosystem services both inside and outside of these conserved ecosystems. Examples include the protection of mangroves in coastal areas to limit risks associated with extreme weather conditions; and the establishment of marine protected areas to conserve biodiversity within these areas while exporting fish and other biomass into fishing grounds. This type of NBS is connected to, for example, the concept of biosphere reserves.
=== Type 2 – Some interventions in ecosystems and landscapes ===
Type 2 corresponds to management approaches that develop sustainable and multifunctional ecosystems and landscapes (extensively or intensively managed). These types improve the delivery of selected ecosystem services compared to what would be obtained through a more conventional intervention. Examples include innovative planning of agricultural landscapes to increase their multi-functionality; using existing agrobiodiversity to increase biodiversity, connectivity, and resilience in landscapes; and approaches for enhancing tree species and genetic diversity to increase forest resilience to extreme events. This type of NBS is strongly connected to concepts like agroforestry.
=== Type 3 – Managing ecosystems in extensive ways ===
Type 3 consists of managing ecosystems in very extensive ways or even creating new ecosystems (e.g., artificial ecosystems with new assemblages of organisms for green roofs and walls to mitigate city warming and clean polluted air). Type 3 is linked to concepts like green and blue infrastructures and objectives like restoration of heavily degraded or polluted areas and greening cities. Constructed wetlands are one example for a Type 3 NBS.
== Applications ==
=== Climate change mitigation and adaptation ===
The 2019 UN Climate Action Summit highlighted nature-based solutions as an effective method to combat climate change. For example, NBS in the context of climate action can include natural flood management, restoring natural coastal defences, providing local cooling, restoring natural fire regimes.: 310
The Paris Agreement calls on all Parties to recognise the role of natural ecosystems in providing services such as that of carbon sinks. Article 5.2 encourages Parties to adopt conservation and management as a tool for increasing carbon stocks, and Article 7.1 encourages Parties to build the resilience of socioeconomic and ecological systems through economic diversification and sustainable management of natural resources. The Agreement refers to nature (ecosystems, natural resources, forests) in 13 distinct places. An in-depth analysis of all Nationally Determined Contributions submitted to the UNFCCC revealed that around 130 NDCs, or 65% of signatories, commit to nature-based solutions in their climate pledges. This suggests a broad consensus for the role of nature in helping to meet climate change goals. However, high-level commitments rarely translate into robust, measurable actions on the ground.
A global systematic map of evidence was produced to determine and illustrate the effectiveness of NBS for climate change adaptation. After sorting through 386 case studies with computer programs, the study found that NBS were as effective as, if not more effective than, traditional or alternative flood management strategies. 66% of cases evaluated reported positive ecological outcomes, 24% did not identify a change in ecological conditions, and less than 1% reported negative impacts. Furthermore, NBS always had better social and climate change mitigation impacts.
In the 2019 UN Climate Action Summit, nature-based solutions were one of the main topics covered, and were discussed as an effective method to combat climate change. A "Nature-Based Solution Coalition" was created, including dozens of countries, led by China and New Zealand.
=== Urban areas ===
Since around 2017, many studies have proposed ways of planning and implementing nature-based solutions in urban areas.
It is crucial that grey infrastructure continues to be used alongside green infrastructure. Multiple studies recognise that while NBS is very effective and improves flood resilience, it cannot act alone and must be coordinated with grey infrastructure. Using green infrastructure alone or grey infrastructure alone is less effective than using the two together. When NBS is used alongside grey infrastructure, the benefits go beyond flood management and include improved social conditions, increased carbon sequestration, and better preparation of cities for resilience planning.
In the 1970s a popular approach in the U.S. was that of Best Management Practices (BMP) for using nature as a model for infrastructure and development while the UK had a model for flood management called "sustainable drainage systems". Another framework called "Water Sensitive Urban Design" (WSUD) came out of Australia in the 1990s while Low Impact Development (LID) came out of the U.S. Eventually New Zealand reframed LID to create "Low Impact Urban Design and Development" (LIUDD) with a focus on using diverse stakeholders as a foundation. Then in the 2000s the western hemisphere largely adopted "Green Infrastructure" for stormwater management as well as enhancing social, economic and environmental conditions for sustainability.
In a Chinese national government program, the Sponge Cities Program, planners are using green-grey infrastructure in 30 Chinese cities as a way to manage pluvial flooding and climate change risk after rapid urbanization.
=== Water management aspects ===
With respect to water issues, NBS can achieve the following:
Use natural processes to enhance water availability (e.g., soil moisture retention, groundwater recharge),
Improve water quality (e.g., natural wetlands and constructed wetlands to treat wastewater; riparian buffer strips), and
Reduce risks associated with water‐related disasters and climate change (e.g., floodplain restoration, green roofs).
The UN has also tried to promote a shift in perspective towards NBS: the theme for World Water Day 2018 was "Nature for Water", while UN-Water's accompanying UN World Water Development Report was titled "Nature-based Solutions for Water".
For example, the Lancaster Environment Centre has implemented catchments at different scales on flood basins in conjunction with modelling software that allows observers to calculate the factor by which the floodplain expanded during two storm events. The idea is to divert higher flood flows into expandable areas of storage in the landscape.
=== Forest restoration for multiple benefits ===
Forest restoration can benefit both biodiversity and human livelihoods (e.g. providing food, timber and medicinal products). Diverse, native tree species are also more likely to be resilient to climate change than plantation forests. Agricultural expansion has been the main driver of deforestation globally. Forest loss has been estimated at around 4.7 million ha per year in 2010–2020. Over the same period, Asia had the highest net gain of forest area, followed by Oceania and Europe. Forest restoration, as part of national development strategies, can help countries achieve sustainable development goals. For example, in Rwanda, the Rwanda Natural Resources Authority, World Resources Institute and IUCN began a program in 2015 for forest landscape restoration as a national priority. NBS approaches used were ecological restoration and ecosystem-based mitigation, and the program was meant to address the following societal issues: food security, water security, and disaster risk reduction.: 50 The Great Green Wall, a joint campaign among African countries to combat desertification, was launched in 2007.
== Implementation ==
=== Guidance for effective implementation ===
A number of studies and reports have proposed principles and frameworks to guide effective and appropriate implementation.: 5 One primary principle, for example, is that NBS seek to embrace, rather than replace, nature conservation norms. NBS can be implemented alone or in an integrated manner along with other solutions to societal challenges (e.g. technological and engineering solutions) and are applied at the landscape scale.
Researchers have pointed out that "instead of framing NBS as an alternative to engineered approaches, we should focus on finding synergies among different solutions".
The concept of NBS is gaining acceptance outside the conservation community (e.g. urban planning) and is now on its way to be mainstreamed into policies and programmes (climate change policy, law, infrastructure investment, and financing mechanisms), although NBS still face many implementation barriers and challenges.
Multiple case studies have demonstrated that NBS can be more economically viable than traditional technological infrastructures.
Implementation of NBS requires measures like adaptation of economic subsidy schemes, and the creation of opportunities for conservation finance, to name a few.
=== Using geographic information systems (GIS) ===
NBS are also determined by site-specific natural and cultural contexts that include traditional, local and scientific knowledge. Geographic information systems (GIS) can be used as an analysis tool to determine sites that may succeed as NBS. GIS can function in such a way that site conditions including slope gradients, water bodies, land use and soils are taken into account in analyzing for suitability. The resulting maps are often used in conjunction with historic flood maps to determine the potential of floodwater storage capacity on specific sites using 3D modeling tools.
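A minimal sketch of this kind of raster overlay analysis is given below, with synthetic arrays and a hypothetical suitability rule standing in for real slope, land-use, and flood-extent layers (a real GIS workflow would load these from georeferenced rasters):
 import numpy as np
 # Synthetic 4x4 rasters standing in for co-registered GIS layers.
 slope_pct = np.array([[2, 5, 12, 30],
                       [1, 3, 8, 25],
                       [0, 2, 6, 18],
                       [1, 1, 4, 10]], dtype=float)
 land_use = np.array([["grass", "grass", "crop", "urban"],
                      ["grass", "crop", "crop", "urban"],
                      ["wetland", "grass", "crop", "crop"],
                      ["wetland", "wetland", "grass", "crop"]])
 historic_flood = np.array([[1, 1, 0, 0],
                            [1, 1, 1, 0],
                            [1, 1, 1, 0],
                            [1, 1, 1, 1]], dtype=bool)
 # Hypothetical rule: gentle slope, non-urban land, inside the historic floodplain.
 suitable = (slope_pct < 5) & (land_use != "urban") & historic_flood
 print(suitable)
 print(f"Suitable cells: {suitable.sum()} of {suitable.size}")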
=== Projects supported by the European Union ===
Since 2016, the EU has supported a multi-stakeholder dialogue platform (ThinkNature) to promote the co-design, testing, and deployment of improved and innovative NBS in an integrated way. The creation of such science-policy-business-society interfaces could promote market uptake of NBS. The project was part of the EU’s Horizon 2020 Research and Innovation programme, and ran for 3 years.
In 2017, as part of the Presidency of the Estonian Republic of the Council of the European Union, a conference called "Nature-based Solutions: From Innovation to Common-use" was organised by the Ministry of the Environment of Estonia and the University of Tallinn. This conference aimed to strengthen synergies among various recent initiatives and programs related to NBS, focusing on policy and governance of NBS, research, and innovation.
== Concerns ==
The Indigenous Environmental Network has stated that "Nature-based solutions (NBS) is a greenwashing tool that does not address the root causes of climate change." and "The legacy of colonial power continues through nature-based solutions." For example, NBS activities can involve converting non-forest land into forest plantations (for climate change mitigation) but this carries risks of climate injustice through taking land away from smallholders and pastoralists.: 163
However, the IPCC pointed out that the term is "the subject of ongoing debate, with concerns that it may lead to the misunderstanding that NbS on its own can provide a global solution to climate change".: 24 To clarify this point further, the IPCC also stated that "nature-based systems cannot be regarded as an alternative to, or a reason to delay, deep cuts in GHG emissions".: 203
The majority of case studies and examples of NBS are from the Global North, resulting in a lack of data for many medium- and low-income nations. Consequently, many ecosystems and climates are excluded from existing studies as well as cost analyses in these locations. Further research needs to be conducted in the Global South to determine the efficacy of NBS on climate, social and ecological standards.
== Related concepts ==
NBS is closely related to concepts like ecosystem approaches and ecological engineering. This includes concepts such as ecosystem-based adaptation: 284 and green infrastructure.
For instance, ecosystem-based approaches are increasingly promoted for climate change adaptation and mitigation by organisations like the United Nations Environment Programme and non-governmental organisations such as The Nature Conservancy. These organisations refer to "policies and measures that take into account the role of ecosystem services in reducing the vulnerability of society to climate change, in a multi-sectoral and multi-scale approach".
== See also ==
Forest restoration – Actions to reinstate forest health
Sustainability – Societal goal and normative concept
Sustainable architecture – Architecture designed to minimize environmental impact
Sustainable development – Mode of human development
Tree planting – Process of transplanting tree seedlings
Urban forestry – Land use management system in which trees or shrubs are cared or protected for well-being
== References ==
== External links ==
Nature-based solutions in the context of climate change:
Nature-based Solutions Initiative - interdisciplinary programme of research, education and policy advice based in the Departments of Biology and Geography at the University of Oxford
An Introduction to Nature-based Solutions (by weADAPT)
Shortfilm by Greta Thunberg and George Monbiot: Nature Now 2020
Q&A: Can ‘nature-based solutions’ help address climate change? by CarbonBrief. 2021.
Nature-based solutions in other contexts:
Sustainable cities: Nature-based solutions in urban design (The Nature Conservancy): https://vimeo.com/155849692
Video: Think Nature: A guide to using nature-based solutions (IUCN) | Wikipedia/Nature-based_solutions |
A transform fault, or transform boundary, is a fault along a plate boundary where the motion is predominantly horizontal. It ends abruptly where it connects to another plate boundary, either another transform, a spreading ridge, or a subduction zone. A transform fault is a special case of a strike-slip fault that also forms a plate boundary.
Most such faults are found in oceanic crust, where they accommodate the lateral offset between segments of divergent boundaries, forming a zigzag pattern. This results from oblique seafloor spreading where the direction of motion is not perpendicular to the trend of the overall divergent boundary. A smaller number of such faults are found on land, although these are generally better-known, such as the San Andreas Fault and North Anatolian Fault.
== Nomenclature ==
Transform boundaries are also known as conservative plate boundaries because they involve no addition or loss of lithosphere at the Earth's surface.
== Background ==
Geophysicist and geologist John Tuzo Wilson recognized that the offsets of oceanic ridges by faults do not follow the classical pattern of an offset fence or geological marker in Reid's rebound theory of faulting, from which the sense of slip is derived. The new class of faults, called transform faults, produce slip in the opposite direction from what one would surmise from the standard interpretation of an offset geological feature. Slip along transform faults does not increase the distance between the ridges it separates; the distance remains constant through earthquakes because the ridges are spreading centers. This hypothesis was confirmed in a study of fault plane solutions, which showed that slip on transform faults points in the opposite direction from what the classical interpretation would suggest.
== Difference between transform and transcurrent faults ==
Transform faults are closely related to transcurrent faults and are commonly confused. Both types of fault are strike-slip or side-to-side in movement; nevertheless, transform faults always end at a junction with another plate boundary, while transcurrent faults may die out without a junction with another fault. Finally, transform faults form a tectonic plate boundary, while transcurrent faults do not.
== Mechanics ==
Faults in general are focused areas of deformation or strain, which are the response of built-up stresses in the form of compression, tension, or shear stress in rock at the surface or deep in the Earth's subsurface. Transform faults specifically accommodate lateral strain by transferring displacement between mid-ocean ridges or subduction zones. They also act as the plane of weakness, which may result in splitting in rift zones.
== Transform faults and divergent boundaries ==
Transform faults are commonly found linking segments of divergent boundaries (mid-oceanic ridges or spreading centres). These mid-oceanic ridges are where new seafloor is constantly created through the upwelling of new basaltic magma. With new seafloor being pushed and pulled out, the older seafloor slowly slides away from the mid-oceanic ridges toward the continents. Although separated only by tens of kilometers, this separation between segments of the ridges causes portions of the seafloor to push past each other in opposing directions. This lateral movement of seafloors past each other is where transform faults are currently active.
Transform faults move differently from a strike-slip fault at the mid-oceanic ridge. Instead of the ridges moving away from each other, as they do in other strike-slip faults, transform-fault ridges remain in the same, fixed locations, and the new ocean seafloor created at the ridges is pushed away from the ridge. Evidence of this motion can be found in paleomagnetic striping on the seafloor.
A paper written by geophysicist Taras Gerya theorizes that the creation of the transform faults between the ridges of the mid-oceanic ridge is attributed to rotated and stretched sections of the mid-oceanic ridge. This occurs over a long period of time with the spreading center or ridge slowly deforming from a straight line to a curved line. Finally, fracturing along these planes forms transform faults. As this takes place, the fault changes from a normal fault with extensional stress to a strike-slip fault with lateral stress. In the study done by Bonatti and Crane, peridotite and gabbro rocks were discovered in the edges of the transform ridges. These rocks are created deep inside the Earth's mantle and then rapidly exhumed to the surface. This evidence helps to prove that new seafloor is being created at the mid-oceanic ridges and further supports the theory of plate tectonics.
Active transform faults are between two tectonic structures or faults. Fracture zones represent the previously active transform-fault lines, which have since passed the active transform zone and are being pushed toward the continents. These elevated ridges on the ocean floor can be traced for hundreds of miles and in some cases even from one continent across an ocean to the other continent.
== Types ==
In his work on transform-fault systems, geologist Tuzo Wilson said that transform faults must be connected to other faults or tectonic-plate boundaries on both ends; because of that requirement, transform faults can grow in length, keep a constant length, or decrease in length. These length changes are dependent on which type of fault or tectonic structure connect with the transform fault. Wilson described six types of transform faults:
Growing length: In situations where a transform fault links a spreading center and the upper block of a subduction zone or where two upper blocks of subduction zones are linked, the transform fault itself will grow in length.
Constant length: In other cases, transform faults will remain at a constant length. This steadiness can be attributed to many different causes. In the case of ridge-to-ridge transforms, the constancy is caused by the continuous outward growth of both ridges, canceling any change in length. The opposite occurs when a ridge is linked to a subducting plate, where all the lithosphere (new seafloor) being created by the ridge is subducted, or swallowed up, by the subduction zone. Finally, when two upper subduction plates are linked there is no change in length. This is because the plates move parallel with each other and no new lithosphere is being created to change that length.
Decreasing length: In rare cases, transform faults can shrink in length. This occurs when two descending subduction plates are linked by a transform fault. Over time, as the plates are subducted, the transform fault decreases in length until it disappears completely, leaving only two subduction zones facing in opposite directions.
== Examples ==
The most prominent examples of the mid-oceanic ridge transform zones are in the Atlantic Ocean between South America and Africa. Known as the St. Paul, Romanche, Chain, and Ascension fracture zones, these areas have deep, easily identifiable transform faults and ridges. Other locations include the East Pacific Rise in the southeastern Pacific Ocean, which meets the San Andreas Fault to the north.
Transform faults are not limited to oceanic crust and spreading centers; many of them are on continental margins. The best example is the San Andreas Fault on the Pacific coast of the United States. The San Andreas Fault links the East Pacific Rise off the west coast of Mexico (Gulf of California) to the Mendocino triple junction (part of the Juan de Fuca plate) off the coast of the northwestern United States, making it a ridge-to-transform-style fault. The formation of the San Andreas Fault system occurred fairly recently, during the Oligocene Epoch, between 34 million and 24 million years ago. During this period, the Farallon plate, followed by the Pacific plate, collided with the North American plate. The collision led to the subduction of the Farallon plate underneath the North American plate. Once the spreading center separating the Pacific and the Farallon plates was subducted beneath the North American plate, the San Andreas Continental Transform-Fault system was created.
In New Zealand, the South Island's Alpine Fault is a transform fault for much of its length. This has resulted in the folded land of the Southland Syncline being split into an eastern and western section several hundred kilometres apart. The majority of the syncline is found in Southland and The Catlins in the island's southeast, but a smaller section is also present in the Tasman District in the island's northwest.
Another example is the Húsavík‐Flatey fault. This oceanic transform fault is nearly completely submerged, but ~10 km is exposed in northern Iceland, near the town of Húsavík. There, it manifests as a series of half-grabens and sharp fault scarps. Since oceanic transform faults are often difficult to research because of their submerged nature, this fault represents a rare opportunity for research. Scientists investigated Holocene earthquake activity by examining cross sections of the fault, and found the approximate earthquake recurrence interval in the region to be 600 ± 200 years.
Other examples include:
Middle East's Dead Sea Transform Fault
Pakistan's Chaman Fault
Turkey's North Anatolian Fault
North America's Queen Charlotte Fault
Myanmar's Sagaing Fault
== See also ==
Fracture zone – Linear feature on the ocean floor
Leaky transform fault – Transform fault producing new crust
List of tectonic plate interactions – Movements of Earth's lithosphere
Plate tectonics – Movement of Earth's lithosphere
Strike-slip tectonics – Deformation dominated by horizontal movement in Earth's lithosphere
Structural geology – Science of the description and interpretation of deformation in the Earth's crust
== References == | Wikipedia/Transform_fault |
Vertebrates () are animals with a vertebral column (backbone or spine), and a cranium, or skull. The vertebral column surrounds and protects the spinal cord, while the cranium protects the brain.
The vertebrates make up the subphylum Vertebrata with some 65,000 species, by far the largest ranked grouping in the phylum Chordata. The vertebrates include mammals, birds, amphibians, and various classes of fish and reptiles. The fish include the jawless Agnatha and the jawed Gnathostomata. The jawed fish include both the cartilaginous fish and the bony fish. Bony fish include the lobe-finned fish, which gave rise to the tetrapods, the animals with four limbs. Despite their success, vertebrates make up less than five percent of all described animal species.
The first vertebrates appeared in the Cambrian explosion some 518 million years ago. Jawed vertebrates evolved in the Ordovician, followed by bony fishes in the Devonian. The first amphibians appeared on land in the Carboniferous. During the Triassic, mammals and dinosaurs appeared, the latter giving rise to birds in the Jurassic. Extant species are roughly equally divided between fishes of all kinds, and tetrapods. Populations of many species have been in steep decline since 1970 because of land-use change, overexploitation of natural resources, climate change, pollution and the impact of invasive species.
== Characteristics ==
=== Unique features ===
Vertebrates belong to Chordata, a phylum characterised by five synapomorphies (unique characteristics): namely a notochord, a hollow nerve cord along the back, an endostyle (often as a thyroid gland), pharyngeal gills arranged in pairs, and a muscular post-anal tail. Vertebrates share these characteristics with other chordates.
Vertebrates are distinguished from all other animals, including other chordates, by multiple synapomorphies: namely the vertebral column; a skull of bone or cartilage; a large brain divided into three or more sections; a muscular heart with multiple chambers; an inner ear with semicircular canals; sense organs including eyes, ears, and nose; and digestive organs including the intestine, liver, pancreas, and stomach.
=== Physical ===
Vertebrates (and other chordates) belong to the Bilateria, a group of animals with mirror symmetrical bodies. They move, typically by swimming, using muscles along the back, supported by a strong but flexible skeletal structure, the spine or vertebral column. The name 'vertebrate' derives from the Latin vertebratus, 'jointed', from vertebra, 'joint', in turn from Latin vertere, 'to turn'.
As embryos, vertebrates still have a notochord; as adults, all but the jawless fishes have a vertebral column, made of bone or cartilage, instead. Vertebrate embryos have pharyngeal arches; in adult fish, these support the gills, while in adult tetrapods they develop into other structures.
In the embryo, a layer of cells along the back folds and fuses into a hollow neural tube. This develops into the spinal cord, and at its front end, the brain. The brain receives information about the world through nerves which carry signals from sense organs in the skin and body. Because the ancestors of vertebrates usually moved forwards, the front of the body encountered stimuli before the rest of the body, favouring cephalisation, the evolution of a head containing sense organs and a brain to process the sensory information.
Vertebrates have a tubular gut that extends from the mouth to the anus. The vertebral column typically continues beyond the anus to form an elongated tail.
The ancestral vertebrates, and most extant species, are aquatic and carry out gas exchange in their gills. The gills are finely-branched structures which bring the blood close to the water. They are positioned just behind the head, supported by cartilaginous or bony branchial arches. In jawed vertebrates, the first gill arch pair evolved into the jaws. In amphibians and some primitive bony fishes, the larvae have external gills, branching off from the gill arches. Oxygen is carried from the gills to the body in the blood, and carbon dioxide is returned to the gills, in a closed circulatory system driven by a chambered heart. The tetrapods have lost the gills of their fish ancestors; they have adapted the swim bladder (that fish use for buoyancy) into lungs to breathe air, and the circulatory system is adapted accordingly. At the same time, they adapted the bony fins of the lobe-finned fishes into two pairs of walking legs, carrying the weight of the body via the shoulder and pelvic girdles.
Vertebrates vary in size from the smallest frog species such as Brachycephalus pulex, with a minimum adult snout–vent length of 6.45 millimetres (0.254 in), to the blue whale, at up to 33 m (108 ft) and weighing some 150 tonnes.
=== Molecular ===
Molecular markers known as conserved signature indels in protein sequences have been identified and provide distinguishing criteria for the vertebrate subphylum. Five molecular markers are exclusively shared by all vertebrates and reliably distinguish them from all other animals; these include protein synthesis elongation factor-2, eukaryotic translation initiation factor 3, adenosine kinase, and a protein related to ubiquitin carboxyl-terminal hydrolase. A specific relationship between vertebrates and tunicates is supported by two molecular markers, the proteins Rrp44 (associated with the exosome complex) and serine C-palmitoyltransferase. These are exclusively shared by species from these two subphyla, but not by cephalochordates.
== Evolutionary history ==
=== Cambrian explosion: first vertebrates ===
Vertebrates originated during the Cambrian explosion at the start of the Paleozoic, which saw a rise in animal diversity. The earliest known vertebrates belong to the Chengjiang biota and lived about 518 million years ago. These include Haikouichthys, Myllokunmingia, Zhongjianichthys, and probably Yunnanozoon. Unlike other Cambrian animals, these groups had the basic vertebrate body plan: a notochord, rudimentary vertebrae, and a well-defined head and tail, but lacked jaws. A vertebrate group of uncertain phylogeny, small eel-like conodonts, are known from microfossils of their paired tooth segments from the late Cambrian to the end of the Triassic. Given the hard teeth of the otherwise soft-bodied conodonts, zoologists have debated whether teeth or bone mineralized first; it seems that the mineralized skeleton came first.
=== Paleozoic: from fish to amphibians ===
The first jawed vertebrates may have appeared in the late Ordovician (~445 mya) and became common in the Devonian period, often known as the "Age of Fishes". The two groups of bony fishes, Actinopterygii and Sarcopterygii, evolved and became common. By the middle of the Devonian, a lineage of sarcopterygians with both gills and air-breathing lungs, adapted to life in swampy pools, used their muscular paired fins to propel themselves on land. The fins, already possessing bones and joints, evolved into two pairs of walking legs. These established themselves as amphibians, terrestrial tetrapods, in the next geological period, the Carboniferous. A group of vertebrates, the amniotes, with membranes around the embryo allowing it to survive on dry land, branched from amphibious tetrapods in the Carboniferous.
=== Mesozoic: from reptiles to mammals and birds ===
At the onset of the Mesozoic, all larger vertebrate groups were devastated after the largest mass extinction in Earth's history. The following recovery phase saw the emergence of many new vertebrate groups that are still around today, and this time has been described as the origin of modern ecosystems. On the continents, the ancestors of modern lissamphibians, turtles, crocodilians, lizards, and mammals appeared, as well as dinosaurs, which gave rise to birds later in the Mesozoic. In the seas, various groups of marine reptiles evolved, as did new groups of fish. At the end of the Mesozoic, another extinction event extirpated dinosaurs (other than birds) and many other vertebrate groups.
=== Cenozoic: Age of Mammals ===
The Cenozoic, the current era, is sometimes called the "Age of Mammals", because of the dominance of the terrestrial environment by that group. Placental mammals have predominantly occupied the Northern Hemisphere, with marsupial mammals in the Southern Hemisphere.
== Approaches to classification ==
=== Taxonomic history ===
In 1811, Jean-Baptiste Lamarck defined the vertebrates as a taxonomic group, a phylum distinct from the invertebrates he was studying. He described them as consisting of four classes, namely fish, reptiles, birds, and mammals, but treated the cephalochordates and tunicates as molluscs. In 1866, Ernst Haeckel called both his "Craniata" (vertebrates) and his "Acrania" (cephalochordates) "Vertebrata". In 1877, Ray Lankester grouped the craniates, cephalochordates, and urochordates (tunicates) as "Vertebrata". In 1880–1881, Francis Maitland Balfour placed the Vertebrata as a subphylum within the Chordates. In 2018, Naoki Irie and colleagues proposed making Vertebrata a full phylum.
=== Traditional taxonomy ===
Conventional evolutionary taxonomy groups extant vertebrates into seven classes based on traditional interpretations of gross anatomical and physiological traits. The commonly held classification lists three classes of fish and four of tetrapods. This ignores some of the natural relationships between the groupings. For example, the birds derive from a group of reptiles, so "Reptilia" excluding "Aves" is not a natural grouping; it is described as paraphyletic.
Subphylum Vertebrata
Class Agnatha (jawless fishes, paraphyletic)
Class Chondrichthyes (cartilaginous fishes)
Class Osteichthyes (bony fishes, paraphyletic)
Class Amphibia (traditional amphibians, paraphyletic)
Class Reptilia (reptiles, paraphyletic)
Class Aves (birds)
Class Mammalia (mammals)
In addition to these, there are two classes of extinct armoured fishes, Placodermi and Acanthodii, both paraphyletic.
Other ways of classifying the vertebrates have been devised, particularly with emphasis on the phylogeny of early amphibians and reptiles. An example based on work by M.J. Benton in 2004 is given here († = extinct):
Subphylum Vertebrata
Infraphylum "Agnatha" (lampreys and other jawless fishes)
Superclass †Anaspidomorphi (anaspids and relatives)
Class †Anaspida (anaspids)
Superclass Cyclostomata (cyclostomes)
Class Myxini (hagfish)
Class Petromyzontida (lampreys)
Class †Cephalaspidomorphi (cephalaspidomorphs)
Class †Conodonta (conodonts)
Class †Pteraspidomorpha (pteraspidomorphs)
Class †Thelodonti (thelodonts)
Infraphylum Gnathostomata (vertebrates with jaws)
Class †"Placodermi" (extinct armoured fishes)
Class Chondrichthyes (cartilaginous fishes)
Class †"Acanthodii" (extinct spiny "sharks")
Superclass "Osteichthyes" (bony fishes)
Class Actinopterygii (ray-finned bony fishes)
Class "Sarcopterygii" (lobe-finned fishes, cladistically including the tetrapods)
Superclass Tetrapoda (four-limbed vertebrates)
Class "Amphibia" (amphibians, some ancestral to the amniotes)—now a paraphyletic group
Class Synapsida (mammals and their extinct relatives)
Class Sauropsida (reptiles and birds)
Incertae sedis
Genus †Nuucichthys
Genus †Palaeospondylus
While this traditional taxonomy is orderly, most of the groups are paraphyletic, meaning that the structure does not accurately reflect the natural evolved grouping. For instance, descendants of the first reptiles include modern reptiles, mammals and birds; the agnathans have given rise to the jawed vertebrates; the bony fishes have given rise to the land vertebrates; a group of amphibians, the labyrinthodonts, have given rise to the reptiles (traditionally including the mammal-like synapsids), which in turn have given rise to the mammals and birds. Most scientists working with vertebrates use a classification based purely on phylogeny, organized by their known evolutionary history.
=== External phylogeny ===
The closest relatives of vertebrates have been debated over the years. It was once thought that the Cephalochordata was the sister taxon to Vertebrata. This group, Notochordata, was taken to be sister to the Tunicata. Since 2006, analysis has shown that the tunicates + vertebrates form a clade, the Olfactores, with Cephalochordata as its sister (the Olfactores hypothesis), as shown in the following phylogenetic tree.
=== Internal phylogeny ===
The internal phylogeny of the vertebrates is shown in the below tree.
The placement of hagfishes within the vertebrates has been controversial. Their lack of proper vertebrae (among other characteristics found in lampreys and jawed vertebrates) led authors of phylogenetic analyses based on morphology to place them outside Vertebrata. Molecular data, however, indicates that they are vertebrates, being most closely related to lampreys. An older view is that they are a sister group of vertebrates in the common taxon of Craniata. In 2019, Tetsuto Miyashita and colleagues reconciled the two types of analysis, supporting the Cyclostomata hypothesis using only morphological data.
== Diversity ==
=== Species by group ===
Described and extant vertebrate species are split roughly evenly but non-phylogenetically between non-tetrapod "fish" and tetrapods. The following table lists the number of described extant species for each vertebrate class as estimated in the IUCN Red List of Threatened Species, 2014.3. Paraphyletic groups are shown in quotation marks.
The IUCN estimates that 1,305,075 extant invertebrate species have been described, which means that less than 5% of the described animal species in the world are vertebrates.
=== Population trends ===
The Living Planet Index, following 16,704 populations of 4,005 species of vertebrates, shows a decline of 60% between 1970 and 2014. Since 1970, freshwater species declined 83%, and tropical populations in South and Central America declined 89%. The authors note that "An average trend in population change is not an average of total numbers of animals lost." According to WWF, this could lead to a sixth major extinction event. The five main causes of biodiversity loss are land-use change, overexploitation of natural resources, climate change, pollution and invasive species.
== Notes ==
== See also ==
Marine vertebrate – Marine animals with a vertebral column
Taxonomy of the vertebrates (Young, 1962) – Classification of spine-possessing animals according to some authorities
== References ==
== Bibliography ==
Kardong, Kenneth V. (1998). Vertebrates: Comparative Anatomy, Function, Evolution (second ed.). USA: McGraw-Hill. 747 pp. ISBN 978-0-697-28654-3.
"Vertebrata". Integrated Taxonomic Information System. Retrieved 6 August 2007.
== External links ==
Tree of Life
Tunicates and not cephalochordates are the closest living relatives of vertebrates
Vertebrate Pests chapter in United States Environmental Protection Agency and University of Florida/Institute of Food and Agricultural Sciences National Public Health Pesticide Applicator Training Manual
The Vertebrates
The Origin of Vertebrates Marc W. Kirschner, iBioSeminars, 2008. | Wikipedia/Vertebrate |
Paleogeosciences are the branches of Earth science or geoscience concerned with the planet's past states and processes. Earth science or geoscience is an all-embracing term referring to the fields of science dealing with planet Earth. These studies of Earth's history encompass the Biosphere, Cryosphere, Hydrosphere, Atmosphere, and Lithosphere, which together constitute the Geosphere. One of the most socially prominent facets of the paleogeosciences is their application to Earth's changing climate system.
== Etymology ==
The term "Paleogeoscience" was coined by the Collaboration and Cyberinfrastructure for Paleogeosciences (C4P) research coordination network (RCN), a National Science Foundation EarthCube funded project intending to foster collaboration among paleogeoscientists, paleobiologists, bioinformaticists, stratigraphers, geochronologists, geographers, data scientists, and computer scientists with an aim to dramatically improve the application of modern data management approaches, data mining technologies, and computational methods to better analyze data within the paleogeosciences and other domains and disciplines.
== Definition ==
"Paleogeoscience" is the collective term for geologic studies that pertain to past geological processes. It combines paleoenvironmental and paleobiologial perspectives towards the goal of furthering our understanding of the interactions between life and the Earth through time. It encompasses subjects such as Paleobiology, Paleoclimatology, Geochemistry, Geochronology, Stratigraphy, Paleobotany, Paleogeography, and more.
Goals of paleogeosciences include understanding and recreating the Earth System over time for use in understanding the future of the Earth. It uses tangible data and proxy data.
See Resources section for links to catalogs of hundreds of resources for data, software, and sample collections pertaining to many realms of paleogeoscience.
== Resources ==
NSF EarthCube Paleogeoscience RCN Catalog of Software Resources
NSF EarthCube Paleogeoscience RCN Catalog of Physical Sample Repository Resources
NSF EarthCube Paleogeoscience RCN Catalog of Database Resources
OGC Catalog Service web service primer and instructions for accessing NSF EarthCube Paleogeoscience RCN Catalogs
== References == | Wikipedia/Paleogeoscience |
Geophysical fluid dynamics, in its broadest meaning, is the application of fluid dynamics to naturally occurring flows, such as lava, oceans, and atmospheres, on Earth and other planets.
Two physical features that are common to many of the phenomena studied in geophysical fluid dynamics are rotation of the fluid due to the planetary rotation and stratification (layering).
The applications of geophysical fluid dynamics do not generally include the circulation of the mantle, which is the subject of geodynamics, or fluid phenomena in the magnetosphere.
Ocean circulation and air circulation are typically studied in oceanography and meteorology.
== Fundamentals ==
To describe the flow of geophysical fluids, equations are needed for conservation of momentum (or Newton's second law) and conservation of energy. The former leads to the Navier–Stokes equations, which cannot, in general, be solved analytically. Therefore, further approximations are generally made in order to be able to solve these equations. First, the fluid is assumed to be incompressible. Remarkably, this works well even for a highly compressible fluid like air as long as sound and shock waves can be ignored.: 2–3 Second, the fluid is assumed to be a Newtonian fluid, meaning that there is a linear relation between the shear stress τ and the rate of strain du/dx, for example
{\displaystyle \tau =\mu {\frac {du}{dx}},}
where μ is the viscosity.: 2–3 Under these assumptions the Navier-Stokes equations are
{\displaystyle \overbrace {\rho {\Big (}\underbrace {\frac {\partial \mathbf {v} }{\partial t}} _{\begin{smallmatrix}{\text{Eulerian}}\\{\text{acceleration}}\end{smallmatrix}}+\underbrace {\mathbf {v} \cdot \nabla \mathbf {v} } _{\begin{smallmatrix}{\text{Advection}}\end{smallmatrix}}{\Big )}} ^{\text{Inertia (per volume)}}=\overbrace {\underbrace {-\nabla p} _{\begin{smallmatrix}{\text{Pressure}}\\{\text{gradient}}\end{smallmatrix}}+\underbrace {\mu \nabla ^{2}\mathbf {v} } _{\text{Viscosity}}} ^{\text{Divergence of stress}}+\underbrace {\mathbf {f} } _{\begin{smallmatrix}{\text{Other}}\\{\text{body}}\\{\text{forces}}\end{smallmatrix}}.}
The left hand side represents the acceleration that a small parcel of fluid would experience in a reference frame that moved with the parcel (a Lagrangian frame of reference). In a stationary (Eulerian) frame of reference, this acceleration is divided into the local rate of change of velocity and advection, a measure of the rate of flow in or out of a small region.: 44–45
The equation for energy conservation is essentially an equation for heat flow. If heat is transported by conduction, the heat flow is governed by a diffusion equation. If there are also buoyancy effects, for example hot air rising, then natural convection, also known as free convection, can occur.: 171 Convection in the Earth's outer core drives the geodynamo that is the source of the Earth's magnetic field.: Chapter 8 In the ocean, convection can be thermal (driven by heat), haline (where the buoyancy is due to differences in salinity), or thermohaline, a combination of the two.
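As a concrete illustration of the conduction case, the following minimal Python sketch integrates the one-dimensional heat diffusion equation with an explicit finite-difference scheme; the diffusivity, grid spacing, time step, and initial profile are arbitrary illustrative values, not parameters of any particular geophysical fluid.

```python
# Minimal 1D heat-diffusion sketch (explicit finite differences).
# kappa, dx, dt and the initial profile are illustrative values only.
import numpy as np

kappa = 1.0e-4             # thermal diffusivity (m^2/s), illustrative
dx = 0.01                  # grid spacing (m)
dt = 0.4 * dx**2 / kappa   # time step within the explicit stability limit dt <= dx^2 / (2*kappa)

T = np.zeros(101)          # temperature profile (arbitrary units)
T[45:56] = 1.0             # warm patch in the middle

for _ in range(500):
    # interior update: dT/dt = kappa * d2T/dx2
    T[1:-1] += dt * kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    # fixed-temperature (Dirichlet) boundaries
    T[0], T[-1] = 0.0, 0.0

print(f"maximum temperature after diffusion: {T.max():.3f}")
```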
== Buoyancy and stratification ==
Fluid that is less dense than its surroundings tends to rise until it has the same density as its surroundings. If there is not much energy input to the system, it will tend to become stratified. On a large scale, Earth's atmosphere is divided into a series of layers. Going upwards from the ground, these are the troposphere, stratosphere, mesosphere, thermosphere, and exosphere.
The density of air is mainly determined by temperature and water vapor content, the density of sea water by temperature and salinity, and the density of lake water by temperature. Where stratification occurs, there may be thin layers in which temperature or some other property changes more rapidly with height or depth than the surrounding fluid. Depending on the main sources of buoyancy, this layer may be called a pycnocline (density), thermocline (temperature), halocline (salinity), or chemocline (chemistry, including oxygenation).
The same buoyancy that gives rise to stratification also drives gravity waves. If the gravity waves occur within the fluid, they are called internal waves.: 208–214
In modeling buoyancy-driven flows, the Navier-Stokes equations are modified using the Boussinesq approximation. This ignores variations in density except where they are multiplied by the gravitational acceleration g.: 188
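The strength of a stratification treated under the Boussinesq approximation is commonly summarised by the buoyancy (Brunt–Väisälä) frequency, N² = −(g/ρ₀)(dρ/dz). The following minimal Python sketch evaluates it for an invented linear density profile; the profile and the reference density are illustrative assumptions, not observational data.

```python
# Buoyancy (Brunt-Vaisala) frequency from a density profile:
#   N^2 = -(g / rho0) * d(rho)/dz,   with z positive upward.
# The linear density profile below is purely illustrative.
import numpy as np

g = 9.81              # gravitational acceleration (m/s^2)
rho0 = 1025.0         # reference density (kg/m^3)

z = np.linspace(-200.0, 0.0, 201)        # depth axis, z increases upward (m)
rho = 1027.0 - 0.01 * (z + 200.0)        # density decreasing upward (kg/m^3), illustrative

drho_dz = np.gradient(rho, z)            # vertical density gradient
N2 = -(g / rho0) * drho_dz               # squared buoyancy frequency (s^-2)
N = np.sqrt(np.maximum(N2, 0.0))         # buoyancy frequency (s^-1); unstable regions set to 0

print(f"typical N: {N.mean():.2e} s^-1, period: {2 * np.pi / N.mean() / 60:.1f} minutes")
```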
If the pressure depends only on density and vice versa, the fluid dynamics are called barotropic. In the atmosphere, this corresponds to a lack of fronts, as in the tropics. If there are fronts, the flow is baroclinic, and instabilities such as cyclones can occur.
== Rotation ==
Coriolis effect
Circulation
Kelvin's circulation theorem
Vorticity equation
Thermal wind
Geostrophic current
Geostrophic wind
Taylor–Proudman theorem
Hydrostatic equilibrium
Ekman spiral
Ekman layer
== General circulation ==
Atmospheric circulation
Ocean current
Ocean dynamics
Thermohaline circulation
Boundary current
Sverdrup balance
Subsurface currents
== Waves ==
=== Barotropic ===
Kelvin wave
Rossby wave
Sverdrup wave (Poincaré wave)
=== Baroclinic ===
Gravity wave
== See also ==
Geophysical Fluid Dynamics Laboratory
== References ==
== Further reading ==
== External links ==
Geophysical Fluid Dynamics Program (Woods Hole Oceanographic Institution)
Geophysical Fluid Dynamics Laboratory (University of Washington) | Wikipedia/Geophysical_fluid_dynamics |
Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environmental models and various proxies enable the scientific community to assess the role of oceanic processes in the global climate through the reconstruction of past climates at various intervals. Paleoceanographic research is also intimately tied to paleoclimatology.
== Source and methods of information ==
Paleoceanography makes use of so-called proxy methods as a way to infer information about the past state and evolution of the world's oceans. Several geochemical proxy tools include long-chain organic molecules (e.g. alkenones), stable and radioactive isotopes, and trace metals. Additionally, sediment cores rich with fossils and shells (tests) can also be useful; the field of paleoceanography is closely related to sedimentology and paleontology.
=== Sea-surface temperature ===
Sea-surface temperature (SST) records can be extracted from deep-sea sediment cores using oxygen isotope ratios and the ratio of magnesium to calcium (Mg/Ca) in shell secretions from plankton, from long-chain organic molecules such as alkenone, from tropical corals near the sea surface, and from mollusk shells.
Oxygen isotope ratios (δ18O) are useful in reconstructing SST because of the influence temperature has on the isotope ratio. Plankton take up oxygen in building their shells and will be less enriched in their δ18O when formed in warmer waters, provided they are in thermodynamic equilibrium with the seawater. When these shells precipitate, they sink and form sediments on the ocean floor whose δ18O can be used to infer past SSTs. Oxygen isotope ratios are not perfect proxies, however. The volume of ice trapped in continental ice sheets can have an impact on the δ18O. Freshwater characterized by lower values of δ18O becomes trapped in the continental ice sheets, so that during glacial periods seawater δ18O is elevated and calcite shells formed during these times will have a larger δ18O value.
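As a rough illustration of how such a record is converted into temperature, the sketch below applies a quadratic calcite–water paleotemperature equation of the Epstein/Shackleton type; the coefficients, the shell δ18O value, and the seawater δ18O values (including the assumed glacial ice-volume enrichment) are approximate, illustrative numbers rather than a specific published calibration.

```python
# Illustrative delta-18O paleotemperature sketch.
#   T = a + b*(d_calcite - d_water) + c*(d_calcite - d_water)**2
# The coefficients a, b, c and all delta values below are approximate,
# illustrative numbers only (calibrations differ between studies and species).

a, b, c = 16.9, -4.38, 0.10      # quadratic calibration coefficients (deg C), illustrative

def calcite_temperature(d18o_calcite, d18o_seawater):
    """Estimate calcification temperature (deg C) from calcite and seawater delta-18O."""
    diff = d18o_calcite - d18o_seawater
    return a + b * diff + c * diff ** 2

# Example: one shell value compared against modern and assumed glacial seawater
d18o_shell = -1.0                # per mil, illustrative
d18o_sw_modern = 0.0             # per mil, illustrative reference
d18o_sw_glacial = 1.0            # per mil, assumed ice-volume enrichment, illustrative

for label, d_sw in [("modern seawater", d18o_sw_modern), ("glacial seawater", d18o_sw_glacial)]:
    t = calcite_temperature(d18o_shell, d_sw)
    print(f"{label}: estimated temperature = {t:.1f} deg C")
```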
The substitution of magnesium in place of calcium in CaCO3 shells can be used as a proxy for the SST in which the shells formed. Mg/Ca ratios have several influencing factors other than temperature, such as vital effects, shell-cleaning, and postmortem and post-depositional dissolution effects, to name a few. Other influences aside, Mg/Ca ratios have successfully quantified the tropical cooling that occurred during the last glacial period.
Alkenones are long-chain, complex organic molecules produced by photosynthetic algae. They are temperature sensitive and can be extracted from marine sediments. Use of alkenones represents a more direct relationship between SST and algae and does not rely on knowing the biotic and physical-chemical thermodynamic relationships needed in CaCO3 studies. Another advantage of using alkenones is that they are a product of photosynthesis, necessitating formation in the sunlit upper surface layers. As such, alkenones better record near-surface SST.
=== Bottom-water temperature ===
The most commonly used proxies for inferring deep-sea temperature history are the Mg/Ca ratios in benthic foraminifera and ostracodes. The temperatures inferred from the Mg/Ca ratios have confirmed a cooling of up to 3 °C of the deep ocean during the late Pleistocene glacial periods. One notable study is that by Lear et al. [2002], who calibrated bottom water temperature to Mg/Ca ratios at nine locations covering a variety of depths, using up to six different benthic foraminifera species (depending on location). The authors found an equation calibrating bottom water temperature to Mg/Ca ratios that takes an exponential form:
{\displaystyle \mathrm {Mg/Ca} =(0.867\pm 0.049)\exp((0.109\pm 0.007)\,\mathrm {BWT} )}
where Mg/Ca is the Mg/Ca ratio found in the benthic foraminifera and BWT is the bottom water temperature.
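Solving this calibration for temperature gives BWT = ln((Mg/Ca)/0.867)/0.109, which the following minimal Python sketch applies to a single measurement; the example ratio is an invented illustrative value and the published coefficient uncertainties are ignored.

```python
# Invert the Lear et al. [2002]-style calibration Mg/Ca = 0.867 * exp(0.109 * BWT)
# to estimate bottom water temperature from a measured Mg/Ca ratio.
# The example ratio below is an illustrative value, and the published
# uncertainties on the two coefficients are ignored in this sketch.
import math

A = 0.867   # pre-exponential coefficient (mmol/mol)
B = 0.109   # exponential coefficient (per deg C)

def bottom_water_temperature(mg_ca):
    """Estimate bottom water temperature (deg C) from a benthic Mg/Ca ratio (mmol/mol)."""
    return math.log(mg_ca / A) / B

example_ratio = 1.2  # mmol/mol, illustrative
print(f"Mg/Ca = {example_ratio} mmol/mol -> BWT ~ {bottom_water_temperature(example_ratio):.1f} deg C")
```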
=== Sediment Records ===
Sediment records have been used to make inferences about the past and predictions about the future, and have been used in paleoceanographic research since the 1930s. Modern time-scale reconstructive research has advanced using sediment core-scanning methods. These methods have enabled research similar to that conducted with ice core records in Antarctica. Such records can provide information on the relative abundance of organisms present at a given time using paleoproductivity methods such as measuring the total diatom abundance. Records can also provide information on historic weather patterns and ocean circulation, as Deschamps et al. described with their research into sediment records from the Chukchi-Alaskan and Canadian Beaufort margins.
=== Salinity ===
Salinity is a more challenging quantity to infer from paleorecords. Deuterium excess in core records can provide a better inference of sea-surface salinity than oxygen isotopes, and certain species, such as diatoms, can provide a semiquantitative salinity record due to the relative abundances of diatoms that are limited to certain salinity regimes. There have been changes to the global water cycle and the salinity balance of the oceans, with the North Atlantic becoming more saline and the subtropical Indian and Pacific Oceans becoming less so. With changes to the water cycle, there have also been variations in the vertical distribution of salt and in haloclines. Large incursions of freshwater and changing salinity can also contribute to a reduction in sea ice extent.
=== Ocean circulation ===
Several proxy methods have been used to infer past ocean circulation and changes to it. They include carbon isotope ratios, cadmium/calcium (Cd/Ca) ratios, protactinium/thorium isotopes (231Pa and 230Th), radiocarbon activity (δ14C), neodymium isotopes (143Nd and 144Nd), and sortable silt (fraction of deep-sea sediment between 10 and 63 μm). Carbon isotope and cadmium/calcium ratio proxies are used because variability in their ratios is due partly to changes in bottom-water chemistry, which is in turn related to the source of deep-water formation. These ratios, however, are influenced by biological, ecological, and geochemical processes which complicate circulation inferences.
All proxies included are useful in inferring the behavior of the meridional overturning circulation. For example, McManus et al. [2004] used protactinium/thorium isotopes (231Pa and 230Th) to show that the Atlantic Meridional Overturning Circulation had been nearly (or completely) shut off during the last glacial period. 231Pa and 230Th are both formed from the radioactive decay of dissolved uranium in seawater, with 231Pa able to remain in the water column longer than 230Th: 231Pa has a residence time of ~100–200 years while 230Th has one of ~20–40 years. In today's Atlantic Ocean and current overturning circulation, 230Th transport to the Southern Ocean is minimal due to its short residence time, and 231Pa transport is high. This results in the relatively low 231Pa / 230Th ratios found by McManus et al. [2004] in a core at 33°N 57°W and a depth of 4.5 km. When the overturning circulation shuts down (as hypothesized) during glacial periods, the 231Pa / 230Th ratio becomes elevated due to the lack of removal of 231Pa to the Southern Ocean. McManus et al. [2004] also note a small rise in the 231Pa / 230Th ratio during the Younger Dryas event, another period in climate history thought to have experienced a weakening overturning circulation.
=== Acidity, pH, and alkalinity ===
Boron isotope ratios (δ11B) can be used to infer both recent and millennial-scale changes in the acidity, pH, and alkalinity of the ocean, which are mainly forced by atmospheric CO2 concentrations and the bicarbonate ion concentration in the ocean. δ11B has been identified in corals from the southwestern Pacific to vary with ocean pH, and shows that climate variabilities such as the Pacific decadal oscillation (PDO) can modulate the impact of ocean acidification due to rising atmospheric CO2 concentrations. δ11B in plankton shells can also be used as an indirect proxy for atmospheric CO2 concentrations over the past several million years.
== See also ==
Oceanography – Study of physical, chemical, and biological processes in the ocean
Paleoclimatology – Study of changes in ancient climate
Paleogeography – Study of physical geography of past landscapes
Paleothermometer – Study of ancient temperatures
== References ==
== External links ==
Media related to Paleoceanography at Wikimedia Commons | Wikipedia/Paleoceanography |
In electrical engineering, a transformer is a passive component that transfers electrical energy from one electrical circuit to another circuit, or multiple circuits. A varying current in any coil of the transformer produces a varying magnetic flux in the transformer's core, which induces a varying electromotive force (EMF) across any other coils wound around the same core. Electrical energy can be transferred between separate coils without a metallic (conductive) connection between the two circuits. Faraday's law of induction, discovered in 1831, describes the induced voltage effect in any coil due to a changing magnetic flux encircled by the coil.
Transformers are used to change AC voltage levels, such transformers being termed step-up or step-down type to increase or decrease voltage level, respectively. Transformers can also be used to provide galvanic isolation between circuits as well as to couple stages of signal-processing circuits. Since the invention of the first constant-potential transformer in 1885, transformers have become essential for the transmission, distribution, and utilization of alternating current electric power. A wide range of transformer designs is encountered in electronic and electric power applications. Transformers range in size from RF transformers less than a cubic centimeter in volume, to units weighing hundreds of tons used to interconnect the power grid.
== Principles ==
Ideal transformer equations
By Faraday's law of induction:
{\displaystyle V_{\text{P}}=-N_{\text{P}}{\frac {d\Phi }{dt}}} (eq. 1)
{\displaystyle V_{\text{S}}=-N_{\text{S}}{\frac {d\Phi }{dt}}} (eq. 2)
where V is the instantaneous voltage, N is the number of turns in a winding, dΦ/dt is the derivative of the magnetic flux Φ through one turn of the winding over time (t), and the subscripts P and S denote primary and secondary.
Combining the ratio of eq. 1 & eq. 2:
where for a step-up transformer a < 1 and for a step-down transformer a > 1.
By the law of conservation of energy, apparent, real and reactive power are each conserved in the input and output:
{\displaystyle S=I_{\text{P}}V_{\text{P}}=I_{\text{S}}V_{\text{S}}} (eq. 4)
where S is apparent power and I is current.
Combining Eq. 3 & Eq. 4 with this endnote gives the ideal transformer identity:
where LP is the primary winding self-inductance and LS is the secondary winding self-inductance.
By Ohm's law and the ideal transformer identity:
{\displaystyle Z'_{\text{L}}={\frac {V_{\text{P}}}{I_{\text{P}}}}=a^{2}{\frac {V_{\text{S}}}{I_{\text{S}}}}=a^{2}Z_{\text{L}}}
where ZL is the load impedance of the secondary circuit and Z′L is the apparent load or driving point impedance of the primary circuit, the superscript ′ denoting a quantity referred to the primary.
=== Ideal transformer ===
An ideal transformer is linear, lossless and perfectly coupled. Perfect coupling implies infinitely high core magnetic permeability and winding inductance and zero net magnetomotive force (i.e. ipnp − isns = 0).
A varying current in the transformer's primary winding creates a varying magnetic flux in the transformer core, which is also encircled by the secondary winding. This varying flux at the secondary winding induces a varying electromotive force or voltage in the secondary winding. This electromagnetic induction phenomenon is the basis of transformer action and, in accordance with Lenz's law, the secondary current so produced creates a flux equal and opposite to that produced by the primary winding.
The windings are wound around a core of infinitely high magnetic permeability so that all of the magnetic flux passes through both the primary and secondary windings. With a voltage source connected to the primary winding and a load connected to the secondary winding, the transformer currents flow in the indicated directions and the core magnetomotive force cancels to zero.
According to Faraday's law, since the same magnetic flux passes through both the primary and secondary windings in an ideal transformer, a voltage is induced in each winding proportional to its number of turns. The transformer winding voltage ratio is equal to the winding turns ratio.
An ideal transformer is a reasonable approximation for a typical commercial transformer, with voltage ratio and winding turns ratio both being inversely proportional to the corresponding current ratio.
The load impedance referred to the primary circuit is equal to the turns ratio squared times the secondary circuit load impedance.
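The following minimal Python sketch works through these ideal-transformer relations for one set of numbers; the turns counts, source voltage, and load impedance are arbitrary illustrative values.

```python
# Ideal transformer relations: V_P/V_S = N_P/N_S = I_S/I_P = a and Z'_L = a^2 * Z_L.
# All numbers below are arbitrary illustrative values.

N_P, N_S = 1000, 100          # primary and secondary turns
V_P = 230.0                   # primary (source) voltage, volts RMS
Z_L = 10.0                    # secondary load impedance, ohms (purely resistive here)

a = N_P / N_S                 # turns ratio (a > 1: step-down)
V_S = V_P / a                 # secondary voltage
I_S = V_S / Z_L               # secondary current (Ohm's law)
I_P = I_S / a                 # primary current
Z_L_referred = a**2 * Z_L     # load impedance referred to the primary

print(f"a = {a:.0f}, V_S = {V_S:.1f} V, I_S = {I_S:.2f} A, I_P = {I_P:.3f} A")
print(f"referred load impedance = {Z_L_referred:.0f} ohm")
print(f"apparent power check: {V_P * I_P:.1f} VA primary vs {V_S * I_S:.1f} VA secondary")
```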
=== Real transformer ===
==== Deviations from ideal transformer ====
The ideal transformer model neglects many basic linear aspects of real transformers, including unavoidable losses and inefficiencies.
(a) Core losses, collectively called magnetizing current losses, consisting of
Hysteresis losses due to nonlinear magnetic effects in the transformer core, and
Eddy current losses due to joule heating in the core that are proportional to the square of the transformer's applied voltage.
(b) Unlike the ideal model, the windings in a real transformer have non-zero resistances and inductances associated with:
Joule losses due to resistance in the primary and secondary windings
Leakage flux that escapes from the core and passes through one winding only resulting in primary and secondary reactive impedance.
(c) similar to an inductor, parasitic capacitance and self-resonance phenomenon due to the electric field distribution. Three kinds of parasitic capacitance are usually considered and the closed-loop equations are provided
Capacitance between adjacent turns in any one layer;
Capacitance between adjacent layers;
Capacitance between the core and the layer(s) adjacent to the core;
Inclusion of capacitance into the transformer model is complicated, and is rarely attempted; the 'real' transformer model's equivalent circuit shown below does not include parasitic capacitance. However, the capacitance effect can be measured by comparing open-circuit inductance, i.e. the inductance of a primary winding when the secondary circuit is open, to a short-circuit inductance when the secondary winding is shorted.
==== Leakage flux ====
The ideal transformer model assumes that all flux generated by the primary winding links all the turns of every winding, including itself. In practice, some flux traverses paths that take it outside the windings. Such flux is termed leakage flux, and results in leakage inductance in series with the mutually coupled transformer windings. Leakage flux results in energy being alternately stored in and discharged from the magnetic fields with each cycle of the power supply. It is not directly a power loss, but results in inferior voltage regulation, causing the secondary voltage not to be directly proportional to the primary voltage, particularly under heavy load. Transformers are therefore normally designed to have very low leakage inductance.
In some applications increased leakage is desired, and long magnetic paths, air gaps, or magnetic bypass shunts may deliberately be introduced in a transformer design to limit the short-circuit current it will supply. Leaky transformers may be used to supply loads that exhibit negative resistance, such as electric arcs, mercury- and sodium- vapor lamps and neon signs or for safely handling loads that become periodically short-circuited such as electric arc welders.: 485
Air gaps are also used to keep a transformer from saturating, especially audio-frequency transformers in circuits that have a DC component flowing in the windings. A saturable reactor exploits saturation of the core to control alternating current.
Knowledge of leakage inductance is also useful when transformers are operated in parallel. It can be shown that if the percent impedance and associated winding leakage reactance-to-resistance (X/R) ratio of two transformers were the same, the transformers would share the load power in proportion to their respective ratings. However, the impedance tolerances of commercial transformers are significant. Also, the impedance and X/R ratio of different capacity transformers tends to vary.
==== Equivalent circuit ====
Referring to the diagram, a practical transformer's physical behavior may be represented by an equivalent circuit model, which can incorporate an ideal transformer.
Winding joule losses and leakage reactance are represented by the following series loop impedances of the model:
Primary winding: RP, XP
Secondary winding: RS, XS.
In the normal course of circuit equivalence transformation, RS and XS are in practice usually referred to the primary side by multiplying these impedances by the turns ratio squared, (NP/NS)² = a².
Core loss and reactance is represented by the following shunt leg impedances of the model:
Core or iron losses: RC
Magnetizing reactance: XM.
RC and XM are collectively termed the magnetizing branch of the model.
Core losses are caused mostly by hysteresis and eddy current effects in the core and are proportional to the square of the core flux for operation at a given frequency.: 142–143 The finite permeability core requires a magnetizing current IM to maintain mutual flux in the core. Magnetizing current is in phase with the flux, the relationship between the two being non-linear due to saturation effects. However, all impedances of the equivalent circuit shown are by definition linear and such non-linearity effects are not typically reflected in transformer equivalent circuits.: 142 With sinusoidal supply, core flux lags the induced EMF by 90°. With open-circuited secondary winding, magnetizing branch current I0 equals transformer no-load current.
The resulting model, though sometimes termed 'exact' equivalent circuit based on linearity assumptions, retains a number of approximations. Analysis may be simplified by assuming that magnetizing branch impedance is relatively high and relocating the branch to the left of the primary impedances. This introduces error but allows combination of primary and referred secondary resistances and reactance by simple summation as two series impedances.
Transformer equivalent circuit impedance and transformer ratio parameters can be derived from the following tests: open-circuit test, short-circuit test, winding resistance test, and transformer ratio test.
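A minimal Python sketch of how open-circuit and short-circuit test readings are commonly converted into approximate equivalent-circuit parameters is shown below, using the simplification that the magnetizing branch lies directly across the supply during the open-circuit test; all test readings are illustrative values, not measurements of any particular transformer.

```python
# Approximate equivalent-circuit parameters from open-circuit (OC) and
# short-circuit (SC) test readings, using the common simplification that the
# magnetizing branch sits directly across the supply during the OC test.
# All test readings below are illustrative values, not measurements.
import math

# Open-circuit test (secondary open, rated voltage applied to the primary)
V_oc, I_oc, P_oc = 230.0, 0.45, 35.0     # volts, amps, watts

# Short-circuit test (secondary shorted, reduced voltage for rated current)
V_sc, I_sc, P_sc = 12.0, 8.7, 70.0       # volts, amps, watts

# Shunt (magnetizing) branch from the OC test
R_C = V_oc**2 / P_oc                              # core-loss resistance
Q_oc = math.sqrt((V_oc * I_oc)**2 - P_oc**2)      # reactive power drawn at no load
X_M = V_oc**2 / Q_oc                              # magnetizing reactance

# Series branch (primary plus referred secondary) from the SC test
Z_eq = V_sc / I_sc
R_eq = P_sc / I_sc**2
X_eq = math.sqrt(Z_eq**2 - R_eq**2)

print(f"R_C = {R_C:.0f} ohm, X_M = {X_M:.0f} ohm")
print(f"R_eq = {R_eq:.2f} ohm, X_eq = {X_eq:.2f} ohm")
```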
=== Transformer EMF equation ===
If the flux in the core is purely sinusoidal, the relationship for either winding between its rms voltage Erms, the supply frequency f, the number of turns N, the core cross-sectional area A in m², and the peak magnetic flux density Bpeak in Wb/m² or T (tesla) is given by the universal EMF equation:
{\displaystyle E_{\text{rms}}={\frac {2\pi fNAB_{\text{peak}}}{\sqrt {2}}}\approx 4.44fNAB_{\text{peak}}}
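A short numerical check of the universal EMF equation follows; the frequency, turns count, core area, and peak flux density are arbitrary illustrative values.

```python
# Universal EMF equation: E_rms = 2*pi*f*N*A*B_peak / sqrt(2)  (approx. 4.44*f*N*A*B_peak)
# The numbers below are arbitrary illustrative values.
import math

f = 50.0        # supply frequency (Hz)
N = 200         # turns on the winding
A = 0.002       # core cross-sectional area (m^2)
B_peak = 1.5    # peak flux density (T)

E_rms = 2 * math.pi * f * N * A * B_peak / math.sqrt(2)
print(f"E_rms = {E_rms:.1f} V  (4.44*f*N*A*B = {4.44 * f * N * A * B_peak:.1f} V)")
```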
=== Polarity ===
A dot convention is often used in transformer circuit diagrams, nameplates or terminal markings to define the relative polarity of transformer windings. Positively increasing instantaneous current entering the primary winding's 'dot' end induces positive polarity voltage exiting the secondary winding's 'dot' end. Three-phase transformers used in electric power systems will have a nameplate that indicate the phase relationships between their terminals. This may be in the form of a phasor diagram, or using an alpha-numeric code to show the type of internal connection (wye or delta) for each winding.
=== Effect of frequency ===
The EMF of a transformer at a given flux increases with frequency. By operating at higher frequencies, transformers can be physically more compact because a given core is able to transfer more power without reaching saturation and fewer turns are needed to achieve the same impedance. However, properties such as core loss and conductor skin effect also increase with frequency. Aircraft and military equipment employ 400 Hz power supplies which reduce core and winding weight. Conversely, frequencies used for some railway electrification systems were much lower (e.g. 16.7 Hz and 25 Hz) than normal utility frequencies (50–60 Hz) for historical reasons concerned mainly with the limitations of early electric traction motors. Consequently, the transformers used to step-down the high overhead line voltages were much larger and heavier for the same power rating than those required for the higher frequencies.
Operation of a transformer at its designed voltage but at a higher frequency than intended will lead to reduced magnetizing current. At a lower frequency, the magnetizing current will increase. Operation of a large transformer at other than its design frequency may require assessment of voltages, losses, and cooling to establish if safe operation is practical. Transformers may require protective relays to protect the transformer from overvoltage at higher than rated frequency.
One example is in traction transformers used for electric multiple unit and high-speed train service operating across regions with different electrical standards. The converter equipment and traction transformers have to accommodate different input frequencies and voltage (ranging from as high as 50 Hz down to 16.7 Hz and rated up to 25 kV).
At much higher frequencies the transformer core size required drops dramatically: a physically small transformer can handle power levels that would require a massive iron core at mains frequency. The development of switching power semiconductor devices made switch-mode power supplies viable, to generate a high frequency, then change the voltage level with a small transformer.
Transformers for higher frequency applications such as SMPS typically use core materials with much lower hysteresis and eddy-current losses than those for 50/60 Hz. Primary examples are iron-powder and ferrite cores. The lower frequency-dependent losses of these cores often come at the expense of flux density at saturation. For instance, ferrite saturation occurs at a substantially lower flux density than laminated iron.
Large power transformers are vulnerable to insulation failure due to transient voltages with high-frequency components, such as caused in switching or by lightning.
=== Energy losses ===
Transformer energy losses are dominated by winding and core losses. Transformers' efficiency tends to improve with increasing transformer capacity. The efficiency of typical distribution transformers is between about 98 and 99 percent.
As transformer losses vary with load, it is often useful to tabulate no-load loss, full-load loss, half-load loss, and so on. Hysteresis and eddy current losses are constant at all load levels and dominate at no load, while winding loss increases as load increases. The no-load loss can be significant, so that even an idle transformer constitutes a drain on the electrical supply. Designing energy efficient transformers for lower loss requires a larger core, good-quality silicon steel, or even amorphous steel for the core and thicker wire, increasing initial cost. The choice of construction represents a trade-off between initial cost and operating cost.
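The load dependence described above can be illustrated with a minimal Python sketch that combines a constant core loss with a winding loss scaling as the square of the load fraction; the rating, loss figures, and power factor are illustrative values only.

```python
# Transformer efficiency versus load fraction, assuming core (no-load) loss is
# constant and winding (copper) loss scales with the square of the load.
# Rating, losses and power factor below are illustrative values.

S_rated = 50_000.0      # rated apparent power (VA)
P_core = 150.0          # core loss, constant at all loads (W)
P_cu_full = 600.0       # winding loss at full load (W)
power_factor = 0.9

for load_fraction in (0.25, 0.5, 0.75, 1.0):
    p_out = load_fraction * S_rated * power_factor
    p_loss = P_core + load_fraction**2 * P_cu_full
    efficiency = p_out / (p_out + p_loss)
    print(f"load {load_fraction:>4.0%}: losses = {p_loss:6.0f} W, efficiency = {efficiency:.3%}")
```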
Transformer losses arise from:
Winding joule losses
Current flowing through a winding's conductor causes joule heating due to the resistance of the wire. As frequency increases, skin effect and proximity effect cause the winding's resistance and, hence, losses to increase.
Core losses
Hysteresis losses
Each time the magnetic field is reversed, a small amount of energy is lost due to hysteresis within the core, caused by motion of the magnetic domains within the steel. According to Steinmetz's formula, the heat energy due to hysteresis is given by
{\displaystyle W_{\text{h}}\approx \eta \beta _{\text{max}}^{1.6}}
and,
hysteresis loss is thus given by
{\displaystyle P_{\text{h}}\approx {W}_{\text{h}}f\approx \eta {f}\beta _{\text{max}}^{1.6}}
where f is the frequency, η is the hysteresis coefficient and βmax is the maximum flux density; the empirical exponent varies from about 1.4 to 1.8 but is often given as 1.6 for iron. A small numerical illustration follows the list of loss mechanisms below. For more detailed analysis, see Magnetic core and Steinmetz's equation.
Eddy current losses
Eddy currents are induced in the conductive metal transformer core by the changing magnetic field, and this current flowing through the resistance of the iron dissipates energy as heat in the core. The eddy current loss is a complex function of the square of supply frequency and the square of the lamination thickness. Eddy current losses can be reduced by making the core of a stack of laminations (thin plates) electrically insulated from each other, rather than a solid block; all transformers operating at low frequencies use laminated or similar cores.
Magnetostriction related transformer hum
Magnetic flux in a ferromagnetic material, such as the core, causes it to physically expand and contract slightly with each cycle of the magnetic field, an effect known as magnetostriction, the frictional energy of which produces an audible noise known as mains hum or "transformer hum". This transformer hum is especially objectionable in transformers supplied at power frequencies and in high-frequency flyback transformers associated with television CRTs.
Stray losses
Leakage inductance is by itself largely lossless, since energy supplied to its magnetic fields is returned to the supply with the next half-cycle. However, any leakage flux that intercepts nearby conductive materials such as the transformer's support structure will give rise to eddy currents and be converted to heat.
Radiative
There are also radiative losses due to the oscillating magnetic field but these are usually small.
Mechanical vibration and audible noise transmission
In addition to magnetostriction, the alternating magnetic field causes fluctuating forces between the primary and secondary windings. This energy incites vibration transmission in interconnected metalwork, thus amplifying audible transformer hum.
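The following minimal Python sketch evaluates the Steinmetz-form hysteresis expressions given earlier in this list; the coefficient η, the exponent, and the operating values are illustrative placeholders, since real values depend on the core material and geometry.

```python
# Steinmetz-form hysteresis loss per cycle and per second:
#   W_h ~ eta * B_max**1.6      P_h ~ eta * f * B_max**1.6
# The Steinmetz coefficient eta and the other numbers below are illustrative
# placeholders; real values depend on the core material and its volume.

eta = 0.025     # Steinmetz hysteresis coefficient, illustrative placeholder
exponent = 1.6  # empirical exponent (roughly 1.4 to 1.8 for iron)
f = 50.0        # supply frequency (Hz)
B_max = 1.2     # maximum flux density (T)

W_h = eta * B_max ** exponent    # energy lost per cycle (illustrative units)
P_h = W_h * f                    # power lost to hysteresis (illustrative units)
print(f"W_h = {W_h:.4f} per cycle, P_h = {P_h:.3f} per second (illustrative units)")
```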
== Construction ==
=== Cores ===
Closed-core transformers are constructed in 'core form' or 'shell form'. When windings surround the core, the transformer is core form; when windings are surrounded by the core, the transformer is shell form. Shell form design may be more prevalent than core form design for distribution transformer applications due to the relative ease in stacking the core around winding coils. Core form design tends to, as a general rule, be more economical, and therefore more prevalent, than shell form design for high voltage power transformer applications at the lower end of their voltage and power rating ranges (less than or equal to, nominally, 230 kV or 75 MVA). At higher voltage and power ratings, shell form transformers tend to be more prevalent. Shell form design tends to be preferred for extra-high voltage and higher MVA applications because, though more labor-intensive to manufacture, shell form transformers are characterized as having inherently better kVA-to-weight ratio, better short-circuit strength characteristics and higher immunity to transit damage.
==== Laminated steel cores ====
Transformers for use at power or audio frequencies typically have cores made of high permeability silicon steel. The steel has a permeability many times that of free space and the core thus serves to greatly reduce the magnetizing current and confine the flux to a path which closely couples the windings. Early transformer developers soon realized that cores constructed from solid iron resulted in prohibitive eddy current losses, and their designs mitigated this effect with cores consisting of bundles of insulated iron wires. Later designs constructed the core by stacking layers of thin steel laminations, a principle that has remained in use. Each lamination is insulated from its neighbors by a thin non-conducting layer of insulation. The transformer universal EMF equation can be used to calculate the core cross-sectional area for a preferred level of magnetic flux.
The effect of laminations is to confine eddy currents to highly elliptical paths that enclose little flux, and so reduce their magnitude. Thinner laminations reduce losses, but are more laborious and expensive to construct. Thin laminations are generally used on high-frequency transformers, with some types of very thin steel laminations able to operate up to 10 kHz.
One common design of laminated core is made from interleaved stacks of E-shaped steel sheets capped with I-shaped pieces, leading to its name of E-I transformer. Such a design tends to exhibit more losses, but is very economical to manufacture. The cut-core or C-core type is made by winding a steel strip around a rectangular form and then bonding the layers together. It is then cut in two, forming two C shapes, and the core assembled by binding the two C halves together with a steel strap. They have the advantage that the flux is always oriented parallel to the metal grains, reducing reluctance.
A steel core's remanence means that it retains a static magnetic field when power is removed. When power is then reapplied, the residual field will cause a high inrush current until the effect of the remaining magnetism is reduced, usually after a few cycles of the applied AC waveform. Overcurrent protection devices such as fuses must be selected to allow this harmless inrush to pass.
On transformers connected to long, overhead power transmission lines, induced currents due to geomagnetic disturbances during solar storms can cause saturation of the core and operation of transformer protection devices.
Distribution transformers can achieve low no-load losses by using cores made with low-loss high-permeability silicon steel or amorphous (non-crystalline) metal alloy. The higher initial cost of the core material is offset over the life of the transformer by its lower losses at light load.
==== Solid cores ====
Powdered iron cores are used in circuits such as switch-mode power supplies that operate above mains frequencies and up to a few tens of kilohertz. These materials combine high magnetic permeability with high bulk electrical resistivity. For frequencies extending beyond the VHF band, cores made from non-conductive magnetic ceramic materials called ferrites are common. Some radio-frequency transformers also have movable cores (sometimes called 'slugs') which allow adjustment of the coupling coefficient (and bandwidth) of tuned radio-frequency circuits.
==== Toroidal cores ====
Toroidal transformers are built around a ring-shaped core, which, depending on operating frequency, is made from a long strip of silicon steel or permalloy wound into a coil, powdered iron, or ferrite. A strip construction ensures that the grain boundaries are optimally aligned, improving the transformer's efficiency by reducing the core's reluctance. The closed ring shape eliminates air gaps inherent in the construction of an E-I core. The cross-section of the ring is usually square or rectangular, but more expensive cores with circular cross-sections are also available. The primary and secondary coils are often wound concentrically to cover the entire surface of the core. This minimizes the length of wire needed and provides screening to minimize the core's magnetic field from generating electromagnetic interference.
Toroidal transformers are more efficient than the cheaper laminated E-I types for a similar power level. Other advantages compared to E-I types, include smaller size (about half), lower weight (about half), less mechanical hum (making them superior in audio amplifiers), lower exterior magnetic field (about one tenth), low off-load losses (making them more efficient in standby circuits), single-bolt mounting, and greater choice of shapes. The main disadvantages are higher cost and limited power capacity (see Classification parameters below). Because of the lack of a residual gap in the magnetic path, toroidal transformers also tend to exhibit higher inrush current, compared to laminated E-I types.
Ferrite toroidal cores are used at higher frequencies, typically from a few tens of kilohertz to hundreds of megahertz, to reduce losses, physical size, and weight of inductive components. A drawback of toroidal transformer construction is the higher labor cost of winding. This is because it is necessary to pass the entire length of a coil winding through the core aperture each time a single turn is added to the coil. As a consequence, toroidal transformers rated more than a few kVA are uncommon. Relatively few toroids are offered with power ratings above 10 kVA, and practically none above 25 kVA. Small distribution transformers may achieve some of the benefits of a toroidal core by splitting it and forcing it open, then inserting a bobbin containing primary and secondary windings.
==== Air cores ====
A transformer can be produced by placing the windings near each other, an arrangement termed an "air-core" transformer. An air-core transformer eliminates loss due to hysteresis in the core material. The magnetizing inductance is drastically reduced by the lack of a magnetic core, resulting in large magnetizing currents and losses if used at low frequencies. Air-core transformers are unsuitable for use in power distribution, but are frequently employed in radio-frequency applications. Air cores are also used for resonant transformers such as tesla coils, where they can achieve reasonably low loss despite the low magnetizing inductance.
=== Windings ===
The electrical conductor used for the windings depends upon the application, but in all cases the individual turns must be electrically insulated from each other to ensure that the current travels throughout every turn. For small transformers, in which currents are low and the potential difference between adjacent turns is small, the coils are often wound from enameled magnet wire. Larger power transformers may be wound with copper rectangular strip conductors insulated by oil-impregnated paper and blocks of pressboard.
High-frequency transformers operating in the tens to hundreds of kilohertz often have windings made of braided Litz wire to minimize the skin-effect and proximity effect losses. Large power transformers use multiple-stranded conductors as well, since even at low power frequencies non-uniform distribution of current would otherwise exist in high-current windings. Each strand is individually insulated, and the strands are arranged so that at certain points in the winding, or throughout the whole winding, each portion occupies different relative positions in the complete conductor. The transposition equalizes the current flowing in each strand of the conductor, and reduces eddy current losses in the winding itself. The stranded conductor is also more flexible than a solid conductor of similar size, aiding manufacture.
The windings of signal transformers minimize leakage inductance and stray capacitance to improve high-frequency response. Coils are split into sections, and those sections interleaved between the sections of the other winding.
Power-frequency transformers may have taps at intermediate points on the winding, usually on the higher voltage winding side, for voltage adjustment. Taps may be manually reconnected, or a manual or automatic switch may be provided for changing taps. Automatic on-load tap changers are used in electric power transmission or distribution, on equipment such as arc furnace transformers, or for automatic voltage regulators for sensitive loads. Audio-frequency transformers, used for the distribution of audio to public address loudspeakers, have taps to allow adjustment of impedance to each speaker. A center-tapped transformer is often used in the output stage of an audio power amplifier in a push-pull circuit. Modulation transformers in AM transmitters are very similar.
=== Cooling ===
It is a rule of thumb that the life expectancy of electrical insulation is halved for about every 7 °C to 10 °C increase in operating temperature (an instance of the application of the Arrhenius equation).
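As a toy illustration of that rule of thumb, the sketch below scales a reference insulation life by a factor of one half for each halving interval above the reference temperature; the 8 °C interval, 110 °C reference temperature, and 180,000-hour reference life are assumed example values, not figures from this article.
def insulation_life_hours(temp_c, ref_temp_c=110.0, ref_life_h=180000.0, halving_c=8.0):
    # Life is halved for every `halving_c` degrees above the reference temperature.
    return ref_life_h * 0.5 ** ((temp_c - ref_temp_c) / halving_c)
print(insulation_life_hours(110.0))  # 180000.0 hours at the reference temperature
print(insulation_life_hours(118.0))  # 90000.0 hours, one halving interval hotter
print(insulation_life_hours(102.0))  # 360000.0 hours, one interval cooler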
Small dry-type and liquid-immersed transformers are often self-cooled by natural convection and radiation heat dissipation. As power ratings increase, transformers are often cooled by forced-air cooling, forced-oil cooling, water-cooling, or combinations of these. Large transformers are filled with transformer oil that both cools and insulates the windings. Transformer oil is often a highly refined mineral oil that cools the windings and insulation by circulating within the transformer tank. The mineral oil and paper insulation system has been extensively studied and used for more than 100 years. It is estimated that 50% of power transformers will survive 50 years of use, that the average age of failure of power transformers is about 10 to 15 years, and that about 30% of power transformer failures are due to insulation and overloading failures. Prolonged operation at elevated temperature degrades insulating properties of winding insulation and dielectric coolant, which not only shortens transformer life but can ultimately lead to catastrophic transformer failure. With a great body of empirical study as a guide, transformer oil testing including dissolved gas analysis provides valuable maintenance information.
Building regulations in many jurisdictions require indoor liquid-filled transformers to either use dielectric fluids that are less flammable than oil, or be installed in fire-resistant rooms. Air-cooled dry transformers can be more economical where they eliminate the cost of a fire-resistant transformer room.
The tank of liquid-filled transformers often has radiators through which the liquid coolant circulates by natural convection or fins. Some large transformers employ electric fans for forced-air cooling, pumps for forced-liquid cooling, or have heat exchangers for water-cooling. An oil-immersed transformer may be equipped with a Buchholz relay, which, depending on severity of gas accumulation due to internal arcing, is used to either trigger an alarm or de-energize the transformer. Oil-immersed transformer installations usually include fire protection measures such as walls, oil containment, and fire-suppression sprinkler systems.
Polychlorinated biphenyls (PCBs) have properties that once favored their use as a dielectric coolant, though concerns over their environmental persistence led to a widespread ban on their use.
Today, non-toxic, stable silicone-based oils, or fluorinated hydrocarbons may be used where the expense of a fire-resistant liquid offsets additional building cost for a transformer vault. However, the long life span of transformers can mean that the potential for exposure can be high long after banning.
Some transformers are gas-insulated. Their windings are enclosed in sealed, pressurized tanks and often cooled by nitrogen or sulfur hexafluoride gas.
Experimental power transformers in the 500–1,000 kVA range have been built with liquid nitrogen or helium cooled superconducting windings, which eliminates winding losses without affecting core losses.
=== Insulation ===
Insulation must be provided between the individual turns of the windings, between the windings, between windings and core, and at the terminals of the winding.
Inter-turn insulation of small transformers may be a layer of insulating varnish on the wire. Layers of paper or polymer film may be inserted between layers of windings, and between primary and secondary windings. A transformer may be coated or dipped in a polymer resin to improve the strength of windings and protect them from moisture or corrosion. The resin may be impregnated into the winding insulation using combinations of vacuum and pressure during the coating process, eliminating all air voids in the winding. In the limit, the entire coil may be placed in a mold, and resin cast around it as a solid block, encapsulating the windings.
Large oil-filled power transformers use windings wrapped with insulating paper, which is impregnated with oil during assembly of the transformer. Oil-filled transformers use highly refined mineral oil to insulate and cool the windings and core.
Construction of oil-filled transformers requires that the insulation covering the windings be thoroughly dried of residual moisture before the oil is introduced. Drying may be done by circulating hot air around the core, by circulating externally heated transformer oil, or by vapor-phase drying (VPD) where an evaporated solvent transfers heat by condensation on the coil and core. For small transformers, resistance heating by injection of current into the windings is used.
=== Bushings ===
Larger transformers are provided with high-voltage insulated bushings made of polymers or porcelain. A large bushing can be a complex structure since it must provide careful control of the electric field gradient without letting the transformer leak oil.
== Classification parameters ==
Transformers can be classified in many ways, such as the following:
Power rating: From a fraction of a volt-ampere (VA) to over a thousand MVA.
Duty of a transformer: Continuous, short-time, intermittent, periodic, varying.
Frequency range: Power-frequency, audio-frequency, or radio-frequency.
Voltage class: From a few volts to hundreds of kilovolts.
Cooling type: Dry or liquid-immersed; self-cooled, forced air-cooled, forced oil-cooled, water-cooled.
Application: power supply, impedance matching, output voltage and current stabilizer, pulse, circuit isolation, power distribution, rectifier, arc furnace, amplifier output, etc.
Basic magnetic form: Core form, shell form, concentric, sandwich.
Constant-potential transformer descriptor: Step-up, step-down, isolation.
General winding configuration: By IEC vector group, two-winding combinations of the phase designations delta, wye or star, and zigzag; autotransformer, Scott-T.
Rectifier phase-shift winding configuration: 2-winding, 6-pulse; 3-winding, 12-pulse; . . ., n-winding, [n − 1]·6-pulse; polygon; etc.
K-factor: A measure of how well the transformer can withstand harmonic loads.
== Applications ==
Various specific electrical application designs require a variety of transformer types. Although they all share the basic characteristic transformer principles, they are customized in construction or electrical properties for certain installation requirements or circuit conditions.
In electric power transmission, transformers allow transmission of electric power at high voltages, which reduces the loss due to heating of the wires. This allows generating plants to be located economically at a distance from electrical consumers. All but a tiny fraction of the world's electrical power has passed through a series of transformers by the time it reaches the consumer.
In many electronic devices, a transformer is used to convert voltage from the distribution wiring to convenient values for the circuit requirements, either directly at the power line frequency or through a switch mode power supply.
Signal and audio transformers are used to couple stages of amplifiers and to match devices such as microphones and record players to the input of amplifiers. Audio transformers allowed telephone circuits to carry on a two-way conversation over a single pair of wires. A balun transformer converts a signal that is referenced to ground to a signal that has balanced voltages to ground, such as between external cables and internal circuits. Isolation transformers prevent leakage of current into the secondary circuit and are used in medical equipment and at construction sites. Resonant transformers are used for coupling between stages of radio receivers, or in high-voltage Tesla coils.
== History ==
=== Discovery of induction ===
Electromagnetic induction, the principle of the operation of the transformer, was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Only Faraday furthered his experiments to the point of working out the equation describing the relationship between EMF and magnetic flux now known as Faraday's law of induction:
$$|\mathcal{E}|=\left|\frac{\mathrm{d}\Phi_{\text{B}}}{\mathrm{d}t}\right|,$$
where $|\mathcal{E}|$ is the magnitude of the EMF in volts and $\Phi_{\text{B}}$ is the magnetic flux through the circuit in webers.
Faraday performed early experiments on induction between coils of wire, including winding a pair of coils around an iron ring, thus creating the first toroidal closed-core transformer. However, he only applied individual pulses of current to his transformer, and never discovered the relation between the turns ratio and EMF in the windings.
=== Induction coils ===
The first type of transformer to see wide use was the induction coil, invented by Irish-Catholic Rev. Nicholas Callan of Maynooth College, Ireland in 1836. He was one of the first researchers to realize the more turns the secondary winding has in relation to the primary winding, the larger the induced secondary EMF will be. Induction coils evolved from scientists' and inventors' efforts to get higher voltages from batteries. Since batteries produce direct current (DC) rather than AC, induction coils relied upon vibrating electrical contacts that regularly interrupted the current in the primary to create the flux changes necessary for induction. Between the 1830s and the 1870s, efforts to build better induction coils, mostly by trial and error, slowly revealed the basic principles of transformers.
=== First alternating current transformers ===
By the 1870s, efficient generators producing alternating current (AC) were available, and it was found AC could power an induction coil directly, without an interrupter.
In 1876, Russian engineer Pavel Yablochkov invented a lighting system based on a set of induction coils where the primary windings were connected to a source of AC. The secondary windings could be connected to several 'electric candles' (arc lamps) of his own design. The coils Yablochkov employed functioned essentially as transformers.
In 1878, the Ganz factory, Budapest, Hungary, began producing equipment for electric lighting and, by 1883, had installed over fifty systems in Austria-Hungary. Their AC systems used arc and incandescent lamps, generators, and other equipment.
In 1882, Lucien Gaulard and John Dixon Gibbs first exhibited a device with an initially widely criticized laminated plate open iron core called a 'secondary generator' in London, then sold the idea to the Westinghouse company in the United States in 1886. They also exhibited the invention in Turin, Italy in 1884, where it was highly successful and adopted for an electric lighting system. Their open-core device used a fixed 1:1 ratio to supply a series circuit for the utilization load (lamps). However, the voltage of their system was controlled by moving the iron core in or out.
==== Early series circuit transformer distribution ====
Induction coils with open magnetic circuits are inefficient at transferring power to loads. Until about 1880, the paradigm for AC power transmission from a high voltage supply to a low voltage load was a series circuit. Open-core transformers with a ratio near 1:1 were connected with their primaries in series to allow use of a high voltage for transmission while presenting a low voltage to the lamps. The inherent flaw in this method was that turning off a single lamp (or other electric device) affected the voltage supplied to all others on the same circuit. Many adjustable transformer designs were introduced to compensate for this problematic characteristic of the series circuit, including those employing methods of adjusting the core or bypassing the magnetic flux around part of a coil.
Efficient, practical transformer designs did not appear until the 1880s, but within a decade, the transformer would be instrumental in the war of the currents, and in seeing AC distribution systems triumph over their DC counterparts, a position in which they have remained dominant ever since.
=== Closed-core transformers and parallel power distribution ===
In the autumn of 1884, Károly Zipernowsky, Ottó Bláthy and Miksa Déri (ZBD), three Hungarian engineers associated with the Ganz Works, had determined that open-core devices were impracticable, as they were incapable of reliably regulating voltage. The Ganz factory had also in the autumn of 1884 made delivery of the world's first five high-efficiency AC transformers, the first of these units having been shipped on September 16, 1884. This first unit had been manufactured to the following specifications: 1,400 W, 40 Hz, 120:72 V, 11.6:19.4 A, ratio 1.67:1, one-phase, shell form. In their joint 1885 patent applications for novel transformers (later called ZBD transformers), they described two designs with closed magnetic circuits where copper windings were either wound around an iron wire ring core or surrounded by an iron wire core. The two designs were the first application of the two basic transformer constructions in common use to this day, termed "core form" or "shell form".
In both designs, the magnetic flux linking the primary and secondary windings traveled almost entirely within the confines of the iron core, with no intentional path through air (see Toroidal cores above). The new transformers were 3.4 times more efficient than the open-core bipolar devices of Gaulard and Gibbs. The ZBD patents included two other major interrelated innovations: one concerning the use of parallel connected, instead of series connected, utilization loads, the other concerning the ability to have high turns ratio transformers such that the supply network voltage could be much higher (initially 1,400 to 2,000 V) than the voltage of utilization loads (100 V initially preferred). When employed in parallel connected electric distribution systems, closed-core transformers finally made it technically and economically feasible to provide electric power for lighting in homes, businesses and public spaces. Bláthy had suggested the use of closed cores, Zipernowsky had suggested the use of parallel shunt connections, and Déri had performed the experiments. In early 1885, the three engineers also eliminated the problem of eddy current losses with the invention of the lamination of electromagnetic cores.
Transformers today are designed on the principles discovered by the three engineers. They also popularized the word 'transformer' to describe a device for altering the EMF of an electric current, although the term had already been in use by 1882. In 1886, the ZBD engineers designed, and the Ganz factory supplied electrical equipment for, the world's first power station that used AC generators to power a parallel connected common electrical network, the steam-powered Rome-Cerchi power plant.
=== Westinghouse improvements ===
Building on the advancement of AC technology in Europe, George Westinghouse founded the Westinghouse Electric Company in Pittsburgh, Pennsylvania, on January 8, 1886. The new firm became active in developing alternating current (AC) electric infrastructure throughout the United States.
The Edison Electric Light Company held an option on the US rights for the ZBD transformers, requiring Westinghouse to pursue alternative designs on the same principles. George Westinghouse had bought Gaulard and Gibbs' patents for $50,000 in February 1886. He assigned to William Stanley the task of redesigning the Gaulard and Gibbs transformer for commercial use in the United States. Stanley's first patented design was for induction coils with single cores of soft iron and adjustable gaps to regulate the EMF present in the secondary winding. This design was first used commercially in the US in 1886, but Westinghouse was intent on improving the Stanley design to make it (unlike the ZBD type) easy and cheap to produce.
Westinghouse, Stanley and associates soon developed a core that was easier to manufacture, consisting of a stack of thin 'E‑shaped' iron plates insulated by thin sheets of paper or other insulating material. Pre-wound copper coils could then be slid into place, and straight iron plates laid in to create a closed magnetic circuit. Westinghouse obtained a patent for the new low-cost design in 1887.
=== Other early transformer designs ===
In 1889, Russian-born engineer Mikhail Dolivo-Dobrovolsky developed the first three-phase transformer at the Allgemeine Elektricitäts-Gesellschaft ('General Electricity Company') in Germany.
In 1891, Nikola Tesla invented the Tesla coil, an air-cored, dual-tuned resonant transformer for producing very high voltages at high frequency.
Audio frequency transformers ("repeating coils") were used by early experimenters in the development of the telephone.
== See also ==
== Notes ==
== References ==
== Bibliography ==
Beeman, Donald, ed. (1955). Industrial Power Systems Handbook. McGraw-Hill.
Calvert, James (2001). "Inside Transformers". University of Denver. Archived from the original on May 9, 2007. Retrieved May 19, 2007.
Coltman, J. W. (Jan 1988). "The Transformer". Scientific American. 258 (1): 86–95. Bibcode:1988SciAm.258a..86C. doi:10.1038/scientificamerican0188-86. OSTI 6851152.
Coltman, J. W. (Jan–Feb 2002). "History: The Transformer". IEEE Industry Applications Magazine. 8 (1): 8–15. doi:10.1109/2943.974352. S2CID 18160717.
Brenner, Egon; Javid, Mansour (1959). "Chapter 18–Circuits with Magnetic Coupling". Analysis of Electric Circuits. McGraw-Hill. pp. 586–622.
CEGB, (Central Electricity Generating Board) (1982). Modern Power Station Practice. Pergamon. ISBN 978-0-08-016436-6.
Crosby, D. (1958). "The Ideal Transformer". IRE Transactions on Circuit Theory. 5 (2): 145. doi:10.1109/TCT.1958.1086447.
Daniels, A. R. (1985). Introduction to Electrical Machines. Macmillan. ISBN 978-0-333-19627-4.
De Keulenaer, Hans; Chapman, David; Fassbinder, Stefan; McDermott, Mike (2001). The Scope for Energy Saving in the EU through the Use of Energy-Efficient Electricity Distribution Transformers (PDF). 16th International Conference and Exhibition on Electricity Distribution (CIRED 2001). Institution of Engineering and Technology. doi:10.1049/cp:20010853. Archived from the original (PDF) on 4 March 2016. Retrieved 10 July 2014.
Del Vecchio, Robert M.; Poulin, Bertrand; Feghali, Pierre T.M.; Shah, Dilipkumar; Ahuja, Rajendra (2002). Transformer Design Principles: With Applications to Core-Form Power Transformers. Boca Raton: CRC Press. ISBN 978-90-5699-703-8.
Fink, Donald G.; Beatty, H. Wayne, eds. (1978). Standard Handbook for Electrical Engineers (11th ed.). McGraw Hill. ISBN 978-0-07-020974-9.
Gottlieb, Irving (1998). Practical Transformer Handbook: for Electronics, Radio and Communications Engineers. Elsevier. ISBN 978-0-7506-3992-7.
Guarnieri, M. (2013). "Who Invented the Transformer?". IEEE Industrial Electronics Magazine. 7 (4): 56–59. doi:10.1109/MIE.2013.2283834. S2CID 27936000.
Halacsy, A. A.; Von Fuchs, G. H. (April 1961). "Transformer Invented 75 Years Ago". Transactions of the American Institute of Electrical Engineers. Part III: Power Apparatus and Systems. 80 (3): 121–125. doi:10.1109/AIEEPAS.1961.4500994. S2CID 51632693.
Hameyer, Kay (2004). Electrical Machines I: Basics, Design, Function, Operation (PDF). RWTH Aachen University Institute of Electrical Machines. Archived from the original (PDF) on 2013-02-10.
Hammond, John Winthrop (1941). Men and Volts: The Story of General Electric. J. B. Lippincott Company. pp. see esp. 106–107, 178, 238.
Harlow, James (2004). Electric Power Transformer Engineering (PDF). CRC Press. ISBN 0-8493-1704-5.
Hughes, Thomas P. (1993). Networks of Power: Electrification in Western Society, 1880-1930. Baltimore: The Johns Hopkins University Press. p. 96. ISBN 978-0-8018-2873-7. Retrieved Sep 9, 2009.
Heathcote, Martin (1998). J & P Transformer Book (12th ed.). Newnes. ISBN 978-0-7506-1158-9.
Hindmarsh, John (1977). Electrical Machines and Their Applications (4th ed.). Exeter: Pergamon. ISBN 978-0-08-030573-8.
Kothari, D.P.; Nagrath, I.J. (2010). Electric Machines (4th ed.). Tata McGraw-Hill. ISBN 978-0-07-069967-0.
Kulkarni, S. V.; Khaparde, S. A. (2004). Transformer Engineering: Design and Practice. CRC Press. ISBN 978-0-8247-5653-6.
McLaren, Peter (1984). Elementary Electric Power and Machines. Ellis Horwood. ISBN 978-0-470-20057-5.
McLyman, Colonel William (2004). "Chapter 3". Transformer and Inductor Design Handbook. CRC. ISBN 0-8247-5393-3.
Pansini, Anthony (1999). Electrical Transformers and Power Equipment. CRC Press. ISBN 978-0-88173-311-2.
Parker, M. R; Ula, S.; Webb, W. E. (2005). "§2.5.5 'Transformers' & §10.1.3 'The Ideal Transformer'". In Whitaker, Jerry C. (ed.). The Electronics Handbook (2nd ed.). Taylor & Francis. pp. 172, 1017. ISBN 0-8493-1889-0.
Ryan, H. M. (2004). High Voltage Engineering and Testing. CRC Press. ISBN 978-0-85296-775-1.
== External links ==
General links: | Wikipedia/Air-core_transformer |
A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials.
Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such as number theory, group theory, or teaching of elementary mathematics.
General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features such as:
a user interface allowing a user to enter and display mathematical formulas, typically from a keyboard, menu selections, mouse or stylus.
a programming language and an interpreter (the result of a computation commonly has an unpredictable form and an unpredictable size; therefore user intervention is frequently needed),
a simplifier, which is a rewrite system for simplifying mathematics formulas,
a memory manager, including a garbage collector, needed because of the huge size of the intermediate data that may appear during a computation,
arbitrary-precision arithmetic, needed because of the huge size of the integers that may occur,
a large library of mathematical algorithms and special functions.
The library must not only provide for the needs of the users, but also the needs of the simplifier. For example, the computation of polynomial greatest common divisors is systematically used for the simplification of expressions involving fractions.
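For instance, a minimal sketch of this use of polynomial GCDs, written here with the Python library SymPy (chosen purely for illustration; the article does not single out any particular system):
from sympy import symbols, gcd, cancel
x = symbols('x')
num = x**2 - 1
den = x**2 + 2*x + 1
print(gcd(num, den))      # x + 1, the common factor found by the GCD routine
print(cancel(num / den))  # (x - 1)/(x + 1), the simplified fraction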
This large set of required capabilities explains the small number of general-purpose computer algebra systems. Significant systems include Axiom, GAP, Maxima, Magma, Maple, Mathematica, and SageMath.
== History ==
In the 1950s, while computers were mainly used for numerical computations, there were some research projects into using them for symbolic manipulation. Computer algebra systems began to appear in the 1960s and evolved out of two quite different sources—the requirements of theoretical physicists and research into artificial intelligence.
A prime example for the first development was the pioneering work conducted by the later Nobel Prize laureate in physics Martinus Veltman, who designed a program for symbolic mathematics, especially high-energy physics, called Schoonschip (Dutch for "clean ship") in 1963. Other early systems include FORMAC.
Using Lisp as the programming basis, Carl Engelman created MATHLAB in 1964 at MITRE within an artificial-intelligence research environment. Later MATHLAB was made available to users on PDP-6 and PDP-10 systems running TOPS-10 or TENEX in universities. Today it can still be used on SIMH emulations of the PDP-10. MATHLAB ("mathematical laboratory") should not be confused with MATLAB ("matrix laboratory"), which is a system for numerical computation built 15 years later at the University of New Mexico.
In 1987, Hewlett-Packard introduced the first hand-held calculator with a CAS, the HP-28 series. Other early handheld calculators with symbolic algebra capabilities included the Texas Instruments TI-89 series and TI-92 calculator, and the Casio CFX-9970G.
The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma; a copyleft version of Macsyma is called Maxima. Reduce became free software in 2008. Commercial systems include Mathematica and Maple, which are commonly used by research mathematicians, scientists, and engineers. Freely available alternatives include SageMath (which can act as a front-end to several other free and nonfree CAS). Other significant systems include Axiom, GAP, Maxima and Magma.
The movement to web-based applications in the early 2000s saw the release of WolframAlpha, an online search engine and CAS which includes the capabilities of Mathematica.
More recently, computer algebra systems have been implemented using artificial neural networks, though as of 2020 they are not commercially available.
== Symbolic manipulations ==
The symbolic manipulations supported typically include:
simplification to a smaller expression or some standard form, including automatic simplification with assumptions and simplification with constraints
substitution of symbols or numeric values for certain expressions
change of form of expressions: expanding products and powers, partial and full factorization, rewriting as partial fractions, constraint satisfaction, rewriting trigonometric functions as exponentials, transforming logic expressions, etc.
partial and total differentiation
some indefinite and definite integration (see symbolic integration), including multidimensional integrals
symbolic constrained and unconstrained global optimization
solution of linear and some non-linear equations over various domains
solution of some differential and difference equations
taking some limits
integral transforms
series operations such as expansion, summation and products
matrix operations including products, inverses, etc.
statistical computation
theorem proving and verification which is very useful in the area of experimental mathematics
optimized code generation
In the above, the word some indicates that the operation cannot always be performed.
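A short sketch of a few of these manipulations, again using the Python library SymPy as an illustrative stand-in for a general-purpose CAS (the specific expressions are arbitrary examples):
from sympy import symbols, diff, integrate, limit, series, sin
x = symbols('x')
print(diff(sin(x)**2, x))          # 2*sin(x)*cos(x)   (differentiation)
print(integrate(1/(1 + x**2), x))  # atan(x)           (indefinite integration)
print(limit(sin(x)/x, x, 0))       # 1                 (taking a limit)
print(series(sin(x), x, 0, 6))     # x - x**3/6 + x**5/120 + O(x**6)  (series expansion)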
== Additional capabilities ==
Many also include:
a programming language, allowing users to implement their own algorithms
arbitrary-precision numeric operations
exact integer arithmetic and number theory functionality
editing of mathematical expressions in two-dimensional form
plotting graphs and parametric plots of functions in two and three dimensions, and animating them
drawing charts and diagrams
APIs for linking to an external program, such as a database, or for using the computer algebra system from within a programming language
string manipulation such as matching and searching
add-ons for use in applied mathematics such as physics, bioinformatics, computational chemistry and packages for physical computation
solvers for differential equations
Some include:
graphics production and editing, such as computer-generated imagery and signal processing (including image processing)
sound synthesis
Some computer algebra systems focus on specialized disciplines; these are typically developed in academia and are free. They can be inefficient for numeric operations as compared to numeric systems.
== Types of expressions ==
The expressions manipulated by the CAS typically include polynomials in multiple variables; standard functions of expressions (sine, exponential, etc.); various special functions (Γ, ζ, erf, Bessel functions, etc.); arbitrary functions of expressions; optimization; derivatives, integrals, simplifications, sums, and products of expressions; truncated series with expressions as coefficients, matrices of expressions, and so on. Numeric domains supported typically include floating-point representation of real numbers, integers (of unbounded size), complex (floating-point representation), interval representation of reals, rational number (exact representation) and algebraic numbers.
== Use in education ==
There have been many advocates for increasing the use of computer algebra systems in primary and secondary-school classrooms. The primary reason for such advocacy is that computer algebra systems represent real-world mathematics more closely than paper-and-pencil or hand-calculator-based mathematics does.
This push for increasing computer usage in mathematics classrooms has been supported by some boards of education. It has even been mandated in the curriculum of some regions.
Computer algebra systems have been extensively used in higher education. Many universities offer either specific courses on developing their use, or they implicitly expect students to use them for their course work. The companies that develop computer algebra systems have pushed to increase their prevalence among university and college programs.
CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms, though they may be permitted on all of the College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests, and the AP Calculus, Chemistry, Physics, and Statistics exams.
== Mathematics used in computer algebra systems ==
Knuth–Bendix completion algorithm
Root-finding algorithms
Symbolic integration via e.g. Risch algorithm or Risch–Norman algorithm
Hypergeometric summation via e.g. Gosper's algorithm
Limit computation via e.g. Gruntz's algorithm
Polynomial factorization via e.g., over finite fields, Berlekamp's algorithm or Cantor–Zassenhaus algorithm.
Greatest common divisor via e.g. Euclidean algorithm
Gaussian elimination
Gröbner basis via e.g. Buchberger's algorithm; generalization of Euclidean algorithm and Gaussian elimination
Padé approximant
Schwartz–Zippel lemma and testing polynomial identities
Chinese remainder theorem
Diophantine equations
Landau's algorithm (nested radicals)
Derivatives of elementary functions and special functions. (e.g. See derivatives of the incomplete gamma function.)
Cylindrical algebraic decomposition
Quantifier elimination over real numbers via cylindrical algebraic decomposition
== See also ==
List of computer algebra systems
Scientific computation
Statistical package
Automated theorem proving
Algebraic modeling language
Constraint-logic programming
Satisfiability modulo theories
== References ==
== External links ==
Curriculum and Assessment in an Age of Computer Algebra Systems Archived 2009-12-01 at the Wayback Machine - From the Education Resources Information Center Clearinghouse for Science, Mathematics, and Environmental Education, Columbus, Ohio.
Richard J. Fateman. "Essays in algebraic simplification." Technical report MIT-LCS-TR-095, 1972. (Of historical interest in showing the direction of research in computer algebra. At the MIT LCS website: [1]) | Wikipedia/Computer_algebra_systems |
FORM is a symbolic manipulation system. It reads text files containing definitions of mathematical expressions as well as statements that tell it how to manipulate these expressions. Its original author is Jos Vermaseren of Nikhef, the Dutch institute for subatomic physics.
It is widely used in the theoretical particle physics community, but it is not restricted to applications in this specific field.
== Features ==
Definition of mathematical expressions containing various objects (symbols, functions, indices, ...) with elementary arithmetic operations
Arbitrary long mathematical expressions (limited only by disk space)
Multi-threaded execution, parallelized version for computer clusters
Powerful pattern matching and replacing
Fast trace calculation especially of gamma matrices
Built-in mathematical functions
Output into various formats (plain text, Fortran code, Mathematica code)
External communication with other software programs
== Example usage ==
A text file containing
Symbol x,y;
Local myexpr = (x+y)^3;
Id y = x;
Print;
.end
would tell FORM to create an expression named myexpr, replace therein the symbol y by x, and print the result on the screen. The result would be given like
myexpr =
8*x^3;
== History ==
FORM was started in 1984 as a successor to Schoonschip, an algebra engine developed by M. Veltman. It was initially coded in FORTRAN 77, but rewritten in C before the release of version 1.0 in 1989.
Version 2.0 was released in 1991, and version 3.0 in 2000. FORM was made open source on August 27, 2010, under the GPL license.
== Applications in high-energy physics and other fields ==
Mincer: A software package using FORM to compute massless propagator diagrams with up to three loops.
FORM has been an essential tool in calculating the higher-order QCD beta function.
The mathematical structure of multiple zeta values has been researched with dedicated FORM programs.
The software package FormCalc which is widely used in the physics community to calculate Feynman diagrams is built on top of FORM.
== References ==
== External links ==
Official website
The FORM online manual
Debian — Details of package form
Linux packages: ArchLinux, Debian, Gentoo, Ubuntu
Hippel, Matt von (2022-12-01). "Crucial Computer Program for Particle Physics at Risk of Obsolescence". Quanta Magazine. | Wikipedia/Form_(computer_algebra_system) |
E-Science or eScience is computationally intensive science that is carried out in highly distributed network environments, or science that uses immense data sets that require grid computing; the term sometimes includes technologies that enable distributed collaboration, such as the Access Grid. The term was created by John Taylor, the Director General of the United Kingdom's Office of Science and Technology in 1999 and was used to describe a large funding initiative starting in November 2000. E-science has been more broadly interpreted since then, as "the application of computer technology to the undertaking of modern scientific investigation, including the preparation, experimentation, data collection, results dissemination, and long-term storage and accessibility of all materials generated through the scientific process. These may include data modeling and analysis, electronic/digitized laboratory notebooks, raw and fitted data sets, manuscript production and draft versions, pre-prints, and print and/or electronic publications." In 2014, IEEE eScience Conference Series condensed the definition to "eScience promotes innovation in collaborative, computationally- or data-intensive research across all disciplines, throughout the research lifecycle" in one of the working definitions used by the organizers. E-science encompasses "what is often referred to as big data [which] has revolutionized science... [such as] the Large Hadron Collider (LHC) at CERN... [that] generates around 780 terabytes per year... highly data intensive modern fields of science...that generate large amounts of E-science data include: computational biology, bioinformatics, genomics" and the human digital footprint for the social sciences.
Turing Award winner Jim Gray imagined "data-intensive science" or "e-science" as a "fourth paradigm" of science (empirical, theoretical, computational and now data-driven) and asserted that "everything about science is changing because of the impact of information technology" and the data deluge.
E-Science revolutionizes both fundamental legs of the scientific method: empirical research, especially through digital big data; and scientific theory, especially through computer simulation model building. These ideas were reflected by The White House's Office and Science Technology Policy in February 2013, which slated many of the aforementioned e-Science output products for preservation and access requirements under the memorandum's directive. E-sciences include particle physics, earth sciences and social simulations.
== Characteristics and examples ==
Most of the research activities into e-Science have focused on the development of new computational tools and infrastructures to support scientific discovery. Due to the complexity of the software and the backend infrastructural requirements, e-Science projects usually involve large teams managed and developed by research laboratories, large universities or governments. Currently there is a large focus in e-Science in the United Kingdom, where the UK e-Science programme provides significant funding. In Europe the development of computing capabilities to support the CERN Large Hadron Collider has led to the development of e-Science and Grid infrastructures which are also used by other disciplines.
=== Consortiums ===
Example e-Science infrastructures include the
Worldwide LHC Computing Grid,
a federation with various partners including the
European Grid Infrastructure, the Open Science Grid and the
Nordic DataGrid Facility.
To support e-Science applications, Open Science Grid combines interfaces to more than 100 nationwide clusters, 50 interfaces to geographically distributed storage caches, and 8 campus grids (Purdue, Wisconsin-Madison, Clemson, Nebraska-Lincoln, FermiGrid at FNAL, SUNY-Buffalo, and Oklahoma in the United States; and UNESP in Brazil). Areas of science benefiting from Open Science Grid include:
astrophysics, gravitational physics, high-energy physics, neutrino physics, nuclear physics
molecular dynamics, materials science, materials engineering, computer science, computer engineering, nanotechnology
structural biology, computational biology, genomics, proteomics, medicine
=== UK programme ===
After his appointment as Director General of the Research Councils in 1999 John Taylor, with the support of the Science Minister David Sainsbury and the Chancellor of the Exchequer Gordon Brown, bid to HM Treasury to fund a programme of e-infrastructure development for science which would provide the foundation for UK science and industry to be a world leader in the knowledge economy which motivated the Lisbon Strategy for sustainable economic growth that the UK government committed to in March 2000.
In November 2000 John Taylor announced £98 million for a national UK e-Science programme. An additional £20 million contribution was planned from UK industry in matching funds to projects that they participated in. From this budget of £120 million over three years, £75 million was to be spent on grid application pilots in all areas of science, administered by the Research Council responsible for each area, while £35 million was to be administered by the EPSRC as a Core Programme to develop "industrial strength" Grid middleware. Phase 2 of the programme for 2004-2006 was supported by a further £96 million for application projects, and £27 million for the EPSRC core programme. Phase 3 of the programme for 2007-2009 was supported by a further £14 million for the EPSRC core programme and a further sum for applications. Additional funding for UK e-Science activities was provided from European Union funding, from university funding council SRIF funding for hardware, and from Jisc for networking and other infrastructure.
The UK e-Science programme comprised a wide range of resources, centres and people including the National e-Science Centre (NeSC) which is managed by the Universities of Glasgow and Edinburgh, with facilities in both cities.
Tony Hey led the core programme from 2001 to 2005.
Within the UK regional e-Science centres support their local universities and projects, including:
White Rose Grid e-Science Centre (WRGeSC)
Belfast e-Science Centre (BeSC)
Centre for eResearch Bristol (CeRB)
Cambridge e-Science Centre (CeSC)
STFC e-Science Centre (STFCeSC)
e-Science North West (eSNW)
National Grid Service (NGS)
OMII-UK
Lancaster University Centre for e-Science
London e-Science Centre (LeSC)
North East Regional e-Science Centre (NEReSC)
Oxford e-Science Centre (OeSC)
Southampton e-Science Centre Archived 2005-03-08 at the Wayback Machine (SeSC)
Welsh e-Science Centre Archived 2005-03-24 at the Wayback Machine (WeSC)
Midlands e-Science Centre (MeSC)
There are also various centres of excellence and research centres.
In addition to centres, the grid application pilot projects were funded by the Research Council responsible for each area of UK science funding.
The EPSRC funded 11 pilot e-Science projects in three phases (for about £3 million each in the first phase):
First Phase (2001–2005) were CombEchem, DAME, Discovery Net, GEODISE, myGrid and RealityGrid.
Second phase (2004–2008) were GOLD and Integrative biology
Third phase (2005–2010) were PMSEG (MESSAGE), CARMEN and NanoCMOS
The PPARC/STFC funded two projects: GridPP (phase 1 for £17 million, phase 2 for £5.9 million, phase 3 for £30 million and a 4th phase running from 2011 to 2014) and Astrogrid (£14 million over 3 phases).
The remaining £23 million of phase one funding was divided between the application projects funded by BBSRC, MRC and NERC:
BBSRC: Biomolecular Grid, Proteome Annotation Pipeline, High-Throughput Structural Biology, Global Biodiversity
MRC: Biology of Ageing, Sequence and Structure Data, Molecular Genetics, Cancer Management, Clinical e-Science Framework, Neuroinformatics Modeling Tools
NERC: Climateprediction.net, Oceanographic Grid, Molecular Environmental Grid, NERC DataGrid
The funded UK e-Science programme was reviewed on its completion in 2009 by an international panel led by Daniel E. Atkins, director of the Office of Cyberinfrastructure of the US NSF. The report concluded that the programme had developed a skilled pool of expertise, some services, and had led to cooperation between academia and industry, but that these achievements were at a project level rather than by generating infrastructure or transforming disciplines to adopt e-Science as a normal method of work, and that they were not self-sustainable without further investment.
=== United States ===
United States-based initiatives, where the term cyberinfrastructure is typically used to define e-Science projects, are primarily funded by the National Science Foundation office of cyberinfrastructure (NSF OCI) and Department of Energy (in particular the Office of Science). After the conclusion of TeraGrid in 2011, the ACCESS program was established and funded by the National Science Foundation to help researchers and educators, with or without supporting grants, to utilize the nation’s advanced computing systems and services.
=== The Netherlands ===
Dutch eScience research is coordinated by the Netherlands eScience Center in Amsterdam, an initiative founded by NWO and SURF.
=== Europe ===
Plan-Europe is a Platform of National e-Science/Data Research Centers in Europe, as established during the constituting meeting 29–30 October 2014 in Amsterdam, the Netherlands, and which is based on agreed Terms of Reference. PLAN-E has a kernel group of active members and convenes twice annually. More can be found on PLAN-E.
=== Sweden ===
Two academic research projects have been carried out in Sweden by two different groups of universities, to help researches share and access scientific computing resources and knowledge:
Swedish e-Science Research Center (SeRC): Kungliga Tekniska högskolan (KTH), Stockholm University (SU), Karolinska institutet (KI) and Linköping University (LiU)
eSSENCE, The e-Science Collaboration (eSSENCE): Uppsala University, Lund University and Umeå University
== Comparison with traditional science ==
Traditional science is representative of two distinct philosophical traditions within the history of science, but e-Science, it is being argued, requires a paradigm shift, and the addition of a third branch of the sciences. "The idea of open data is not a new one; indeed, when studying the history and philosophy of science, Robert Boyle is credited with stressing the concepts of skepticism, transparency, and reproducibility for independent verification in scholarly publishing in the 1660s. The scientific method later was divided into two major branches, deductive and empirical approaches. Today, a theoretical revision in the scientific method should include a new branch, Victoria Stodden advocate[s], that of the computational approach, where like the other two methods, all of the computational steps by which scientists draw conclusions are revealed. This is because within the last 20 years, people have been grappling with how to handle changes in high performance computing and simulation." As such, e-science aims at combining both empirical and theoretical traditions, while computer simulations can create artificial data, and real-time big data can be used to calibrate theoretical simulation models. Conceptually, e-Science revolves around developing new methods to support scientists in conducting scientific research with the aim of making new scientific discoveries by analyzing vast amounts of data accessible over the internet using vast amounts of computational resources. However, discoveries of value cannot be made simply by providing computational tools, a cyberinfrastructure or by performing a pre-defined set of steps to produce a result. Rather, there needs to be an original, creative aspect to the activity that by its nature cannot be automated. This has led to various research that attempts to define the properties that e-Science platforms should provide in order to support a new paradigm of doing science, and new rules to fulfill the requirements of preserving and making computational data results available in a manner such that they are reproducible in traceable, logical steps, as an intrinsic requirement for the maintenance of modern scientific integrity that allows an extenuation of "Boyle's tradition in the computational age".
=== Modelling e-Science processes ===
One view argues that since a modern discovery process instance serves a similar purpose to a mathematical proof it should have similar properties, namely it allows results to be deterministically reproduced when re-executed and that intermediate results can be viewed to aid examination and comprehension. In this case, simply modelling the provenance of data is not sufficient. One has to model the provenance of the hypotheses and results generated from analyzing the data as well so as to provide evidence that support new discoveries. Scientific workflows have thus been proposed and developed to assist scientists to track the evolution of their data, intermediate results and final results as a means to document and track the evolution of discoveries within a piece of scientific research.
=== Science 2.0 ===
Other views include Science 2.0 where e-Science is considered to be a shift from the publication of final results by well-defined collaborative groups towards a more open approach, which includes the public sharing of raw data, preliminary experimental results, and related information. To facilitate this shift, the Science 2.0 view is on providing tools that simplify communication, cooperation and collaboration between interested parties. Such an approach has the potential to: speed up the process of scientific discovery; overcome problems associated with academic publishing and peer review; and remove time and cost barriers, limiting the process of generating new knowledge.
== See also ==
Citizen science
Cyberinfrastructure
Distributed computing
E-research
e-Science librarianship
e-Social Science
Grid computing
List of e-Science infrastructures
Science 2.0
Scientific workflow system
== References ==
== External links ==
DOE and NSF Open Science Grid
The eScience Institute at the University of Washington
The Dutch Virtual Laboratory for e-science (VL-e) project
UK Research Council's e-Science program
e-science : personnalisation des résultats de recherches Google et sociologies du web
UK National Centre for e-Social Science and their Wiki on e-Social Science
NSF TeraGrid Project
Arts and Humanities E-Science Support Centre (AHESSC)
E-Science and Data Services Collaborative (EDSC)
The European Commission's e-Infrastructures activity
Swedish e-Science Research Centre
eSSENCE the e-Science Collaboration | Wikipedia/E-Science |
Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory) is a simplification of the linear theory of elasticity which provides a means of calculating the load-carrying and deflection characteristics of beams. It covers the case corresponding to small deflections of a beam that is subjected to lateral loads only. By ignoring the effects of shear deformation and rotatory inertia, it is thus a special case of Timoshenko–Ehrenfest beam theory. It was first enunciated circa 1750, but was not applied on a large scale until the development of the Eiffel Tower and the Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the Second Industrial Revolution.
Additional mathematical models have been developed, such as plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially structural and mechanical engineering.
== History ==
Prevailing consensus is that Galileo Galilei made the first attempts at developing a theory of beams, but recent studies argue that Leonardo da Vinci was the first to make the crucial observations. Da Vinci lacked Hooke's law and calculus to complete the theory, whereas Galileo was held back by an incorrect assumption he made.
The Bernoulli beam is named after Jacob Bernoulli, who made the significant discoveries. Leonhard Euler and Daniel Bernoulli were the first to put together a useful theory circa 1750.
== Static beam equation ==
The Euler–Bernoulli equation describes the relationship between the beam's deflection and the applied load:

$$\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\left(EI\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}\right)=q$$

The curve $w(x)$ describes the deflection of the beam in the $z$ direction at some position $x$ (recall that the beam is modeled as a one-dimensional object). $q$ is a distributed load, in other words a force per unit length (analogous to pressure being a force per area); it may be a function of $x$, $w$, or other variables. $E$ is the elastic modulus and $I$ is the second moment of area of the beam's cross section. $I$ must be calculated with respect to the axis which is perpendicular to the applied loading. Explicitly, for a beam whose axis is oriented along $x$ with a loading along $z$, the beam's cross section is in the $yz$ plane, and the relevant second moment of area is

$$I=\iint z^{2}\;\mathrm{d}y\;\mathrm{d}z,$$

It can be shown from equilibrium considerations that the centroid of the cross section must be at $y=z=0$.
Often, the product $EI$ (known as the flexural rigidity) is a constant, so that

$$EI\frac{\mathrm{d}^{4}w}{\mathrm{d}x^{4}}=q(x).$$
This equation, describing the deflection of a uniform, static beam, is used widely in engineering practice. Tabulated expressions for the deflection $w$ for common beam configurations can be found in engineering handbooks. For more complicated situations, the deflection can be determined by solving the Euler–Bernoulli equation using techniques such as "direct integration", "Macaulay's method", "moment area method", "conjugate beam method", "the principle of virtual work", "Castigliano's method", "flexibility method", "slope deflection method", "moment distribution method", or "direct stiffness method".
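As a concrete illustration of the constant-$EI$ case, the following Python sketch (not part of the original article; the beam data are assumed, purely illustrative values) integrates $EI\,w''''=q$ for a cantilever under a uniform load with a boundary-value solver and compares the tip deflection with the handbook result $qL^{4}/(8EI)$.

```python
# A minimal sketch: numerically solving the uniform static beam equation
# EI w'''' = q for a cantilever under a uniform load, then comparing the tip
# deflection with the handbook result q L^4 / (8 E I).
# The specific numbers (L, E, I, q) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_bvp

L = 2.0        # beam length (m)
E = 200e9      # Young's modulus (Pa), typical of steel
I = 8e-6       # second moment of area (m^4)
q = 1000.0     # uniform distributed load (N/m), acting in +z

def rhs(x, y):
    # State vector y = [w, w', w'', w''']; the beam equation gives w'''' = q / (EI).
    return np.vstack([y[1], y[2], y[3], np.full_like(x, q / (E * I))])

def bc(ya, yb):
    # Cantilever: w(0) = w'(0) = 0 (fixed end), w''(L) = w'''(L) = 0 (free end).
    return np.array([ya[0], ya[1], yb[2], yb[3]])

x = np.linspace(0.0, L, 50)
sol = solve_bvp(rhs, bc, x, np.zeros((4, x.size)))

tip_numeric = sol.sol(L)[0]
tip_handbook = q * L**4 / (8 * E * I)
print(f"tip deflection (numeric):  {tip_numeric:.6e} m")
print(f"tip deflection (handbook): {tip_handbook:.6e} m")
```

The same setup extends to other support conditions simply by changing the residuals returned by `bc`.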
Sign conventions are defined here since different conventions can be found in the literature. In this article, a right-handed coordinate system is used with the $x$ axis to the right, the $z$ axis pointing upwards, and the $y$ axis pointing into the figure. The sign of the bending moment $M$ is taken as positive when the torque vector associated with the bending moment on the right hand side of the section is in the positive $y$ direction, that is, a positive value of $M$ produces compressive stress at the bottom surface. With this choice of bending moment sign convention, in order to have $\mathrm{d}M=Q\,\mathrm{d}x$, it is necessary that the shear force $Q$ acting on the right side of the section be positive in the $z$ direction so as to achieve static equilibrium of moments. If the loading intensity $q$ is taken positive in the positive $z$ direction, then $\mathrm{d}Q=-q\,\mathrm{d}x$ is necessary for force equilibrium.
Successive derivatives of the deflection $w$ have important physical meanings: $\mathrm{d}w/\mathrm{d}x$ is the slope of the beam, which is the anti-clockwise angle of rotation about the $y$-axis in the limit of small displacements;

$$M=-EI\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}$$

is the bending moment in the beam; and

$$Q=-\frac{\mathrm{d}}{\mathrm{d}x}\left(EI\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}\right)$$

is the shear force in the beam.
The stresses in a beam can be calculated from the above expressions after the deflection due to a given load has been determined.
=== Derivation of the bending equation ===
Because of the fundamental importance of the bending moment equation in engineering, we will provide a short derivation. We change to polar coordinates. The length of the neutral axis in the figure is $\rho\,\mathrm{d}\theta$. The length of a fiber with a radial distance $z$ below the neutral axis is $(\rho+z)\,\mathrm{d}\theta$. Therefore, the strain of this fiber is

$$\frac{\left(\rho+z-\rho\right)\mathrm{d}\theta}{\rho\,\mathrm{d}\theta}=\frac{z}{\rho}.$$

The stress of this fiber is $E\dfrac{z}{\rho}$, where $E$ is the elastic modulus in accordance with Hooke's law. The differential force vector, $\mathrm{d}\mathbf{F}$, resulting from this stress, is given by

$$\mathrm{d}\mathbf{F}=E\frac{z}{\rho}\,\mathrm{d}A\,\mathbf{e_{x}}.$$

This is the differential force vector exerted on the right hand side of the section shown in the figure. We know that it is in the $\mathbf{e_{x}}$ direction since the figure clearly shows that the fibers in the lower half are in tension. $\mathrm{d}A$ is the differential element of area at the location of the fiber. The differential bending moment vector, $\mathrm{d}\mathbf{M}$, associated with $\mathrm{d}\mathbf{F}$ is given by

$$\mathrm{d}\mathbf{M}=-z\,\mathbf{e_{z}}\times \mathrm{d}\mathbf{F}=-\mathbf{e_{y}}\,E\frac{z^{2}}{\rho}\,\mathrm{d}A.$$

This expression is valid for the fibers in the lower half of the beam. The expression for the fibers in the upper half of the beam will be similar except that the moment arm vector will be in the positive $z$ direction and the force vector will be in the $-x$ direction since the upper fibers are in compression. But the resulting bending moment vector will still be in the $-y$ direction since $\mathbf{e_{z}}\times -\mathbf{e_{x}}=-\mathbf{e_{y}}$. Therefore, we integrate over the entire cross section of the beam and get for $\mathbf{M}$, the bending moment vector exerted on the right cross section of the beam, the expression

$$\mathbf{M}=\int \mathrm{d}\mathbf{M}=-\mathbf{e_{y}}\frac{E}{\rho}\int z^{2}\,\mathrm{d}A=-\mathbf{e_{y}}\frac{EI}{\rho},$$

where $I$ is the second moment of area. From calculus, we know that when $\dfrac{\mathrm{d}w}{\mathrm{d}x}$ is small, as it is for an Euler–Bernoulli beam, we can make the approximation $\dfrac{1}{\rho}\simeq \dfrac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}$, where $\rho$ is the radius of curvature. Therefore,

$$\mathbf{M}=-\mathbf{e_{y}}\,EI\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}.$$

This vector equation can be separated in the bending unit vector definition ($\mathbf{M}$ is oriented as $\mathbf{e_{y}}$), and in the bending equation:

$$M=-EI\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}.$$
== Dynamic beam equation ==
The dynamic beam equation is the Euler–Lagrange equation for the following action

$$S=\int_{t_{1}}^{t_{2}}\int_{0}^{L}\left[\frac{1}{2}\mu\left(\frac{\partial w}{\partial t}\right)^{2}-\frac{1}{2}EI\left(\frac{\partial^{2}w}{\partial x^{2}}\right)^{2}+q(x)\,w(x,t)\right]\mathrm{d}x\,\mathrm{d}t.$$

The first term represents the kinetic energy, where $\mu$ is the mass per unit length; the second term represents the potential energy due to internal forces (when considered with a negative sign); and the third term represents the potential energy due to the external load $q(x)$. The Euler–Lagrange equation is used to determine the function that minimizes the functional $S$. For a dynamic Euler–Bernoulli beam, the Euler–Lagrange equation is

$$\frac{\partial^{2}}{\partial x^{2}}\left(EI\frac{\partial^{2}w}{\partial x^{2}}\right)=-\mu\frac{\partial^{2}w}{\partial t^{2}}+q.$$

When the beam is homogeneous, $E$ and $I$ are independent of $x$, and the beam equation is simpler:

$$EI\frac{\partial^{4}w}{\partial x^{4}}=-\mu\frac{\partial^{2}w}{\partial t^{2}}+q.$$
=== Free vibration ===
In the absence of a transverse load, $q$, we have the free vibration equation. This equation can be solved using a Fourier decomposition of the displacement into the sum of harmonic vibrations of the form

$$w(x,t)=\operatorname{Re}\!\left[\hat{w}(x)\,e^{-i\omega t}\right]$$

where $\omega$ is the frequency of vibration. Then, for each value of frequency, we can solve an ordinary differential equation

$$EI\,\frac{\mathrm{d}^{4}\hat{w}}{\mathrm{d}x^{4}}-\mu\,\omega^{2}\hat{w}=0.$$
The general solution of the above equation is

$$\hat{w}=A_{1}\cosh(\beta x)+A_{2}\sinh(\beta x)+A_{3}\cos(\beta x)+A_{4}\sin(\beta x)\quad\text{with}\quad \beta:=\left(\frac{\mu\omega^{2}}{EI}\right)^{1/4}$$

where $A_{1},A_{2},A_{3},A_{4}$ are constants. These constants are unique for a given set of boundary conditions. However, the solution for the displacement is not unique and depends on the frequency. These solutions are typically written as

$$\hat{w}_{n}=A_{1}\cosh(\beta_{n}x)+A_{2}\sinh(\beta_{n}x)+A_{3}\cos(\beta_{n}x)+A_{4}\sin(\beta_{n}x)\quad\text{with}\quad \beta_{n}:=\left(\frac{\mu\omega_{n}^{2}}{EI}\right)^{1/4}.$$

The quantities $\omega_{n}$ are called the natural frequencies of the beam. Each of the displacement solutions is called a mode, and the shape of the displacement curve is called a mode shape.
==== Example: Cantilevered beam ====
The boundary conditions for a cantilevered beam of length $L$ (fixed at $x=0$) are

$$\begin{aligned}&\hat{w}_{n}=0~,~~\frac{\mathrm{d}\hat{w}_{n}}{\mathrm{d}x}=0\quad\text{at}~~x=0\\&\frac{\mathrm{d}^{2}\hat{w}_{n}}{\mathrm{d}x^{2}}=0~,~~\frac{\mathrm{d}^{3}\hat{w}_{n}}{\mathrm{d}x^{3}}=0\quad\text{at}~~x=L.\end{aligned}$$
If we apply these conditions, non-trivial solutions are found to exist only if

$$\cosh(\beta_{n}L)\,\cos(\beta_{n}L)+1=0.$$
This nonlinear equation can be solved numerically. The first four roots are $\beta_{1}L=0.596864\pi$, $\beta_{2}L=1.49418\pi$, $\beta_{3}L=2.50025\pi$, and $\beta_{4}L=3.49999\pi$.
The corresponding natural frequencies of vibration are

$$\omega_{1}=\beta_{1}^{2}\sqrt{\frac{EI}{\mu}}=\frac{3.5160}{L^{2}}\sqrt{\frac{EI}{\mu}}~,~~\dots$$
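The following short Python sketch (illustrative only; the beam properties are assumed example values, not from the article) finds the first four roots of $\cosh(\beta_{n}L)\cos(\beta_{n}L)+1=0$ with a bracketing root-finder and converts them to natural frequencies via $\omega_{n}=\beta_{n}^{2}\sqrt{EI/\mu}$.

```python
# A minimal sketch: solving the cantilever frequency equation
# cosh(bL) cos(bL) + 1 = 0 numerically and converting the roots to natural
# frequencies. The beam data (L, E, I, mu) are assumptions for demonstration.
import numpy as np
from scipy.optimize import brentq

def f(x):
    # x stands for beta_n * L
    return np.cosh(x) * np.cos(x) + 1.0

# Brackets known to contain the first four roots of f
brackets = [(1, 2), (4, 5), (7, 8), (10, 11)]
roots = [brentq(f, a, b) for a, b in brackets]
print("beta_n * L / pi:", [r / np.pi for r in roots])  # ~0.5969, 1.4942, 2.5003, 3.5000

# Natural frequencies omega_n = beta_n^2 sqrt(EI/mu) for an example beam
L, E, I, mu = 1.0, 200e9, 1e-8, 2.0   # m, Pa, m^4, kg/m (illustrative values)
omega = [(r / L) ** 2 * np.sqrt(E * I / mu) for r in roots]
print("f_n (Hz):", [w / (2 * np.pi) for w in omega])
```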
The boundary conditions can also be used to determine the mode shapes from the solution for the displacement:

$$\hat{w}_{n}=A_{1}\left[(\cosh\beta_{n}x-\cos\beta_{n}x)+\frac{\cos\beta_{n}L+\cosh\beta_{n}L}{\sin\beta_{n}L+\sinh\beta_{n}L}(\sin\beta_{n}x-\sinh\beta_{n}x)\right]$$
The unknown constant (actually constants, as there is one for each $n$), $A_{1}$, which in general is complex, is determined by the initial conditions at $t=0$ on the velocity and displacements of the beam. Typically a value of $A_{1}=1$ is used when plotting mode shapes. Solutions to the undamped forced problem have unbounded displacements when the driving frequency matches a natural frequency $\omega_{n}$, i.e., the beam can resonate. The natural frequencies of a beam therefore correspond to the frequencies at which resonance can occur.
==== Example: free–free (unsupported) beam ====
A free–free beam is a beam without any supports. The boundary conditions for a free–free beam of length $L$ extending from $x=0$ to $x=L$ are given by:

$$\frac{\mathrm{d}^{2}\hat{w}_{n}}{\mathrm{d}x^{2}}=0~,~~\frac{\mathrm{d}^{3}\hat{w}_{n}}{\mathrm{d}x^{3}}=0\quad\text{at}~~x=0~\text{and}~x=L.$$
If we apply these conditions, non-trivial solutions are found to exist only if

$$\cosh(\beta_{n}L)\,\cos(\beta_{n}L)-1=0.$$
This nonlinear equation can be solved numerically. The first four roots are $\beta_{1}L=1.50562\pi$, $\beta_{2}L=2.49975\pi$, $\beta_{3}L=3.50001\pi$, and $\beta_{4}L=4.50000\pi$.
The corresponding natural frequencies of vibration are:

$$\omega_{1}=\beta_{1}^{2}\sqrt{\frac{EI}{\mu}}=\frac{22.3733}{L^{2}}\sqrt{\frac{EI}{\mu}}~,~~\dots$$
The boundary conditions can also be used to determine the mode shapes from the solution for the displacement:

$$\hat{w}_{n}=A_{1}\Bigl[(\cos\beta_{n}x+\cosh\beta_{n}x)-\frac{\cos\beta_{n}L-\cosh\beta_{n}L}{\sin\beta_{n}L-\sinh\beta_{n}L}(\sin\beta_{n}x+\sinh\beta_{n}x)\Bigr]$$
As with the cantilevered beam, the unknown constants are determined by the initial conditions at $t=0$ on the velocity and displacements of the beam. Also, solutions to the undamped forced problem have unbounded displacements when the driving frequency matches a natural frequency $\omega_{n}$.
==== Example: hinged-hinged beam ====
The boundary conditions of a hinged-hinged beam of length $L$ (fixed at $x=0$ and $x=L$) are

$$\hat{w}_{n}=0~,~~\frac{\mathrm{d}^{2}\hat{w}_{n}}{\mathrm{d}x^{2}}=0\quad\text{at}~~x=0~\text{and}~x=L.$$
This implies that solutions exist for

$$\sin(\beta_{n}L)\,\sinh(\beta_{n}L)=0.$$

Setting $\beta_{n}=n\pi/L$ enforces this condition. Rearranging for the natural frequency gives

$$\omega_{n}=\frac{n^{2}\pi^{2}}{L^{2}}\sqrt{\frac{EI}{\mu}}$$
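As a quick numerical illustration (the beam data below are assumptions chosen purely for the example), the hinged-hinged formula gives frequencies that grow as $n^{2}$:

```python
# A small sketch (illustrative values assumed): first three natural frequencies
# of a hinged-hinged (simply supported) beam from omega_n = (n pi / L)^2 sqrt(EI/mu).
import numpy as np

L, E, I, mu = 3.0, 200e9, 5e-6, 15.0   # m, Pa, m^4, kg/m -- assumed example data
for n in (1, 2, 3):
    omega_n = (n * np.pi / L) ** 2 * np.sqrt(E * I / mu)
    print(f"n={n}: f = {omega_n / (2 * np.pi):.1f} Hz")   # frequencies scale as n^2
```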
== Stress ==
Besides deflection, the beam equation describes forces and moments and can thus be used to describe stresses. For this reason, the Euler–Bernoulli beam equation is widely used in engineering, especially civil and mechanical, to determine the strength (as well as deflection) of beams under bending.
Both the bending moment and the shear force cause stresses in the beam. The stress due to shear force is maximum along the neutral axis of the beam (when the width of the beam, t, is constant along the cross section of the beam; otherwise an integral involving the first moment and the beam's width needs to be evaluated for the particular cross section), and the maximum tensile stress is at either the top or bottom surface. Thus the maximum principal stress in the beam may be neither at the surface nor at the center but in some general area. However, shear force stresses are negligible in comparison to bending moment stresses in all but the stockiest of beams; moreover, stress concentrations commonly occur at surfaces, so the maximum stress in a beam is likely to be at the surface.
=== Simple or symmetrical bending ===
For beam cross-sections that are symmetrical about a plane perpendicular to the neutral plane, it can be shown that the tensile stress experienced by the beam may be expressed as:

$$\sigma=\frac{Mz}{I}=-zE\,\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}.$$
Here, $z$ is the distance from the neutral axis to a point of interest; and $M$ is the bending moment. Note that this equation implies that pure bending (of positive sign) will cause zero stress at the neutral axis, positive (tensile) stress at the "top" of the beam, and negative (compressive) stress at the bottom of the beam; and also implies that the maximum stress will be at the top surface and the minimum at the bottom. This bending stress may be superimposed with axially applied stresses, which will cause a shift in the neutral (zero stress) axis.
=== Maximum stresses at a cross-section ===
The maximum tensile stress at a cross-section is at the location $z=c_{1}$ and the maximum compressive stress is at the location $z=-c_{2}$, where the height of the cross-section is $h=c_{1}+c_{2}$. These stresses are

$$\sigma_{1}=\frac{Mc_{1}}{I}=\frac{M}{S_{1}}~;~~\sigma_{2}=-\frac{Mc_{2}}{I}=-\frac{M}{S_{2}}$$
The quantities $S_{1},S_{2}$ are the section moduli and are defined as

$$S_{1}=\frac{I}{c_{1}}~;~~S_{2}=\frac{I}{c_{2}}$$

The section modulus combines all the important geometric information about a beam's section into one quantity. For the case where a beam is doubly symmetric, $c_{1}=c_{2}$ and we have one section modulus $S=I/c$.
=== Strain in an Euler–Bernoulli beam ===
We need an expression for the strain in terms of the deflection of the neutral surface to relate the stresses in an Euler–Bernoulli beam to the deflection. To obtain that expression we use the assumption that normals to the neutral surface remain normal during the deformation and that deflections are small. These assumptions imply that the beam bends into an arc of a circle of radius $\rho$ (see Figure 1) and that the neutral surface does not change in length during the deformation.
Let $\mathrm{d}x$ be the length of an element of the neutral surface in the undeformed state. For small deflections, the element does not change its length after bending but deforms into an arc of a circle of radius $\rho$. If $\mathrm{d}\theta$ is the angle subtended by this arc, then $\mathrm{d}x=\rho\,\mathrm{d}\theta$.
Let us now consider another segment of the element at a distance $z$ above the neutral surface. The initial length of this element is $\mathrm{d}x$. However, after bending, the length of the element becomes $\mathrm{d}x'=(\rho-z)\,\mathrm{d}\theta=\mathrm{d}x-z\,\mathrm{d}\theta$. The strain in that segment of the beam is given by

$$\varepsilon_{x}=\frac{\mathrm{d}x'-\mathrm{d}x}{\mathrm{d}x}=-\frac{z}{\rho}=-\kappa z$$

where $\kappa$ is the curvature of the beam. This gives us the axial strain in the beam as a function of distance from the neutral surface. However, we still need to find a relation between the radius of curvature and the beam deflection $w$.
=== Relation between curvature and beam deflection ===
Let P be a point on the neutral surface of the beam at a distance $x$ from the origin of the $(x,z)$ coordinate system. The slope of the beam is approximately equal to the angle made by the neutral surface with the $x$-axis for the small angles encountered in beam theory. Therefore, with this approximation,

$$\theta(x)=\frac{\mathrm{d}w}{\mathrm{d}x}$$

Therefore, for an infinitesimal element $\mathrm{d}x$, the relation $\mathrm{d}x=\rho\,\mathrm{d}\theta$ can be written as

$$\frac{1}{\rho}=\frac{\mathrm{d}\theta}{\mathrm{d}x}=\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}=\kappa$$

Hence the strain in the beam may be expressed as

$$\varepsilon_{x}=-z\kappa$$
=== Stress-strain relations ===
For a homogeneous isotropic linear elastic material, the stress is related to the strain by $\sigma=E\varepsilon$, where $E$ is the Young's modulus. Hence the stress in an Euler–Bernoulli beam is given by

$$\sigma_{x}=-zE\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}$$

Note that the above relation, when compared with the relation between the axial stress and the bending moment, leads to

$$M=-EI\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}$$

Since the shear force is given by $Q=\mathrm{d}M/\mathrm{d}x$, we also have

$$Q=-EI\frac{\mathrm{d}^{3}w}{\mathrm{d}x^{3}}$$
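A short symbolic sketch of these relations (not from the article; the deflection curve below is the textbook shape of a uniformly loaded cantilever, used here only as an assumed example):

```python
# A short symbolic sketch: given an assumed deflection curve w(x) -- here the
# textbook shape of a cantilever carrying a uniform load q -- recover the
# bending moment M = -EI w'' and shear force Q = -EI w''' by differentiation.
import sympy as sp

x, E, I, q, L = sp.symbols("x E I q L", positive=True)
w = q / (24 * E * I) * (x**4 - 4 * L * x**3 + 6 * L**2 * x**2)   # assumed deflection

M = sp.simplify(-E * I * sp.diff(w, x, 2))   # bending moment
Q = sp.simplify(-E * I * sp.diff(w, x, 3))   # shear force
print("M(x) =", sp.factor(M))   # -> -q*(L - x)**2/2, largest in magnitude at the fixed end
print("Q(x) =", sp.factor(Q))   # -> q*(L - x), the load remaining to the right of x
```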
== Boundary considerations ==
The beam equation contains a fourth-order derivative in $x$. To find a unique solution $w(x,t)$ we need four boundary conditions. The boundary conditions usually model supports, but they can also model point loads, distributed loads and moments. The support or displacement boundary conditions are used to fix values of displacement ($w$) and rotations ($\mathrm{d}w/\mathrm{d}x$) on the boundary. Such boundary conditions are also called Dirichlet boundary conditions. Load and moment boundary conditions involve higher derivatives of $w$ and represent momentum flux. Flux boundary conditions are also called Neumann boundary conditions.
As an example consider a cantilever beam that is built-in at one end and free at the other as shown in the adjacent figure. At the built-in end of the beam there cannot be any displacement or rotation of the beam. This means that at the left end both deflection and slope are zero. Since no external bending moment is applied at the free end of the beam, the bending moment at that location is zero. In addition, if there is no external force applied to the beam, the shear force at the free end is also zero.
Taking the $x$ coordinate of the left end as $0$ and the right end as $L$ (the length of the beam), these statements translate to the following set of boundary conditions (assume $EI$ is a constant):

$$w|_{x=0}=0\quad;\quad \frac{\partial w}{\partial x}\bigg|_{x=0}=0\qquad\text{(fixed end)}$$

$$\frac{\partial^{2}w}{\partial x^{2}}\bigg|_{x=L}=0\quad;\quad \frac{\partial^{3}w}{\partial x^{3}}\bigg|_{x=L}=0\qquad\text{(free end)}$$
A simple support (pin or roller) is equivalent to a point force on the beam which is adjusted in such a way as to fix the position of the beam at that point. A fixed support or clamp is equivalent to the combination of a point force and a point torque which is adjusted in such a way as to fix both the position and slope of the beam at that point. Point forces and torques, whether from supports or directly applied, will divide a beam into a set of segments, between which the beam equation will yield a continuous solution, given four boundary conditions, two at each end of the segment. Assuming that the product $EI$ is a constant, and defining $\lambda=F/EI$, where $F$ is the magnitude of a point force, and $\tau=M/EI$, where $M$ is the magnitude of a point torque, the boundary conditions appropriate for some common cases are given in the table below. The change in a particular derivative of $w$ across the boundary as $x$ increases is denoted by $\Delta$ followed by that derivative. For example, $\Delta w''=w''(x+)-w''(x-)$, where $w''(x+)$ is the value of $w''$ at the lower boundary of the upper segment, while $w''(x-)$ is the value of $w''$ at the upper boundary of the lower segment. When the values of the particular derivative are not only continuous across the boundary, but fixed as well, the boundary condition is written e.g., $\Delta w''=0^{*}$, which actually constitutes two separate equations (e.g., $w''(x-)=w''(x+)$ = fixed).
Note that in the first cases, in which the point forces and torques are located between two segments, there are four boundary conditions, two for the lower segment, and two for the upper. When forces and torques are applied to one end of the beam, there are two boundary conditions given which apply at that end. The sign of the point forces and torques at an end will be positive for the lower end, negative for the upper end.
== Loading considerations ==
Applied loads may be represented either through boundary conditions or through the function $q(x,t)$ which represents an external distributed load. Using distributed loading is often favorable for simplicity. Boundary conditions are, however, often used to model loads depending on context; this practice being especially common in vibration analysis.
By nature, the distributed load is very often represented in a piecewise manner, since in practice a load isn't typically a continuous function. Point loads can be modeled with help of the Dirac delta function. For example, consider a static uniform cantilever beam of length $L$ with an upward point load $F$ applied at the free end. Using boundary conditions, this may be modeled in two ways. In the first approach, the applied point load is approximated by a shear force applied at the free end. In that case the governing equation and boundary conditions are:

$$\begin{aligned}&EI\frac{\mathrm{d}^{4}w}{\mathrm{d}x^{4}}=0\\&w|_{x=0}=0\quad;\quad \frac{\mathrm{d}w}{\mathrm{d}x}\bigg|_{x=0}=0\quad;\quad \frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}\bigg|_{x=L}=0\quad;\quad -EI\frac{\mathrm{d}^{3}w}{\mathrm{d}x^{3}}\bigg|_{x=L}=F\end{aligned}$$
Alternatively we can represent the point load as a distribution using the Dirac function. In that case the equation and boundary conditions are

$$\begin{aligned}&EI\frac{\mathrm{d}^{4}w}{\mathrm{d}x^{4}}=F\delta(x-L)\\&w|_{x=0}=0\quad;\quad \frac{\mathrm{d}w}{\mathrm{d}x}\bigg|_{x=0}=0\quad;\quad \frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}\bigg|_{x=L}=0\end{aligned}$$
Note that the shear force boundary condition (third derivative) is removed, otherwise there would be a contradiction. These are equivalent boundary value problems, and both yield the solution

$$w=\frac{F}{6EI}(3Lx^{2}-x^{3})~.$$
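A brief symbolic check (a sketch, not part of the article) that this solution satisfies the governing equation and all four boundary conditions of the end-loaded cantilever, and that the tip deflection is $FL^{3}/(3EI)$:

```python
# A quick symbolic check that w = F (3 L x^2 - x^3) / (6 E I) satisfies the
# cantilever end-load problem, plus evaluation of the tip deflection.
import sympy as sp

x, E, I, F, L = sp.symbols("x E I F L", positive=True)
w = F / (6 * E * I) * (3 * L * x**2 - x**3)

checks = {
    "EI w'''' = 0":     sp.simplify(E * I * sp.diff(w, x, 4)),
    "w(0) = 0":         w.subs(x, 0),
    "w'(0) = 0":        sp.diff(w, x).subs(x, 0),
    "w''(L) = 0":       sp.simplify(sp.diff(w, x, 2).subs(x, L)),
    "-EI w'''(L) = F":  sp.simplify(-E * I * sp.diff(w, x, 3).subs(x, L) - F),
}
print(checks)                                          # every entry should be 0
print("tip deflection:", sp.simplify(w.subs(x, L)))    # -> F*L**3/(3*E*I)
```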
The application of several point loads at different locations will lead to $w(x)$ being a piecewise function. Use of the Dirac function greatly simplifies such situations; otherwise the beam would have to be divided into sections, each with four boundary conditions solved separately. A well organized family of functions called singularity functions is often used as a shorthand for the Dirac function, its derivative, and its antiderivatives.
Dynamic phenomena can also be modeled using the static beam equation by choosing appropriate forms of the load distribution. As an example, the free vibration of a beam can be accounted for by using the load function:

$$q(x,t)=\mu\frac{\partial^{2}w}{\partial t^{2}}$$
where $\mu$ is the linear mass density of the beam, not necessarily a constant. With this time-dependent loading, the beam equation will be a partial differential equation:

$$\frac{\partial^{2}}{\partial x^{2}}\left(EI\frac{\partial^{2}w}{\partial x^{2}}\right)=-\mu\frac{\partial^{2}w}{\partial t^{2}}.$$
Another interesting example describes the deflection of a beam rotating with a constant angular frequency of $\omega$:

$$q(x)=\mu\omega^{2}w(x)$$

This is a centripetal force distribution. Note that in this case, $q$ is a function of the displacement (the dependent variable), and the beam equation will be an autonomous ordinary differential equation.
== Examples ==
=== Three-point bending ===
The three-point bending test is a classical experiment in mechanics. It represents the case of a beam resting on two roller supports and subjected to a concentrated load applied in the middle of the beam. The shear is constant in absolute value: it is half the central load, P / 2, and it changes sign in the middle of the beam. The bending moment varies linearly from one end, where it is 0, to the center, where its absolute value is PL / 4; this is where the risk of rupture is greatest.
The deformation of the beam is described by a polynomial of third degree over a half beam (the other half being symmetrical).
The bending moments ($M$), shear forces ($Q$), and deflections ($w$) for a beam subjected to a central point load and an asymmetric point load are given in the table below.
=== Cantilever beams ===
Another important class of problems involves cantilever beams. The bending moments ($M$), shear forces ($Q$), and deflections ($w$) for a cantilever beam subjected to a point load at the free end and a uniformly distributed load are given in the table below.
Solutions for several other commonly encountered configurations are readily available in textbooks on mechanics of materials and engineering handbooks.
=== Statically indeterminate beams ===
The bending moments and shear forces in Euler–Bernoulli beams can often be determined directly using static balance of forces and moments. However, for certain boundary conditions, the number of reactions can exceed the number of independent equilibrium equations. Such beams are called statically indeterminate.
The built-in beams shown in the figure below are statically indeterminate. To determine the stresses and deflections of such beams, the most direct method is to solve the Euler–Bernoulli beam equation with appropriate boundary conditions. But direct analytical solutions of the beam equation are possible only for the simplest cases. Therefore, additional techniques such as linear superposition are often used to solve statically indeterminate beam problems.
The superposition method involves adding the solutions of a number of statically determinate problems which are chosen such that the boundary conditions for the sum of the individual problems add up to those of the original problem.
Another commonly encountered statically indeterminate beam problem is the cantilevered beam with the free end supported on a roller. The bending moments, shear forces, and deflections of such a beam are listed below:
== Extensions ==
The kinematic assumptions upon which the Euler–Bernoulli beam theory is founded allow it to be extended to more advanced analysis. Simple superposition allows for three-dimensional transverse loading. Using alternative constitutive equations can allow for viscoelastic or plastic beam deformation. Euler–Bernoulli beam theory can also be extended to the analysis of curved beams, beam buckling, composite beams, and geometrically nonlinear beam deflection.
Euler–Bernoulli beam theory does not account for the effects of transverse shear strain. As a result, it underpredicts deflections and overpredicts natural frequencies. For thin beams (beam length to thickness ratios of the order 20 or more) these effects are of minor importance. For thick beams, however, these effects can be significant. More advanced beam theories such as the Timoshenko beam theory (developed by the Russian-born scientist Stephen Timoshenko) have been developed to account for these effects.
=== Large deflections ===
The original Euler–Bernoulli theory is valid only for infinitesimal strains and small rotations. The theory can be extended in a straightforward manner to problems involving moderately large rotations provided that the strain remains small by using the von Kármán strains.
The Euler–Bernoulli hypotheses that plane sections remain plane and normal to the axis of the beam lead to displacements of the form
$$u_{1}=u_{0}(x)-z\frac{\mathrm{d}w_{0}}{\mathrm{d}x}~;~~u_{2}=0~;~~u_{3}=w_{0}(x)$$
Using the definition of the Lagrangian Green strain from finite strain theory, we can find the von Kármán strains for the beam that are valid for large rotations but small strains by discarding all the higher-order terms (which contain more than two fields) except

$$\frac{\partial w}{\partial x^{i}}\frac{\partial w}{\partial x^{j}}.$$
The resulting strains take the form:

$$\begin{aligned}\varepsilon_{11}&=\frac{\mathrm{d}u_{0}}{\mathrm{d}x}-z\frac{\mathrm{d}^{2}w_{0}}{\mathrm{d}x^{2}}+\frac{1}{2}\left[\left(\frac{\mathrm{d}u_{0}}{\mathrm{d}x}-z\frac{\mathrm{d}^{2}w_{0}}{\mathrm{d}x^{2}}\right)^{2}+\left(\frac{\mathrm{d}w_{0}}{\mathrm{d}x}\right)^{2}\right]\approx \frac{\mathrm{d}u_{0}}{\mathrm{d}x}-z\frac{\mathrm{d}^{2}w_{0}}{\mathrm{d}x^{2}}+\frac{1}{2}\left(\frac{\mathrm{d}w_{0}}{\mathrm{d}x}\right)^{2}\\[0.25em]\varepsilon_{22}&=0\\[0.25em]\varepsilon_{33}&=\frac{1}{2}\left(\frac{\mathrm{d}w_{0}}{\mathrm{d}x}\right)^{2}\\[0.25em]\varepsilon_{23}&=0\\[0.25em]\varepsilon_{31}&=-\frac{1}{2}\left[\left(\frac{\mathrm{d}u_{0}}{\mathrm{d}x}-z\frac{\mathrm{d}^{2}w_{0}}{\mathrm{d}x^{2}}\right)\left(\frac{\mathrm{d}w_{0}}{\mathrm{d}x}\right)\right]\approx 0\\[0.25em]\varepsilon_{12}&=0.\end{aligned}$$
From the principle of virtual work, the balance of forces and moments in the beams gives us the equilibrium equations

$$\begin{aligned}\frac{\mathrm{d}N_{xx}}{\mathrm{d}x}+f(x)&=0\\\frac{\mathrm{d}^{2}M_{xx}}{\mathrm{d}x^{2}}+q(x)+\frac{\mathrm{d}}{\mathrm{d}x}\left(N_{xx}\frac{\mathrm{d}w_{0}}{\mathrm{d}x}\right)&=0\end{aligned}$$
where $f(x)$ is the axial load, $q(x)$ is the transverse load, and

$$N_{xx}=\int_{A}\sigma_{xx}~\mathrm{d}A~;~~M_{xx}=\int_{A}z\,\sigma_{xx}~\mathrm{d}A$$
To close the system of equations we need the constitutive equations that relate stresses to strains (and hence stresses to displacements). For large rotations and small strains these relations are

$$\begin{aligned}N_{xx}&=A_{xx}\left[\frac{\mathrm{d}u_{0}}{\mathrm{d}x}+\frac{1}{2}\left(\frac{\mathrm{d}w_{0}}{\mathrm{d}x}\right)^{2}\right]-B_{xx}\frac{\mathrm{d}^{2}w_{0}}{\mathrm{d}x^{2}}\\M_{xx}&=B_{xx}\left[\frac{\mathrm{d}u_{0}}{\mathrm{d}x}+\frac{1}{2}\left(\frac{\mathrm{d}w_{0}}{\mathrm{d}x}\right)^{2}\right]-D_{xx}\frac{\mathrm{d}^{2}w_{0}}{\mathrm{d}x^{2}}\end{aligned}$$
where

$$A_{xx}=\int_{A}E~\mathrm{d}A~;~~B_{xx}=\int_{A}zE~\mathrm{d}A~;~~D_{xx}=\int_{A}z^{2}E~\mathrm{d}A~.$$

The quantity $A_{xx}$ is the extensional stiffness, $B_{xx}$ is the coupled extensional-bending stiffness, and $D_{xx}$ is the bending stiffness.
For the situation where the beam has a uniform cross-section and no axial load, the governing equation for a large-rotation Euler–Bernoulli beam is

$$EI~\frac{\mathrm{d}^{4}w}{\mathrm{d}x^{4}}-\frac{3}{2}~EA~\left(\frac{\mathrm{d}w}{\mathrm{d}x}\right)^{2}\left(\frac{\mathrm{d}^{2}w}{\mathrm{d}x^{2}}\right)=q(x)$$
== See also ==
Applied mechanics
Bending
Bending moment
Buckling
Flexural rigidity
Generalised beam theory
Plate theory
Sandwich theory
Shear and moment diagram
Singularity function
Strain (materials science)
Timoshenko beam theory
Theorem of three moments (Clapeyron's theorem)
Three-point flexural test
== References ==
=== Notes ===
=== Citations ===
=== Further reading ===
== External links ==
Beam stress & deflection, beam deflection tables | Wikipedia/Euler–Bernoulli_beam_equation |
In mechanics, an impact is when two bodies collide. During this collision, both bodies decelerate. The deceleration causes a high force or shock, applied over a short time period. A high force, over a short duration, usually causes more damage to both bodies than a lower force applied over a proportionally longer duration.
At normal speeds, during a perfectly inelastic collision, an object struck by a projectile will deform, and this deformation will absorb most or all of the force of the collision. Viewed from a conservation of energy perspective, the kinetic energy of the projectile is changed into heat and sound energy, as a result of the deformations and vibrations induced in the struck object. However, these deformations and vibrations cannot occur instantaneously. A high-velocity collision (an impact) does not provide sufficient time for these deformations and vibrations to occur. Thus, the struck material behaves as if it were more brittle than it would otherwise be, and the majority of the applied force goes into fracturing the material. Or, another way to look at it is that materials actually are more brittle on short time scales than on long time scales: this is related to time-temperature superposition.
Impact resistance decreases with an increase in the modulus of elasticity, which means that stiffer materials will have less impact resistance. Resilient materials will have better impact resistance.
Different materials can behave in quite different ways in impact when compared with static loading conditions. Ductile materials like steel tend to become more brittle at high loading rates, and spalling may occur on the reverse side to the impact if penetration doesn't occur. The way in which the kinetic energy is distributed through the section is also important in determining its response. Projectiles apply a Hertzian contact stress at the point of impact to a solid body, with compression stresses under the point, but with bending loads a short distance away. Since most materials are weaker in tension than compression, this is the zone where cracks tend to form and grow.
== Applications ==
A nail is pounded with a series of impacts, each by a single hammer blow. These high velocity impacts overcome the static friction between the nail and the substrate. A pile driver achieves the same end, although on a much larger scale, the method being commonly used during civil construction projects to make building and bridge foundations. An impact wrench is a device designed to impart torque impacts to bolts to tighten or loosen them. At normal speeds, the forces applied to the bolt would be dispersed, via friction, to the mating threads. However, at impact speeds, the forces act on the bolt to move it before they can be dispersed. In ballistics, bullets utilize impact forces to puncture surfaces that could otherwise resist substantial forces. A rubber sheet, for example, behaves more like glass at typical bullet speeds. That is, it fractures, and does not stretch or vibrate.
The field of applications of impact theory ranges from the optimization of material processing, impact testing and the dynamics of granular media to medical applications related to the biomechanics of the human body, especially the hip and knee joints. It also has vast applications in the automotive and military industries.
== Impacts causing damage ==
Road traffic accidents usually involve impact loading, such as when a car hits a traffic bollard, water hydrant or tree, the damage being localized to the impact zone. When vehicles collide, the damage increases with the relative velocity of the vehicles, growing as the square of that velocity, since it is the impact kinetic energy (½mv²) which is the variable of importance. Much design effort is made to improve the impact resistance of cars so as to minimize user injury. This can be achieved in several ways: by enclosing the driver and passengers in a safety cell, for example. The cell is reinforced so it will survive in high speed crashes, and so protect the users. Parts of the body shell outside the cell are designed to crumple progressively, absorbing most of the kinetic energy which must be dissipated by the impact.
Various impact tests are used to assess the effects of high loading, both on products and on standard slabs of material. The Charpy test and Izod test are two examples of standardized methods which are used widely for testing materials. Ball or projectile drop tests are used for assessing product impacts.
The Columbia disaster was caused by impact damage when a chunk of polyurethane foam impacted the carbon fibre composite wing of the Space Shuttle. Although tests had been conducted before the disaster, the test chunks were much smaller than the chunk that fell away from the booster rocket and hit the exposed wing.
When fragile items are shipped, impacts and drops can cause product damage. Protective packaging and cushioning help reduce the peak acceleration by extending the duration of the shock or impact.
== See also ==
== References ==
=== Sources === | Wikipedia/Impact_(mechanics) |
In solid mechanics, shearing forces are unaligned forces acting on one part of a body in a specific direction, and another part of the body in the opposite direction. When the forces are collinear (aligned with each other), they are called tension forces or compression forces. Shear force can also be defined in terms of planes: "If a plane is passed through a body, a force acting along this plane is called a shear force or shearing force."
== Force required to shear steel ==
This section calculates the force required to cut a piece of material with a shearing action. The relevant information is the area of the material being sheared, i.e. the area across which the shearing action takes place, and the shear strength of the material. A round bar of steel is used as an example. The shear strength is calculated from the tensile strength using a factor which relates the two strengths. In this case 0.6 applies to the example steel, known as EN8 bright, although it can vary from 0.58 to 0.62 depending on application.
EN8 bright has a tensile strength of 800 MPa and mild steel, for comparison, has a tensile strength of 400 MPa.
To calculate the force to shear a 25 mm diameter bar of EN8 bright steel;
area of the bar in mm² = (12.5²)(π) ≈ 490.8 mm²
0.8 kN/mm² × 490.8 mm² = 392.64 kN ≈ 40 tonne-force
40 tonne-force × 0.6 (to change force from tensile to shear) = 24 tonne-force
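A small script reproducing the arithmetic of this worked example (the 800 MPa tensile strength and the 0.6 tensile-to-shear factor are taken from the text above):

```python
# A small sketch reproducing the worked example above (EN8 bright, 25 mm bar).
import math

diameter_mm = 25.0
tensile_strength_MPa = 800.0          # EN8 bright steel, from the example
shear_factor = 0.6                    # ratio of shear strength to tensile strength

area_mm2 = math.pi * (diameter_mm / 2) ** 2                    # ~490.9 mm^2
tensile_force_kN = tensile_strength_MPa * area_mm2 / 1000.0    # MPa * mm^2 = N
shear_force_kN = shear_factor * tensile_force_kN

print(f"area = {area_mm2:.1f} mm^2")
print(f"force to break in tension = {tensile_force_kN:.1f} kN "
      f"(~{tensile_force_kN / 9.80665:.0f} tonne-force)")
print(f"force to shear = {shear_force_kN:.1f} kN "
      f"(~{shear_force_kN / 9.80665:.0f} tonne-force)")
```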
When working with a riveted or tensioned bolted joint, the strength comes from friction between the materials bolted together. Bolts are correctly torqued to maintain the friction. The shear force only becomes relevant when the bolts are not torqued.
A bolt with property class 12.9 has a tensile strength of 1200 MPa (1 MPa = 1 N/mm²) or 1.2 kN/mm², and the yield strength is 0.90 times the tensile strength, 1080 MPa in this case.
A bolt with property class 4.6 has a tensile strength of 400 MPa (1 MPa = 1 N/mm²) or 0.4 kN/mm², and the yield strength is 0.60 times the tensile strength, 240 MPa in this case.
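A minimal sketch of how these figures follow from the metric property-class designation (assuming the standard convention that the first number times 100 gives the nominal tensile strength in MPa, and the second number divided by 10 gives the yield-to-tensile ratio):

```python
# A small sketch (assumption: standard metric property-class convention, as in
# the two examples above): tensile = first number x 100 MPa,
# yield = tensile x (second number / 10).
def bolt_strengths(property_class: str):
    first, second = property_class.split(".")
    tensile_MPa = int(first) * 100
    yield_MPa = tensile_MPa * int(second) / 10
    return tensile_MPa, yield_MPa

for pc in ("12.9", "4.6"):
    t, y = bolt_strengths(pc)
    print(f"class {pc}: tensile = {t} MPa, yield = {y:.0f} MPa")
# class 12.9: tensile = 1200 MPa, yield = 1080 MPa
# class 4.6:  tensile =  400 MPa, yield =  240 MPa
```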
== See also ==
ASTM F568M, mechanical properties of different grades of steel fasteners
Cantilever method
Résal effect
Newton's laws of motion § Newton's third law
== References == | Wikipedia/Shear_force |
Reinforced concrete, also called ferroconcrete or ferro-concrete, is a composite material in which concrete's relatively low tensile strength and ductility are compensated for by the inclusion of reinforcement having higher tensile strength or ductility. The reinforcement is usually, though not necessarily, steel reinforcing bars (known as rebar) and is usually embedded passively in the concrete before the concrete sets. However, post-tensioning is also employed as a technique to reinforce the concrete. In terms of volume used annually, it is one of the most common engineering materials. In corrosion engineering terms, when designed correctly, the alkalinity of the concrete protects the steel rebar from corrosion.
== Description ==
Reinforcing schemes are generally designed to resist tensile stresses in particular regions of the concrete that might cause unacceptable cracking and/or structural failure. Modern reinforced concrete can contain varied reinforcing materials made of steel, polymers or alternate composite material in conjunction with rebar or not. Reinforced concrete may also be permanently stressed (concrete in compression, reinforcement in tension), so as to improve the behavior of the final structure under working loads. In the United States, the most common methods of doing this are known as pre-tensioning and post-tensioning.
For a strong, ductile and durable construction the reinforcement needs to have the following properties at least:
High relative strength
High toleration of tensile strain
Good bond to the concrete, irrespective of pH, moisture, and similar factors
Thermal compatibility, not causing unacceptable stresses (such as expansion or contraction) in response to changing temperatures.
Durability in the concrete environment, irrespective of corrosion or sustained stress for example.
== History ==
The early development of the reinforced concrete was going on in parallel in England and France, in the middle of the 19th century.
French builder François Coignet was the first to use iron-reinforced concrete as a building technique. In 1853–55, Coignet built for himself the first iron-reinforced concrete structure, a four-story house at 72 rue Charles Michels in the suburbs of Paris known as the François Coignet House. Coignet's descriptions of reinforcing concrete suggest that he did not do it as a means of adding strength to the concrete but to keep walls in monolithic construction from overturning. The 1872–73 Pippen Building in Brooklyn, although not designed by Coignet, stands as a testament to his technique.
In 1854, English builder William B. Wilkinson reinforced the concrete roof and floors in the two-story house he was constructing. His positioning of the reinforcement demonstrated that, unlike his predecessors, he had knowledge of tensile stresses. Between 1869 and 1870, Henry Eton designed, and Messrs W & T Phillips of London constructed, the wrought-iron-reinforced Homersfield Bridge, with a 50-foot (15.25-metre) span, over the river Waveney, between the English counties of Norfolk and Suffolk.
Joseph Monier, a 19th-century French gardener, was a pioneer in the development of structural, prefabricated and reinforced concrete, having been dissatisfied with the existing materials available for making durable flowerpots. He was granted a patent for reinforcing concrete flowerpots by means of mixing a wire mesh and a mortar shell in 1867. In 1877, Monier was granted another patent for a more advanced technique of reinforcing concrete columns and girders, using iron rods placed in a grid pattern. Though Monier undoubtedly knew that reinforcing concrete would improve its inner cohesion, it is not clear whether he even knew how much the tensile strength of concrete was improved by the reinforcing.
In 1877, Thaddeus Hyatt published a report entitled An Account of Some Experiments with Portland-Cement-Concrete Combined with Iron as a Building Material, with Reference to Economy of Metal in Construction and for Security against Fire in the Making of Roofs, Floors, and Walking Surfaces, in which he reported his experiments on the behaviour of reinforced concrete. His work played a major role in the evolution of concrete construction as a proven and studied science. Without Hyatt's work, more dangerous trial and error methods might have been depended on for the advancement in the technology.
Before the 1870s, the use of concrete construction, though dating back to the Roman Empire, and having been reintroduced in the early 19th century, was not yet a scientifically proven technology.
Ernest L. Ransome, an English-born engineer, was an early innovator of reinforced concrete techniques at the end of the 19th century. Using the knowledge of reinforced concrete developed during the previous 50 years, Ransome improved nearly all the styles and techniques of the earlier inventors of reinforced concrete. Ransome's key innovation was to twist the reinforcing steel bar, thereby improving its bond with the concrete. Gaining increasing fame from his concrete constructed buildings, Ransome was able to build in 1886-1889 two of the first reinforced concrete bridges in North America. One of his bridges still stands on Shelter Island in New York's East End.
One of the first concrete buildings constructed in the United States was a private home designed by William Ward, completed in 1876. The home was particularly designed to be fireproof.
G. A. Wayss was a German civil engineer and a pioneer of iron and steel concrete construction. In 1879, Wayss bought the German rights to Monier's patents and, in 1884, his firm, Wayss & Freytag, made the first commercial use of reinforced concrete. Up until the 1890s, Wayss and his firm greatly contributed to the advancement of Monier's system of reinforcing, establishing it as a well-developed scientific technology.
The Lamington Bridge was Australia's first large reinforced concrete road bridge. It was designed by Alfred Barton Brady, who was the Queensland Government Architect at the time of the bridge's construction in 1896. It has eleven 15.2-metre (50 ft) spans and a total length of 187 metres (614 ft), larger than any known comparable bridge in the world at that time.
One of the first skyscrapers made with reinforced concrete was the 16-story Ingalls Building in Cincinnati, constructed in 1904.
The first reinforced concrete building in Southern California was the Laughlin Annex in downtown Los Angeles, constructed in 1905. In 1906, 16 building permits were reportedly issued for reinforced concrete buildings in the City of Los Angeles, including the Temple Auditorium and 8-story Hayward Hotel.
In 1906, a partial collapse of the Bixby Hotel in Long Beach killed 10 workers during construction when shoring was removed prematurely. That event spurred a scrutiny of concrete erection practices and building inspections. The structure was constructed of reinforced concrete frames with hollow clay tile ribbed flooring and hollow clay tile infill walls. That practice was strongly questioned by experts and recommendations for "pure" concrete construction were made, using reinforced concrete for the floors and walls as well as the frames.
In April 1904, Julia Morgan, an American architect and engineer, who pioneered the aesthetic use of reinforced concrete, completed her first reinforced concrete structure, El Campanil, a 72-foot (22 m) bell tower at Mills College, which is located across the bay from San Francisco. Two years later, El Campanil survived the 1906 San Francisco earthquake without any damage, which helped build her reputation and launch her prolific career. The 1906 earthquake also changed the public's initial resistance to reinforced concrete as a building material, which had been criticized for its perceived dullness. In 1908, the San Francisco Board of Supervisors changed the city's building codes to allow wider use of reinforced concrete.
In 1906, the National Association of Cement Users (NACU) published Standard No. 1 and, in 1910, the Standard Building Regulations for the Use of Reinforced Concrete.
== Use in construction ==
Many different types of structures and components of structures can be built using reinforced concrete elements including slabs, walls, beams, columns, foundations, frames and more.
Reinforced concrete can be classified as precast or cast-in-place concrete.
Designing and implementing the most efficient floor system is key to creating optimal building structures. Small changes in the design of a floor system can have significant impact on material costs, construction schedule, ultimate strength, operating costs, occupancy levels and end use of a building.
Without reinforcement, constructing modern structures with concrete material would not be possible.
=== Reinforced concrete elements ===
When reinforced concrete elements are used in construction, these reinforced concrete elements exhibit basic behavior when subjected to external loads. Reinforced concrete elements may be subject to tension, compression, bending, shear, and/or torsion.
== Behavior ==
=== Materials ===
Concrete is a mixture of coarse (stone or brick chips) and fine (generally sand and/or crushed stone) aggregates with a paste of binder material (usually Portland cement) and water. When cement is mixed with a small amount of water, it hydrates to form microscopic opaque crystal lattices encapsulating and locking the aggregate into a rigid shape. The aggregates used for making concrete should be free from harmful substances like organic impurities, silt, clay, lignite, etc. Typical concrete mixes have high resistance to compressive stresses (about 4,000 psi (28 MPa)); however, any appreciable tension (e.g., due to bending) will break the microscopic rigid lattice, resulting in cracking and separation of the concrete. For this reason, typical non-reinforced concrete must be well supported to prevent the development of tension.
If a material with high strength in tension, such as steel, is placed in concrete, then the composite material, reinforced concrete, resists not only compression but also bending and other direct tensile actions. A composite section where the concrete resists compression and reinforcement "rebar" resists tension can be made into almost any shape and size for the construction industry.
=== Key characteristics ===
Three physical characteristics give reinforced concrete its special properties:
The coefficient of thermal expansion of concrete is similar to that of steel, eliminating large internal stresses due to differences in thermal expansion or contraction.
When the cement paste within the concrete hardens, this conforms to the surface details of the steel, permitting any stress to be transmitted efficiently between the different materials. Usually steel bars are roughened or corrugated to further improve the bond or cohesion between the concrete and steel.
The alkaline chemical environment provided by the alkali reserve (KOH, NaOH) and the portlandite (calcium hydroxide) contained in the hardened cement paste causes a passivating film to form on the surface of the steel, making it much more resistant to corrosion than it would be in neutral or acidic conditions. When the cement paste is exposed to the air and meteoric water reacts with the atmospheric CO2, the portlandite and the calcium silicate hydrate (CSH) of the hardened cement paste become progressively carbonated; the high pH then gradually decreases from 13.5–12.5 to 8.5 (the pH of water in equilibrium with calcite, i.e. calcium carbonate), and the steel is no longer passivated.
As a rule of thumb, and only to give an idea of the orders of magnitude involved, steel is protected at pH above ~11 but starts to corrode below ~10, depending on steel characteristics and local physico-chemical conditions, when concrete becomes carbonated. Carbonation of concrete, along with chloride ingress, is amongst the chief reasons for the failure of reinforcement bars in concrete.
The relative cross-sectional area of steel required for typical reinforced concrete is usually quite small and varies from 1% for most beams and slabs to 6% for some columns. Reinforcing bars are normally round in cross-section and vary in diameter. Reinforced concrete structures sometimes have provisions such as ventilated hollow cores to control their moisture & humidity.
Even with reinforcement, the strength characteristics of the concrete are distributed inhomogeneously across the cross-section of vertical reinforced concrete elements.
=== Mechanism of composite action of reinforcement and concrete ===
The reinforcement in a RC structure, such as a steel bar, has to undergo the same strain or deformation as the surrounding concrete in order to prevent discontinuity, slip or separation of the two materials under load. Maintaining composite action requires transfer of load between the concrete and steel. The direct stress is transferred from the concrete to the bar interface so as to change the tensile stress in the reinforcing bar along its length. This load transfer is achieved by means of bond (anchorage) and is idealized as a continuous stress field that develops in the vicinity of the steel-concrete interface.
The reasons that the two different material components concrete and steel can work together are as follows:
(1) Reinforcement can be well bonded to the concrete, thus they can jointly resist external loads and deform.
(2) The thermal expansion coefficients of concrete and steel are so close (about 1.0×10−5 to 1.5×10−5 per °C for concrete and about 1.2×10−5 per °C for steel) that thermal stress-induced damage to the bond between the two components is prevented (see the sketch after this list).
(3) Concrete can protect the embedded steel from corrosion and high-temperature induced softening.
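A minimal sketch of the point made in item (2): even at the extremes of the quoted ranges, the strain mismatch from a realistic temperature swing induces only a modest stress in the steel. The assumed steel modulus of 200 GPa and the helper function are illustrative, not taken from any code.

```python
ALPHA_CONCRETE = 1.2e-5   # per degC, within the 1.0e-5 to 1.5e-5 range quoted above
ALPHA_STEEL = 1.2e-5      # per degC
E_STEEL = 200_000         # MPa, typical value (assumed here for illustration)

def differential_thermal_stress(delta_T, alpha_c=ALPHA_CONCRETE, alpha_s=ALPHA_STEEL):
    """Upper-bound estimate of the stress induced in the steel if the
    differential thermal strain had to be absorbed entirely by the bar."""
    strain_mismatch = abs(alpha_s - alpha_c) * delta_T
    return strain_mismatch * E_STEEL  # MPa

# A 40 degC swing with a worst-case concrete coefficient of 1.0e-5 per degC:
print(differential_thermal_stress(40, alpha_c=1.0e-5))  # ~16 MPa, small vs. yield
```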
=== Anchorage (bond) in concrete: Codes of specifications ===
Because the actual bond stress varies along the length of a bar anchored in a zone of tension, current international codes of specifications use the concept of development length rather than bond stress. The main requirement for safety against bond failure is to provide a sufficient extension of the length of the bar beyond the point where the steel is required to develop its yield stress, and this length must be at least equal to its development length. However, if the actual available length is inadequate for full development, special anchorages must be provided, such as cogs, hooks or mechanical end plates. The same concept applies to the lap splice length given in the codes, where splices (overlaps) are provided between two adjacent bars in order to maintain the required continuity of stress in the splice zone.
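As a hedged illustration of the development-length concept, the sketch below uses a simplified ACI 318-style SI expression for small deformed bars in tension, ld = fy·ψt·ψe/(2.1·λ·√f'c)·db. Actual code provisions include further modification factors and minimum lengths, so the coefficients and the assumed 300 mm floor should be treated as indicative only.

```python
import math

def development_length_mm(db_mm, fy_MPa=420, fc_MPa=28,
                          psi_t=1.0, psi_e=1.0, lam=1.0):
    """Rough development length for a small deformed bar in tension,
    following a simplified ACI 318-style SI expression
    ld = fy*psi_t*psi_e / (2.1*lam*sqrt(f'c)) * db  (illustrative only)."""
    ld = fy_MPa * psi_t * psi_e / (2.1 * lam * math.sqrt(fc_MPa)) * db_mm
    return max(ld, 300.0)  # codes typically impose a minimum of roughly 300 mm

# 16 mm bar, Grade 420 steel, 28 MPa concrete:
print(round(development_length_mm(16)))  # on the order of 600 mm
```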
=== Anticorrosion measures ===
In wet and cold climates, reinforced concrete for roads, bridges, parking structures and other structures that may be exposed to deicing salt may benefit from use of corrosion-resistant reinforcement such as uncoated, low carbon/chromium (micro composite), epoxy-coated, hot dip galvanized or stainless steel rebar. Good design and a well-chosen concrete mix will provide additional protection for many applications.
Uncoated, low carbon/chromium rebar looks similar to standard carbon steel rebar due to its lack of a coating; its highly corrosion-resistant features are inherent in the steel microstructure. It can be identified by the unique ASTM specified mill marking on its smooth, dark charcoal finish. Epoxy-coated rebar can easily be identified by the light green color of its epoxy coating. Hot dip galvanized rebar may be bright or dull gray depending on length of exposure, and stainless rebar exhibits a typical white metallic sheen that is readily distinguishable from carbon steel reinforcing bar. Reference ASTM standard specifications A1035/A1035M Standard Specification for Deformed and Plain Low-carbon, Chromium, Steel Bars for Concrete Reinforcement, A767 Standard Specification for Hot Dip Galvanized Reinforcing Bars, A775 Standard Specification for Epoxy Coated Steel Reinforcing Bars and A955 Standard Specification for Deformed and Plain Stainless Bars for Concrete Reinforcement.
Another, cheaper way of protecting rebars is coating them with zinc phosphate. Zinc phosphate slowly reacts with calcium cations and the hydroxyl anions present in the cement pore water and forms a stable hydroxyapatite layer.
Penetrating sealants typically must be applied some time after curing. Sealants include paint, plastic foams, films and aluminum foil, felts or fabric mats sealed with tar, and layers of bentonite clay, sometimes used to seal roadbeds.
Corrosion inhibitors, such as calcium nitrite [Ca(NO2)2], can also be added to the water mix before pouring concrete. Generally, 1–2 wt. % of [Ca(NO2)2] with respect to cement weight is needed to prevent corrosion of the rebars. The nitrite anion is a mild oxidizer that oxidizes the soluble and mobile ferrous ions (Fe2+) present at the surface of the corroding steel and causes them to precipitate as an insoluble ferric hydroxide (Fe(OH)3). This causes the passivation of steel at the anodic oxidation sites. Nitrite is a much more active corrosion inhibitor than nitrate, which is a less powerful oxidizer of the divalent iron.
== Reinforcement and terminology of beams ==
A beam bends under bending moment, resulting in a small curvature. At the outer face (tensile face) of the curvature the concrete experiences tensile stress, while at the inner face (compressive face) it experiences compressive stress.
A singly reinforced beam is one in which the concrete element is only reinforced near the tensile face and the reinforcement, called tension steel, is designed to resist the tension.
A doubly reinforced beam is the section in which besides the tensile reinforcement the concrete element is also reinforced near the compressive face to help the concrete resist compression and take stresses. The latter reinforcement is called compression steel. When the compression zone of a concrete is inadequate to resist the compressive moment (positive moment), extra reinforcement has to be provided if the architect limits the dimensions of the section.
An under-reinforced beam is one in which the tension capacity of the tensile reinforcement is smaller than the combined compression capacity of the concrete and the compression steel (under-reinforced at tensile face). When the reinforced concrete element is subject to increasing bending moment, the tension steel yields while the concrete does not reach its ultimate failure condition. As the tension steel yields and stretches, an "under-reinforced" concrete also yields in a ductile manner, exhibiting a large deformation and warning before its ultimate failure. In this case the yield stress of the steel governs the design.
An over-reinforced beam is one in which the tension capacity of the tension steel is greater than the combined compression capacity of the concrete and the compression steel (over-reinforced at tensile face). The "over-reinforced concrete" beam therefore fails by crushing of the compressive-zone concrete before the tension zone steel yields, which does not provide any warning before failure as the failure is instantaneous.
A balanced-reinforced beam is one in which both the compressive and tensile zones reach yielding at the same imposed load, so that the concrete crushes and the tensile steel yields simultaneously. This design criterion is, however, as risky as over-reinforcement, because failure is sudden: the concrete crushes at the same time as the tensile steel yields, giving very little warning of distress.
Steel-reinforced concrete moment-carrying elements should normally be designed to be under-reinforced so that users of the structure will receive warning of impending collapse.
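A small sketch of how a designer might check whether a section is under- or over-reinforced, comparing the reinforcement ratio with the classic ACI-style balanced ratio ρb = 0.85·β1·(f'c/fy)·(600/(600+fy)) in SI units. The function names and the example bar arrangement are illustrative assumptions, not a complete code check.

```python
def balanced_ratio(fc_MPa, fy_MPa):
    """Balanced reinforcement ratio per the classic ACI-style expression
    rho_b = 0.85*beta1*(f'c/fy)*(600/(600+fy)) in SI units (illustrative)."""
    beta1 = max(0.65, 0.85 - 0.05 * max(0.0, fc_MPa - 28.0) / 7.0)
    return 0.85 * beta1 * (fc_MPa / fy_MPa) * (600.0 / (600.0 + fy_MPa))

def classify_section(As_mm2, b_mm, d_mm, fc_MPa=28, fy_MPa=420):
    """Compare the provided reinforcement ratio with the balanced ratio."""
    rho = As_mm2 / (b_mm * d_mm)
    rho_b = balanced_ratio(fc_MPa, fy_MPa)
    if rho < rho_b:
        return "under-reinforced (ductile, steel yields first)", rho, rho_b
    if rho > rho_b:
        return "over-reinforced (brittle, concrete crushes first)", rho, rho_b
    return "balanced", rho, rho_b

# Three 20 mm bars (As ~ 942 mm^2) in a 300 mm wide section with 450 mm effective depth:
print(classify_section(942, 300, 450))
```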
The characteristic strength is the strength of a material where less than 5% of the specimen shows lower strength.
The design strength or nominal strength is the strength of a material, including a material-safety factor. The value of the safety factor generally ranges from 0.75 to 0.85 in Permissible stress design.
The ultimate limit state is the theoretical failure point with a certain probability. It is stated under factored loads and factored resistances.
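The 5% fractile definition of characteristic strength can be estimated from test data by assuming a normal distribution, i.e. fk ≈ mean − 1.645 × standard deviation. The sketch below shows this calculation; the cube results listed are invented for illustration.

```python
import statistics

def characteristic_strength(test_results_MPa):
    """5%-fractile (characteristic) strength assuming normally distributed
    test results: f_k = mean - 1.645 * standard deviation."""
    mean = statistics.mean(test_results_MPa)
    sd = statistics.stdev(test_results_MPa)
    return mean - 1.645 * sd

cube_tests = [31.5, 29.8, 33.2, 30.1, 32.4, 28.9, 31.0, 30.6]  # MPa, assumed data
print(round(characteristic_strength(cube_tests), 1))
```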
Reinforced concrete structures are normally designed according to rules and regulations or the recommendations of a code such as ACI-318, CEB, Eurocode 2 or the like. WSD, USD or LRFD methods are used in the design of RC structural members. Analysis and design of RC members can be carried out using linear or non-linear approaches. When safety factors are applied, building codes normally propose linear approaches, although non-linear approaches are used in some cases. Examples of non-linear numerical simulation and calculation are given in the references.
== Prestressed concrete ==
Prestressing concrete is a technique that greatly increases the load-bearing strength of concrete beams. The reinforcing steel in the bottom part of the beam, which will be subjected to tensile forces when in service, is placed in tension before the concrete is poured around it. Once the concrete has hardened, the tension on the reinforcing steel is released, placing a built-in compressive force on the concrete. When loads are applied, the reinforcing steel takes on more stress and the compressive force in the concrete is reduced, but does not become a tensile force. Since the concrete is always under compression, it is less subject to cracking and failure.
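A minimal sketch of the stress superposition behind prestressing: for an uncracked rectangular section, the bottom-fibre stress is the sum of the axial prestress, the prestress-eccentricity moment and the service moment, σ = −P/A − P·e·c/I + M·c/I (tension positive). The dimensions and loads in the example are assumed values chosen only to show that the fibre can remain in compression under load.

```python
def bottom_fibre_stress(P_kN, e_mm, M_kNm, b_mm, h_mm):
    """Bottom-fibre stress (MPa, tension positive) of a rectangular
    prestressed section under axial prestress P at eccentricity e plus a
    service moment M, using simple uncracked elastic superposition."""
    A = b_mm * h_mm                 # area, mm^2
    I = b_mm * h_mm**3 / 12.0       # second moment of area, mm^4
    c = h_mm / 2.0                  # distance to bottom fibre, mm
    P = P_kN * 1e3                  # N
    M = M_kNm * 1e6                 # N*mm
    return -P / A - P * e_mm * c / I + M * c / I

# 300 x 600 mm beam, 800 kN prestress at 150 mm eccentricity, 180 kN*m service moment:
print(round(bottom_fibre_stress(800, 150, 180, 300, 600), 2))  # negative = still in compression
```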
== Common failure modes of steel reinforced concrete ==
Reinforced concrete can fail due to inadequate strength, leading to mechanical failure, or due to a reduction in its durability. Corrosion and freeze/thaw cycles may damage poorly designed or constructed reinforced concrete. When rebar corrodes, the oxidation products (rust) expand and tend to flake, cracking the concrete and unbonding the rebar from the concrete. Typical mechanisms leading to durability problems are discussed below.
=== Mechanical failure ===
Cracking of the concrete section is nearly impossible to prevent; however, the size and location of cracks can be limited and controlled by appropriate reinforcement, control joints, curing methodology and concrete mix design. Cracking can allow moisture to penetrate and corrode the reinforcement. This is a serviceability failure in limit state design. Cracking is normally the result of an inadequate quantity of rebar, or rebar spaced at too great a distance. The concrete cracks either under excess loading, or due to internal effects such as early thermal shrinkage while it cures.
Ultimate failure leading to collapse can be caused by crushing the concrete, which occurs when compressive stresses exceed its strength, by yielding or failure of the rebar when bending or shear stresses exceed the strength of the reinforcement, or by bond failure between the concrete and the rebar.
=== Carbonation ===
Carbonation, or neutralisation, is a chemical reaction between carbon dioxide in the air and calcium hydroxide and hydrated calcium silicate in the concrete.
When a concrete structure is designed, it is usual to specify the concrete cover for the rebar (the depth of the rebar within the object). The minimum concrete cover is normally regulated by design or building codes. If the reinforcement is too close to the surface, early failure due to corrosion may occur. The concrete cover depth can be measured with a cover meter. However, carbonated concrete incurs a durability problem only when there is also sufficient moisture and oxygen to cause electropotential corrosion of the reinforcing steel.
One method of testing a structure for carbonation is to drill a fresh hole in the surface and then treat the cut surface with phenolphthalein indicator solution. This solution turns pink when in contact with alkaline concrete, making it possible to see the depth of carbonation. Using an existing hole does not suffice because the exposed surface will already be carbonated.
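Carbonation depth is often approximated by the square-root-of-time model d ≈ k√t, which gives a rough estimate of how long the carbonation front takes to reach the cover depth. The carbonation coefficient used below (4 mm/√year) is an assumed, order-of-magnitude value; real values depend strongly on concrete quality and exposure.

```python
def years_to_carbonate_cover(cover_mm, k_mm_per_sqrt_year):
    """Estimate the time for the carbonation front to reach the rebar,
    using the common square-root-of-time model d = k*sqrt(t)."""
    return (cover_mm / k_mm_per_sqrt_year) ** 2

# With a 30 mm cover and an assumed carbonation coefficient of 4 mm/sqrt(year):
print(years_to_carbonate_cover(30, 4.0))  # roughly 56 years
```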
=== Chlorides ===
Chlorides can promote the corrosion of embedded rebar if present in sufficiently high concentration. Chloride anions induce both localized corrosion (pitting corrosion) and generalized corrosion of steel reinforcements. For this reason, only fresh or potable water should be used for mixing concrete, the coarse and fine aggregates should be free of chlorides, and admixtures that might contain chlorides should be avoided.
It was once common for calcium chloride to be used as an admixture to promote rapid set-up of the concrete. It was also mistakenly believed that it would prevent freezing. However, this practice fell into disfavor once the deleterious effects of chlorides became known. It should be avoided whenever possible.
The use of de-icing salts on roadways, used to lower the freezing point of water, is probably one of the primary causes of premature failure of reinforced or prestressed concrete bridge decks, roadways, and parking garages. The use of epoxy-coated reinforcing bars and the application of cathodic protection have mitigated this problem to some extent. FRP (fiber-reinforced polymer) rebars are also known to be less susceptible to chlorides. Properly designed concrete mixtures that have been allowed to cure properly are effectively impervious to the effects of de-icers.
Another important source of chloride ions is sea water. Sea water contains by weight approximately 3.5% salts, including sodium chloride, magnesium sulfate, calcium sulfate, and bicarbonates. In water these salts dissociate into free ions (Na+, Mg2+, Cl−, SO42−, HCO3−) that migrate with the water into the capillaries of the concrete. Chloride ions, which make up about 50% of these ions, are particularly aggressive as a cause of corrosion of carbon steel reinforcement bars.
In the 1960s and 1970s it was also relatively common for magnesite, a chloride rich carbonate mineral, to be used as a floor-topping material. This was done principally as a levelling and sound attenuating layer. However it is now known that when these materials come into contact with moisture they produce a weak solution of hydrochloric acid due to the presence of chlorides in the magnesite. Over a period of time (typically decades), the solution causes corrosion of the embedded rebars. This was most commonly found in wet areas or areas repeatedly exposed to moisture.
=== Alkali silica reaction ===
This is a reaction of amorphous silica (chalcedony, chert, siliceous limestone) sometimes present in the aggregates with the hydroxyl ions (OH−) of the cement pore solution. Poorly crystallized silica (SiO2) dissolves and dissociates at high pH (12.5–13.5) in alkaline water. The dissolved silicic acid then reacts in the pore water with the calcium hydroxide (portlandite) present in the cement paste to form an expansive calcium silicate hydrate (CSH). The alkali–silica reaction (ASR) causes localised swelling responsible for tensile stress and cracking. Three conditions are required for the alkali–silica reaction:
(1) aggregate containing an alkali-reactive constituent (amorphous silica), (2) sufficient availability of hydroxyl ions (OH−), and (3) sufficient moisture, above 75% relative humidity (RH) within the concrete. This phenomenon is sometimes popularly referred to as "concrete cancer". This reaction occurs independently of the presence of rebars; massive concrete structures such as dams can be affected.
=== Conversion of high alumina cement ===
Resistant to weak acids and especially sulfates, this cement cures quickly and has very high durability and strength. It was frequently used after World War II to make precast concrete objects. However, it can lose strength with heat or time (conversion), especially when not properly cured. After the collapse of three roofs made of prestressed concrete beams using high alumina cement, this cement was banned in the UK in 1976. Subsequent inquiries into the matter showed that the beams were improperly manufactured, but the ban remained.
=== Sulfates ===
Sulfates (SO4) in the soil or in groundwater, in sufficient concentration, can react with the Portland cement in concrete causing the formation of expansive products, e.g., ettringite or thaumasite, which can lead to early failure of the structure. The most typical attack of this type is on concrete slabs and foundation walls at grades where the sulfate ion, via alternate wetting and drying, can increase in concentration. As the concentration increases, the attack on the Portland cement can begin. For buried structures such as pipe, this type of attack is much rarer, especially in the eastern United States. The sulfate ion concentration increases much slower in the soil mass and is especially dependent upon the initial amount of sulfates in the native soil. A chemical analysis of soil borings to check for the presence of sulfates should be undertaken during the design phase of any project involving concrete in contact with the native soil. If the concentrations are found to be aggressive, various protective coatings can be applied. Also, in the US ASTM C150 Type 5 Portland cement can be used in the mix. This type of cement is designed to be particularly resistant to a sulfate attack.
== Steel plate construction ==
In steel plate construction, stringers join parallel steel plates. The plate assemblies are fabricated off site, and welded together on-site to form steel walls connected by stringers. The walls become the form into which concrete is poured. Steel plate construction speeds reinforced concrete construction by cutting out the time-consuming on-site manual steps of tying rebar and building forms. The method results in excellent strength because the steel is on the outside, where tensile forces are often greatest.
== Fiber-reinforced concrete ==
Fiber reinforcement is mainly used in shotcrete, but can also be used in normal concrete. Fiber-reinforced normal concrete is mostly used for on-ground floors and pavements, but can also be considered for a wide range of construction parts (beams, pillars, foundations, etc.), either alone or with hand-tied rebars.
Concrete reinforced with fibers (which are usually steel, glass, plastic fibers) or cellulose polymer fiber is less expensive than hand-tied rebar. The shape, dimension, and length of the fiber are important. A thin and short fiber, for example short, hair-shaped glass fiber, is only effective during the first hours after pouring the concrete (its function is to reduce cracking while the concrete is stiffening), but it will not increase the concrete tensile strength. A normal-size fiber for European shotcrete (1 mm diameter, 45 mm length—steel or plastic) will increase the concrete's tensile strength. Fiber reinforcement is most often used to supplement or partially replace primary rebar, and in some cases it can be designed to fully replace rebar.
Steel is the strongest commonly available fiber, and comes in different lengths (30 to 80 mm in Europe) and shapes (end-hooks). Steel fibers can only be used on surfaces that can tolerate or avoid corrosion and rust stains. In some cases, a steel-fiber surface is faced with other materials.
Glass fiber is inexpensive and corrosion-proof, but not as ductile as steel. Recently, spun basalt fiber, long available in Eastern Europe, has become available in the U.S. and Western Europe. Basalt fiber is stronger and less expensive than glass, but historically has not resisted the alkaline environment of Portland cement well enough to be used as direct reinforcement. New materials use plastic binders to isolate the basalt fiber from the cement.
The premium fibers are graphite-reinforced plastic fibers, which are nearly as strong as steel, lighter in weight, and corrosion-proof. Some experiments have had promising early results with carbon nanotubes, but the material is still far too expensive for any building.
== Non-steel reinforcement ==
There is considerable overlap between the subjects of non-steel reinforcement and fiber-reinforcement of concrete. The introduction of non-steel reinforcement of concrete is relatively recent; it takes two major forms: non-metallic rebar rods, and non-steel (usually also non-metallic) fibers incorporated into the cement matrix. For example, there is increasing interest in glass fiber reinforced concrete (GFRC) and in various applications of polymer fibers incorporated into concrete. Although currently there is not much suggestion that such materials will replace metal rebar, some of them have major advantages in specific applications, and there also are new applications in which metal rebar simply is not an option. However, the design and application of non-steel reinforcing is fraught with challenges. For one thing, concrete is a highly alkaline environment, in which many materials, including most kinds of glass, have a poor service life. Also, the behavior of such reinforcing materials differs from the behavior of metals, for instance in terms of shear strength, creep and elasticity.
Fiber-reinforced plastic/polymer (FRP) and glass-reinforced plastic (GRP) consist of fibers of polymer, glass, carbon, aramid or other polymers or high-strength fibers set in a resin matrix to form a rebar rod, or grid, or fiber. These rebars are installed in much the same manner as steel rebars. The cost is higher but, suitably applied, the structures have advantages, in particular a dramatic reduction in problems related to corrosion, either by intrinsic concrete alkalinity or by external corrosive fluids that might penetrate the concrete. These structures can be significantly lighter and usually have a longer service life. The cost of these materials has dropped dramatically since their widespread adoption in the aerospace industry and by the military.
In particular, FRP rods are useful for structures where the presence of steel would not be acceptable. For example, MRI machines have huge magnets, and accordingly require non-magnetic buildings. Again, toll booths that read radio tags need reinforced concrete that is transparent to radio waves. Also, where the design life of the concrete structure is more important than its initial costs, non-steel reinforcing often has its advantages where corrosion of reinforcing steel is a major cause of failure. In such situations corrosion-proof reinforcing can extend a structure's life substantially, for example in the intertidal zone. FRP rods may also be useful in situations where it is likely that the concrete structure may be compromised in future years, for example the edges of balconies when balustrades are replaced, and bathroom floors in multi-story construction where the service life of the floor structure is likely to be many times the service life of the waterproofing building membrane.
Plastic reinforcement often is stronger, or at least has a better strength to weight ratio than reinforcing steels. Also, because it resists corrosion, it does not need a protective concrete cover as thick as steel reinforcement does (typically 30 to 50 mm or more). FRP-reinforced structures therefore can be lighter and last longer. Accordingly, for some applications the whole-life cost will be price-competitive with steel-reinforced concrete.
The material properties of FRP or GRP bars differ markedly from steel, so there are differences in the design considerations. FRP or GRP bars have relatively higher tensile strength but lower stiffness, so that deflections are likely to be higher than for equivalent steel-reinforced units. Structures with internal FRP reinforcement typically have an elastic deformability comparable to the plastic deformability (ductility) of steel reinforced structures. Failure in either case is more likely to occur by compression of the concrete than by rupture of the reinforcement. Deflection is always a major design consideration for reinforced concrete. Deflection limits are set to ensure that crack widths in steel-reinforced concrete are controlled to prevent water, air or other aggressive substances reaching the steel and causing corrosion. For FRP-reinforced concrete, aesthetics and possibly water-tightness will be the limiting criteria for crack width control. FRP rods also have relatively lower compressive strengths than steel rebar, and accordingly require different design approaches for reinforced concrete columns.
One drawback to the use of FRP reinforcement is their limited fire resistance. Where fire safety is a consideration, structures employing FRP have to maintain their strength and the anchoring of the forces at temperatures to be expected in the event of fire. For purposes of fireproofing, an adequate thickness of cement concrete cover or protective cladding is necessary. The addition of 1 kg/m3 of polypropylene fibers to concrete has been shown to reduce spalling during a simulated fire. (The improvement is thought to be due to the formation of pathways out of the bulk of the concrete, allowing steam pressure to dissipate.)
Another problem is the effectiveness of shear reinforcement. FRP rebar stirrups formed by bending before hardening generally perform relatively poorly in comparison to steel stirrups or to structures with straight fibers. When strained, the zone between the straight and curved regions are subject to strong bending, shear, and longitudinal stresses. Special design techniques are necessary to deal with such problems.
There is growing interest in applying external reinforcement to existing structures using advanced materials such as composite (fiberglass, basalt, carbon) rebar, which can impart exceptional strength. Worldwide, a number of brands of composite rebar are recognized by different countries, such as Aslan, DACOT, V-rod, and ComBar. The number of projects using composite rebar is increasing around the world, in countries ranging from the USA, Russia, and South Korea to Germany.
== See also ==
Anchorage in reinforced concrete
Concrete cover
Concrete slab
Corrosion engineering
Cover meter
Falsework
Ferrocement
Formwork
Henri de Miffonis
Interfacial transition zone
Precast concrete
Reinforced concrete structures durability
Reinforced solid
Structural robustness
Types of concrete
== References ==
=== Further reading / External links ===
Threlfall A., et al. Reynolds's Reinforced Concrete Designer's Handbook – 11th ed. ISBN 978-0-419-25830-8.
Newby F., Early Reinforced Concrete, Ashgate Variorum, 2001, ISBN 978-0-86078-760-0.
Kim, S., Surek, J and J. Baker-Jarvis. "Electromagnetic Metrology on Concrete and Corrosion." Journal of Research of the National Institute of Standards and Technology, Vol. 116, No. 3 (May–June 2011): 655–669.
Daniel R., Formwork UK, "Concrete frame structures".
Newey, Charles; Weaver, Graham (1990). Materials Principles and Practice. Milton Keynes: Materials Dept., Open University. ISBN 0-408-02730-4. OCLC 19553645.
Weidmann, George; Lewis, P. R.; Reid, Nick (1990). Structural Materials. Milton Keynes: Materials Dept., Open University. p. 357. ISBN 0-408-04658-9. OCLC 20693897.
Page, C. L.; Bamforth, P. B.; Figg, J. W., eds. (1996). Corrosion of Reinforcement in Concrete Construction. Cambridge: Royal Society of Chemistry, Information Services. ISBN 0-85404-731-X. OCLC 35233292.
Short documentary about reinforced concrete and its challenges, 2024 (The Aesthetic City)
Eisenbach, Philipp (2017). "Concrete in the 19th century". Processing of Slender Concrete Shells - Fabrication and Installation. Kassel: Kassel University Press. pp. 49–51. ISBN 978-3-7376-0259-4. | Wikipedia/Reinforced_concrete |
Structural mechanics or mechanics of structures is the computation of deformations, deflections, and internal forces or stresses (stress equivalents) within structures, either for design or for performance evaluation of existing structures. It is one subset of structural analysis. Structural mechanics analysis needs input data such as structural loads, the structure's geometric representation and support conditions, and the materials' properties. Output quantities may include support reactions, stresses and displacements. Advanced structural mechanics may include the effects of stability and non-linear behaviors.
Mechanics of structures is a field of study within applied mechanics that investigates the behavior of structures under mechanical loads, such as bending of a beam, buckling of a column, torsion of a shaft, deflection of a thin shell, and vibration of a bridge.
There are three broad approaches to the analysis: the energy methods; the flexibility method and the direct stiffness method, which later developed into the finite element method; and the plastic analysis approach.
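A minimal sketch of the direct stiffness method for the simplest possible case, a one-dimensional bar of axial elements in series: element stiffness matrices are assembled into a global matrix, restrained degrees of freedom are removed, and the remaining system is solved for nodal displacements. The material data and loads are assumed example values.

```python
import numpy as np

def axial_bar_displacements(E, A, L_elems, loads, fixed_dofs):
    """Direct stiffness method for a 1-D bar made of axial elements in series.
    E, A: modulus (MPa) and cross-section (mm^2); L_elems: element lengths (mm);
    loads: nodal force vector (N); fixed_dofs: indices of restrained nodes."""
    n_nodes = len(L_elems) + 1
    K = np.zeros((n_nodes, n_nodes))
    for e, L in enumerate(L_elems):
        k = E * A / L
        K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    free = [i for i in range(n_nodes) if i not in fixed_dofs]
    u = np.zeros(n_nodes)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], np.asarray(loads, float)[free])
    return u

# Steel bar (E = 200 GPa, A = 500 mm^2), two 1 m elements, 10 kN pulling the free end:
u = axial_bar_displacements(200e3, 500, [1000.0, 1000.0], [0.0, 0.0, 10e3], fixed_dofs=[0])
print(u)  # displacements in mm; the free end moves about 0.2 mm
```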
== Energy method ==
Energy principles in structural mechanics
== Flexibility method ==
Flexibility method
== Stiffness methods ==
Direct stiffness method
Finite element method in structural mechanics
== Plastic analysis approach ==
Plastic analysis
== Major topics ==
Beam theory
Buckling
Earthquake engineering
Finite element method in structural mechanics
Plates and shells
Torsion
Trusses
Stiffening
Structural dynamics
Structural instability
== References == | Wikipedia/Structure_mechanics |
Castigliano's method, named after Carlo Alberto Castigliano, is a method for determining the displacements of a linear-elastic system based on the partial derivatives of the energy. The basic concept may be easy to understand by recalling that a change in energy is equal to the causing force times the resulting displacement. Therefore, the causing force is equal to the change in energy divided by the resulting displacement. Alternatively, the resulting displacement is equal to the change in energy divided by the causing force. Partial derivatives are needed to relate causing forces and resulting displacements to the change in energy.
== Castigliano's theorems ==
=== Castigliano's first theorem – for forces in an elastic structure ===
Castigliano's method for calculating forces is an application of his first theorem, which states: if the strain energy of an elastic structure can be expressed as a function of generalised displacement qi, then the partial derivative of the strain energy with respect to generalised displacement gives the generalised force Qi. In equation form,
Q_i = \frac{\partial U}{\partial q_i}
where U is the strain energy.
=== Castigliano's second theorem – for displacements in a linearly elastic structure ===
Castigliano's method for calculating displacements is an application of his second theorem, which states: if the strain energy of a linearly elastic structure can be expressed as a function of generalised force Qi, then the partial derivative of the strain energy with respect to generalised force gives the generalised displacement qi in the direction of Qi. As above, the second theorem can also be expressed mathematically:
q_i = \frac{\partial U}{\partial Q_i}
If the force-displacement curve is nonlinear then the complementary strain energy needs to be used instead of strain energy.
== Examples ==
For a thin, straight cantilever beam with a load P at the free end, the end displacement \delta can be found by Castigliano's second theorem:
\delta = \frac{\partial U}{\partial P} = \frac{\partial}{\partial P}\int_0^L \frac{M^2(x)}{2EI}\,dx = \frac{\partial}{\partial P}\int_0^L \frac{(Px)^2}{2EI}\,dx
where E is Young's modulus, I is the second moment of area of the cross-section, and M(x) = Px is the internal bending moment at a distance x from the free end. The integral evaluates to:
\delta = \frac{\partial}{\partial P}\int_0^L \frac{P^2 x^2}{2EI}\,dx = \frac{\partial}{\partial P}\,\frac{P^2 L^3}{6EI} = \frac{PL^3}{3EI}.
The result is the standard formula given for cantilever beams under end loads.
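The cantilever result above can be reproduced symbolically; a minimal sketch using SymPy (assuming it is available) follows the same steps: form the strain energy from M(x) = Px and differentiate with respect to P.

```python
import sympy as sp

P, x, L, E, I = sp.symbols('P x L E I', positive=True)

# Bending moment in the cantilever at distance x from the loaded end
M = P * x

# Strain energy stored in bending: U = integral of M^2 / (2 E I) dx over the span
U = sp.integrate(M**2 / (2 * E * I), (x, 0, L))

# Castigliano's second theorem: the end deflection is dU/dP
delta = sp.diff(U, P)
print(sp.simplify(delta))   # P*L**3/(3*E*I)
```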
Castigliano's theorems apply only if the strain energy is finite, which is the case when m − i > n/2. Here m = 1, 2 is the order of the energy (the highest derivative appearing in the energy), i = 0, 1, 2, 3 is the index of the Dirac delta representing the load (a single force corresponds to i = 0), and n = 1, 2, 3 is the dimension of the space. Second-order equations (m = 1) admit two Dirac deltas: force (i = 0) and dislocation (i = 1); fourth-order equations (m = 2) admit four: force (i = 0), moment (i = 1), bend (i = 2) and dislocation (i = 3).
Example: if a plate (m = 1, n = 2) is loaded with a single force (i = 0), the inequality is not satisfied, since 1 − 0 ≯ 2/2, and neither is it in 3-D (m = 1, n = 3, 1 − 0 ≯ 3/2). Nor does it hold for a membrane (Laplace equation, m = 1, n = 2, i = 0) or a Reissner–Mindlin plate (m = 1, n = 2, i = 0). In general, Castigliano's theorems do not apply to 2-D and 3-D problems. The exception is the Kirchhoff plate (m = 2, n = 2, i = 0), since 2 − 0 > 2/2; but a moment (i = 1) causes the energy of a Kirchhoff plate to become infinite, since 2 − 1 ≯ 2/2. In 1-D problems the strain energy is finite if m − i > 1/2.
Menabrea's theorem is subject to the same restriction: it requires that m − i > n/2 holds, where i is now the order of the support reaction (single force i = 0, moment i = 1). Except for a Kirchhoff plate with i = 0 (a single force as support reaction), it is generally not valid in 2-D and 3-D, because the presence of point supports results in infinitely large energy.
== External links ==
Carlo Alberto Castigliano
Castigliano's method: some examples (in German)
== References == | Wikipedia/Castigliano's_method |
Fibre-reinforced plastic (FRP; also called fibre-reinforced polymer, or in American English fiber) is a composite material made of a polymer matrix reinforced with fibres. The fibres are usually glass (in fibreglass), carbon (in carbon-fibre-reinforced polymer), aramid, or basalt. Rarely, other fibres such as paper, wood, boron, or asbestos have been used. The polymer is usually an epoxy, vinyl ester, or polyester thermosetting plastic, though phenol formaldehyde resins are still in use.
FRPs are commonly used in the aerospace, automotive, marine, and construction industries. They are commonly found in ballistic armour and cylinders for self-contained breathing apparatuses.
== History ==
Bakelite was the first fibre-reinforced plastic. Leo Baekeland had originally set out to find a replacement for shellac (made from the excretion of lac bugs). Chemists had begun to recognize that many natural resins and fibres were polymers, and Baekeland investigated the reactions of phenol and formaldehyde. He first produced a soluble phenol-formaldehyde shellac called "Novolak" that never became a market success, then turned to developing a binder for asbestos which, at that time, was moulded with rubber. By controlling the pressure and temperature applied to phenol and formaldehyde, he found in 1905 he could produce his dreamed of hard mouldable material (the world's first synthetic plastic): bakelite. He announced his invention at a meeting of the American Chemical Society on 5 February 1909.
The development of fibre-reinforced plastic for commercial use was being extensively researched in the 1930s. In the United Kingdom, considerable research was undertaken by pioneers such as Norman de Bruyne. It was particularly of interest to the aviation industry.
Mass production of glass strands was discovered in 1932, when Games Slayter, a researcher at Owens-Illinois accidentally directed a jet of compressed air at a stream of molten glass and produced fibres. A patent for this method of producing glass wool was first applied for in 1933.
Owens joined with the Corning company in 1935 and the method was adapted by Owens Corning to produce its patented "fibreglas" (one "s") in 1936. Originally, fibreglas was a glass wool with fibres entrapping a great deal of gas, making it useful as an insulator, especially at high temperatures.
A suitable resin for combining the "fibreglas" with a plastic to produce a composite material was developed in 1936 by du Pont. The first ancestor of modern polyester resins is Cyanamid's resin of 1942. Peroxide curing systems were used by then. With the combination of fibreglas and resin the gas content of the material was replaced by plastic. This reduced the insulation properties to values typical of the plastic, but now for the first time the composite showed great strength and promise as a structural and building material. Confusingly, many glass fibre composites continued to be called "fibreglass" (as a generic name) and the name was also used for the low-density glass wool product containing gas instead of plastic.
Ray Greene of Owens Corning is credited with producing the first composite boat in 1937, but did not proceed further at the time due to the brittle nature of the plastic used. In 1939, Russia was reported to have constructed a passenger boat of plastic materials, and the United States a fuselage and wings of an aircraft. The first car to have a fibre-glass body was the 1946 Stout Scarab. Only one of this model was built. The Ford prototype of 1941 could have been the first plastic car, but there is some uncertainty around the materials used as it was destroyed shortly afterwards.
The first fibre-reinforced plastic plane was either the Fairchild F-46, first flown on 12 May 1937, or the Californian built Bennett Plastic Plane. A fibreglass fuselage was used on a modified Vultee BT-13A designated the XBT-16 based at Wright Field in late 1942. In 1943, further experiments were undertaken building structural aircraft parts from composite materials resulting in the first plane, a Vultee BT-15, with a GFRP fuselage, designated the XBT-19, being flown in 1944. A significant development in the tooling for GFRP components had been made by Republic Aviation Corporation in 1943.
Carbon fibre production began in the late 1950s and was used, though not widely in British industry until the early 1960s. Aramid fibres were being produced around this time also, appearing first under the trade name Nomex by DuPont. Today, each of these fibres is used widely in industry for any applications that require plastics with specific strength or elastic qualities. Glass fibres are the most common across all industries, although carbon-fibre and carbon-fibre-aramid composites are widely found in aerospace, automotive and sporting good applications. These three (glass, carbon, and aramid) continue to be the important categories of fibre used in FRP.
Global polymer production on the scale present today began in the mid 20th century, when low material and productions costs, new production technologies and new product categories, combined to make polymer production economical. The industry finally matured in the late 1970s, when world polymer production surpassed that of steel, making polymers the ubiquitous material that they are today. Fibre-reinforced plastics have been a significant aspect of this industry from the beginning.
== Process definition ==
A polymer is generally manufactured by step-growth polymerization or addition polymerization. When one or more polymers are combined with various agents to enhance or in any way alter their material properties, the result is referred to as a plastic. Composite plastics refers to those types of plastics that result from bonding two or more homogeneous materials with different material properties to derive a final product with certain desired material and mechanical properties. Fibre-reinforced plastics are a category of composite plastics that specifically use fibre materials to mechanically enhance the strength and elasticity of plastics.
The original plastic material without fibre reinforcement is known as the matrix or binding agent. The matrix is a tough but relatively weak plastic that is reinforced by stronger stiffer reinforcing filaments or fibres. The extent that strength and elasticity are enhanced in a fibre-reinforced plastic depends on the mechanical properties of both the fibre and matrix, their volume relative to one another, and the fibre length and orientation within the matrix. Reinforcement of the matrix occurs by definition when the FRP material exhibits increased strength or elasticity relative to the strength and elasticity of the matrix alone.
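A minimal sketch of how the fibre and matrix properties and their volume fractions combine, using the classical rule of mixtures for the longitudinal modulus and the inverse rule for the transverse modulus of a unidirectional composite. The E-glass/epoxy values used are typical textbook figures, assumed here for illustration.

```python
def rule_of_mixtures(E_fibre, E_matrix, V_fibre):
    """Longitudinal (rule of mixtures) and transverse (inverse rule) moduli
    of a unidirectional composite from its constituent properties."""
    V_matrix = 1.0 - V_fibre
    E_longitudinal = V_fibre * E_fibre + V_matrix * E_matrix
    E_transverse = 1.0 / (V_fibre / E_fibre + V_matrix / E_matrix)
    return E_longitudinal, E_transverse

# E-glass (~72 GPa) in epoxy (~3.5 GPa) at 60% fibre volume fraction (assumed values):
print(rule_of_mixtures(72.0, 3.5, 0.60))  # GPa; roughly (44.6, 8.2)
```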
== Process description ==
FRP involves two distinct processes, the first is the process whereby the fibrous material is manufactured and formed, the second is the process whereby fibrous materials are bonded with the matrix during moulding.
=== Fibre ===
==== Manufacture of fibre fabric ====
Reinforcing Fibre is manufactured in both two-dimensional and three-dimensional orientations:
Two-dimensional fibre glass-reinforced polymer is characterized by a laminated structure in which the fibres are only aligned along the plane in the x-direction and y-direction of the material. This means that no fibres are aligned in the through-thickness (z) direction; this lack of through-thickness alignment can create a disadvantage in cost and processing. Costs and labour increase because conventional processing techniques used to fabricate composites, such as wet hand lay-up, autoclave and resin transfer moulding, require a high amount of skilled labour to cut, stack and consolidate into a preformed component.
Three-dimensional fibreglass-reinforced polymer composites are materials with three-dimensional fibre structures that incorporate fibres in the x-direction, y-direction and z-direction. The development of three-dimensional orientations arose from industry's need to reduce fabrication costs, to increase through-thickness mechanical properties, and to improve impact damage tolerance; all were problems associated with two-dimensional fibre-reinforced polymers.
==== Manufacture of fibre preforms ====
Fibre preforms are how the fibres are manufactured before being bonded to the matrix. Fibre preforms are often manufactured in sheets, continuous mats, or as continuous filaments for spray applications. The four major ways to manufacture the fibre preform is through the textile processing techniques of weaving, knitting, braiding and stitching.
Weaving can be done in a conventional manner to produce two-dimensional fibres as well as in a multilayer weaving that can create three-dimensional fibres. However, multilayer weaving requires multiple layers of warp yarns to create fibres in the z-direction, creating a few disadvantages in manufacturing, namely the time to set up all the warp yarns on the loom. Therefore, most multilayer weaving is currently used to produce relatively narrow width products, or high value products where the cost of the preform production is acceptable. Another one of the main problems facing the use of multilayer woven fabrics is the difficulty in producing a fabric that contains fibres oriented at other than right angles to each other.
The second major way of manufacturing fibre preforms is braiding. Braiding is suited to the manufacture of narrow width flat or tubular fabric and is not as capable as weaving in the production of large volumes of wide fabrics. Braiding is done over top of mandrels that vary in cross-sectional shape or dimension along their length. Braiding is limited to objects about a brick in size. Unlike standard weaving, braiding can produce fabric that contains fibres at 45-degree angles to one another. Braiding three-dimensional fibres can be done using four-step, two-step or multilayer interlock braiding.

Four-step or row and column braiding utilizes a flat bed containing rows and columns of yarn carriers that form the shape of the desired preform. Additional carriers are added to the outside of the array, the precise location and quantity of which depends upon the exact preform shape and structure required. There are four separate sequences of row and column motion, which act to interlock the yarns and produce the braided preform. The yarns are mechanically forced into the structure between each step to consolidate the structure, much as a reed is used in weaving.

Two-step braiding is unlike the four-step process because the two-step process includes a large number of yarns fixed in the axial direction and a lesser number of braiding yarns. The process consists of two steps in which the braiding carriers move completely through the structure between the axial carriers. This relatively simple sequence of motions is capable of forming preforms of essentially any shape, including circular and hollow shapes. Unlike the four-step process, the two-step process does not require mechanical compaction: the motions involved in the process allow the braid to be pulled tight by yarn tension alone.

The last type of braiding is multilayer interlock braiding, which consists of a number of standard circular braiders joined to form a cylindrical braiding frame. This frame has a number of parallel braiding tracks around the circumference of the cylinder, but the mechanism allows the transfer of yarn carriers between adjacent tracks, forming a multilayer braided fabric with yarns interlocking to adjacent layers. The multilayer interlock braid differs from both the four-step and two-step braids in that the interlocking yarns are primarily in the plane of the structure and thus do not significantly reduce the in-plane properties of the preform. The four-step and two-step processes produce a greater degree of interlinking as the braiding yarns travel through the thickness of the preform, but therefore contribute less to the in-plane performance of the preform. A disadvantage of the multilayer interlock equipment is that, due to the conventional sinusoidal movement of the yarn carriers to form the preform, the equipment is not able to have the density of yarn carriers that is possible with the two-step and four-step machines.
Knitting fibre preforms can be done with the traditional methods of Warp and [Weft] Knitting, and the fabric produced is often regarded by many as two-dimensional fabric, but machines with two or more needle beds are capable of producing multilayer fabrics with yarns that traverse between the layers. Developments in electronic controls for needle selection and knit loop transfer, and in the sophisticated mechanisms that allow specific areas of the fabric to be held and their movement controlled, have allowed the fabric to be formed into the required three-dimensional preform shape with a minimum of material wastage.
Stitching is arguably the simplest of the four main textile manufacturing techniques and one that can be performed with the smallest investment in specialized machinery. Basically stitching consists of inserting a needle, carrying the stitch thread, through a stack of fabric layers to form a 3D structure. The advantages of stitching are that it is possible to stitch both dry and prepreg fabric, although the tackiness of the prepreg makes the process difficult and generally creates more damage within the prepreg material than in the dry fabric. Stitching also utilizes the standard two-dimensional fabrics that are commonly in use within the composite industry, so there is a sense of familiarity with the material systems. The use of standard fabric also allows a greater degree of flexibility in the fabric lay-up of the component than is possible with the other textile processes, which have restrictions on the fibre orientations that can be produced.
=== Forming processes ===
A rigid structure is usually used to establish the shape of FRP components. Parts can be laid up on a flat surface referred to as a "caul plate" or on a cylindrical structure referred to as a "mandrel". However, most fibre-reinforced plastic parts are created with a mould or "tool". Moulds can be concave female moulds, male moulds, or the mould can completely enclose the part with a top and bottom mould.
The moulding processes of FRP plastics begins by placing the fibre preform on or in the mould. The fibre preform can be dry fibre, or fibre that already contains a measured amount of resin called "prepreg". Dry fibres are "wetted" with resin either by hand or the resin is injected into a closed mould. The part is then cured, leaving the matrix and fibres in the shape created by the mould. Heat and/or pressure are sometimes used to cure the resin and improve the quality of the final part.
The different methods of forming are listed below.
==== Bladder moulding ====
Individual sheets of prepreg material are laid up and placed in a female-style mould along with a balloon-like bladder. The mould is closed and placed in a heated press. Finally, the bladder is pressurized forcing the layers of material against the mould walls.
==== Compression moulding ====
When the raw material (plastic block, rubber block, plastic sheet, or granules) contains reinforcing fibres, a compression moulded part qualifies as a fibre-reinforced plastic. More typically the plastic preform used in compression moulding does not contain reinforcing fibres. In compression moulding, a "preform" or "charge" of SMC or BMC is placed into the mould cavity. The mould is closed and the material is formed and cured inside by pressure and heat. Compression moulding offers excellent detailing for geometric shapes ranging from pattern and relief detailing to complex curves and creative forms, to precision engineering, all within a maximum curing time of 20 minutes.
==== Autoclave and vacuum bag ====
Individual sheets of prepreg material are laid-up and placed in an open mould. The material is covered with release film, bleeder/breather material and a vacuum bag. A vacuum is pulled on part and the entire mould is placed into an autoclave (heated pressure vessel). The part is cured with a continuous vacuum to extract entrapped gasses from laminate. This is a very common process in the aerospace industry because it affords precise control over moulding due to a long, slow cure cycle that is anywhere from one to several hours. This precise control creates the exact laminate geometric forms needed to ensure strength and safety in the aerospace industry, but it is also slow and labour-intensive, meaning costs often confine it to the aerospace industry.
==== Mandrel wrapping ====
Sheets of prepreg material are wrapped around a steel or aluminium mandrel. The prepreg material is compacted by nylon or polypropylene cello tape. Parts are typically batch cured by vacuum bagging and hanging in an oven. After cure, the cello and mandrel are removed leaving a hollow carbon tube. This process creates strong and robust hollow carbon tubes.
==== Wet layup ====
Wet layup forming combines fibre reinforcement and the matrix as they are placed on the forming tool. Reinforcing fibre layers are placed in an open mould and then saturated with a wet resin by pouring it over the fabric and working it into the fabric. The mould is then left so that the resin will cure, usually at room temperature, though heat is sometimes used to ensure a proper cure. Sometimes a vacuum bag is used to compress a wet layup. Glass fibres are most commonly used for this process, the results are widely known as fibreglass, and is used to make common products like skis, canoes, kayaks and surf boards.
==== Chopper gun ====
Continuous strands of fibreglass are pushed through a hand-held gun that both chops the strands and combines them with a catalysed resin such as polyester. The impregnated chopped glass is shot onto the mould surface in whatever thickness and design the human operator thinks is appropriate. This process is good for large production runs at economical cost, but produces geometric shapes with less strength than other moulding processes and has poor dimensional tolerance.
==== Filament winding ====
Machines pull fibre bundles through a wet bath of resin and wind them over a rotating steel mandrel in specific orientations. Parts are cured either at room temperature or at elevated temperatures. The mandrel is then extracted, leaving the final geometric shape, though it can be left in place in some cases.
==== Pultrusion ====
Fibre bundles and slit fabrics are pulled through a wet bath of resin and formed into the rough part shape. The saturated material is extruded from a heated closed die, curing while being continuously pulled through the die. Some of the end products of pultrusion are structural shapes, i.e. I-beam, angle, channel and flat sheet. These materials can be used to create all sorts of fibreglass structures such as ladders, platforms, handrail systems, and tank, pipe and pump supports.
==== Resin transfer moulding ====
Also called resin infusion. Fabrics are placed into a mould into which wet resin is then injected. In resin transfer moulding, the resin is typically pressurized and forced into a cavity which is under vacuum; in vacuum-assisted resin transfer moulding, the resin is drawn into the cavity entirely by the vacuum. This moulding process allows precise tolerances and detailed shaping, but can sometimes fail to fully saturate the fabric, leading to weak spots in the final shape.
== Advantages and limitations ==
FRP allows the alignment of the glass fibres of thermoplastics to suit specific design programs. Specifying the orientation of reinforcing fibres can increase the strength and resistance to deformation of the polymer. Glass-reinforced polymers are strongest and most resistant to deforming forces when the fibres are parallel to the force being exerted, and are weakest when the fibres are perpendicular. This alignment significantly enhances the tensile strength of the material along the fibre direction, making it ideal for load-bearing applications where directional strength is critical. Thus, this ability is at once both an advantage and a limitation depending on the context of use. Weak spots of perpendicular fibres can be used for natural hinges and connections, but can also lead to material failure when production processes fail to properly orient the fibres parallel to expected forces. When forces are exerted perpendicular to the orientation of fibres, the strength and elasticity of the polymer are less than those of the matrix alone. In cast resin components made of glass-reinforced polymers such as UP and EP, the fibres can be oriented in two-dimensional and three-dimensional weaves. This means that when forces are possibly perpendicular to one orientation, they are parallel to another orientation; this eliminates the potential for weak spots in the polymer.
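The directional dependence described above can be quantified with the classical lamina transformation for the effective modulus at an angle θ to the fibres, 1/Ex = cos⁴θ/E1 + sin⁴θ/E2 + (1/G12 − 2ν12/E1)·sin²θ·cos²θ. The sketch below uses assumed glass/epoxy lamina properties purely to show how stiffness drops as the load rotates away from the fibre direction.

```python
import math

def off_axis_modulus(E1, E2, G12, nu12, theta_deg):
    """Effective Young's modulus of a unidirectional lamina loaded at an
    angle theta to the fibres (classical laminate transformation)."""
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    inv_E = (c**4 / E1 + s**4 / E2
             + (1.0 / G12 - 2.0 * nu12 / E1) * s**2 * c**2)
    return 1.0 / inv_E

# Glass/epoxy lamina with assumed properties E1 = 40, E2 = 8, G12 = 4 GPa, nu12 = 0.25:
for angle in (0, 15, 45, 90):
    print(angle, round(off_axis_modulus(40.0, 8.0, 4.0, 0.25, angle), 1))  # GPa
```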
=== Failure modes ===
Structural failure can occur in FRP materials when:
Tensile forces stretch the matrix more than the fibres, causing the material to shear at the interface between matrix and fibres.
Tensile forces near the end of the fibres exceed the tolerances of the matrix, separating the fibres from the matrix.
Tensile forces can also exceed the tolerances of the fibres causing the fibres themselves to fracture leading to material failure.
== Failure Mechanisms ==
FRP composites can exhibit both micro-structural and macroscopic damage:
Micro-structural Damage
Matrix micro-cracks, which are small cracks that form in the polymer matrix.
Fiber-matrix debonding, which is when the reinforcing fiber and the matrix separate, indicating failure at the interface.
Fiber breakage, which is the fracture of the fibers within the matrix.
Crack coupling which is the interaction of multiple cracks within the materials as a whole.
Macroscopic Damage
Transverse cracks that occur perpendicular to the direction of the reinforcing fibers.
Shear failure of the fiber bundles.
Cracks in the matrix.
Delamination between fiber bundles in a composite laminate.
Tensile failure of fiber bundles.
Final fracture.
Load Orientation
The type of load and its orientation with respect to the fibers affect what failure mechanisms are favored.
Low-angle impact: Favors abrasive wear that leads to surface wrinkling, increased roughness, and long cracks. Material shedding results after the resulting cracks link.
High-angle impact: Favors impact damage that leads to smaller cracks.
Compressive Loads: Channels created by fiber bundles act as critical damage points. Under compression, the fibers begin to buckle which can result in diagonal shear, or net compression depending on the orientation of the fibers.
Failure Initiation
The sequence of failure initiation in composite laminates is highly variable, depending on the specific material system and the applied loading conditions. Damage growth can be initiated by either matrix-dominated or fiber-dominated failure mechanisms. However, the interface between layers often serves as a weak link; delamination is a common failure mode because interlaminar strength is typically the lowest in a laminated material, suggesting the interface is frequently the first to fail.
Observed sequences of failure initiation provide further insight. Generally, the initial deterioration of composites under repeated loading often appears as a gradual loss of stiffness, which is attributed to failure or breakdown within the resin. In adhesive joints, fracture can begin as debonding at the adhesive/adherend interface, fiber peel-off from the outermost lamina, or intra- or inter-laminar debonding within the composite adherends. Notably, the relationship between resistance change and debonded area in these joints may not be unique when various micromechanisms are competing.
Under tensile loading in microvascular composites, fiber damage in the 0° layers can initiate first, followed by resin damage in the 90° layers. In carbon-fiber reinforced polymer laminated plates subjected to tensile loading, the typical sequence observed is matrix tensile damage and microcrack formation in 90° plies, which then leads to delamination between plies, and ultimately, tensile failure of fibers in 0° plies. Furthermore, ply cracking, where transverse stresses exceed the ply's capacity, can sometimes trigger subsequent delamination. A broken fiber can also initiate a transverse crack, which then contributes to reducing the overall transverse strength of the material.
== Material requirements ==
A thermoset polymer matrix material, or an engineering-grade thermoplastic polymer matrix material, must meet certain requirements to be suitable for FRPs and to ensure successful reinforcement. The matrix must be able to properly saturate, and preferably bond chemically with, the fibre reinforcement for maximum adhesion within a suitable curing period. The matrix must also completely envelop the fibres to protect them from cuts and notches that would reduce their strength, and to transfer forces to the fibres. The fibres must be kept separate from each other so that, if failure occurs, it is localized as much as possible, and the matrix must also be able to debond from the fibre for similar reasons. Finally, the matrix should be a plastic that remains chemically and physically stable during and after the reinforcement and moulding processes. To be suitable as reinforcement material, fibre additives must increase the tensile strength and modulus of elasticity of the matrix and meet the following conditions: the fibre content must exceed a critical value; the strength and rigidity of the fibres themselves must exceed those of the matrix alone; and there must be optimum bonding between fibres and matrix.
=== Glass fibre ===
"Fibreglass reinforced plastics" or FRPs (commonly referred to simply as fibreglass) use textile grade glass fibres. These textile fibres are different from other forms of glass fibres used to deliberately trap air, for insulating applications (see glass wool). Textile glass fibres begin as varying combinations of SiO2, Al2O3, B2O3, CaO, or MgO in powder form. These mixtures are then heated through direct melting to temperatures around 1300 degrees Celsius, after which dies are used to extrude filaments of glass fibre in diameter ranging from 9 to 17 μm. These filaments are then wound into larger threads and spun onto bobbins for transportation and further processing. Glass fibre is by far the most popular means to reinforce plastic and thus enjoys a wealth of production processes, some of which are applicable to aramid and carbon fibres as well owing to their shared fibrous qualities.
Roving is a process where filaments are spun into larger diameter threads. These threads are then commonly used for woven reinforcing glass fabrics and mats, and in spray applications.
Fibre fabrics (glass cloth, etc.) are web-form fabric reinforcing material that has both warp and weft directions. Fibre mats are web-form non-woven mats of glass fibres. Mats are manufactured in cut dimensions with chopped fibres, or in continuous mats using continuous fibres. Chopped fibre glass is used in processes where lengths of glass threads are cut between 3 and 26 mm, threads are then used in plastics most commonly intended for moulding processes. Glass fibre short strands are short 0.2–0.3 mm strands of glass fibres that are used to reinforce thermoplastics most commonly for injection moulding.
=== Carbon fibre ===
Carbon fibres are created when polyacrylonitrile (PAN) fibres, pitch resins, or rayon are carbonized (through oxidation and thermal pyrolysis) at high temperatures. Through further processes of graphitizing or stretching, the fibre's strength or elasticity can be enhanced, respectively. Carbon fibres are manufactured in diameters analogous to glass fibres, ranging from 4 to 17 μm. These fibres are wound into larger threads for transportation and further production processes. Further production processes include weaving or braiding into carbon fabrics, cloths and mats analogous to those described for glass, which can then be used in actual reinforcements.
=== Aramid fibre ===
Aramid fibres are most commonly known by the trade names Kevlar, Nomex and Technora. Aramids are generally prepared by the reaction between an amine group and a carboxylic acid halide group. Commonly, this occurs when an aromatic polyamide is spun from a liquid concentration of sulphuric acid into a crystallized fibre. Fibres are then spun into larger threads in order to weave into large ropes or woven fabrics. Aramid fibres are manufactured with varying grades of strength and rigidity, so that the material can be adapted to specific design requirements, such as the need to cut the tough material during manufacture.
== Example polymer and reinforcement combinations ==
== Applications ==
Fibre-reinforced plastics are best suited for any design program that demands weight savings, precision engineering, definite tolerances, and the simplification of parts in both production and operation. The fibres provide strength and stiffness to the material, while the polymer matrix holds the fibres together and transfers loads between them. FRP composites have a wide range of applications across various industries due to their unique combination of properties, including high strength-to-weight ratio, corrosion resistance, and design flexibility. A moulded polymer product is cheaper, faster, and easier to manufacture than a cast aluminium or steel product, and maintains similar and sometimes better tolerances and material strengths.
=== Carbon-fibre-reinforced polymers ===
Rudder of Airbus A310
Disadvantages: hazards relating to hailstone or bird impacts while aircraft are flying or on the ground
Advantages over a traditional rudder made from sheet aluminium are:
25% reduction in weight
95% reduction in components by combining parts and forms into simpler moulded parts.
Overall reduction in production and operational costs: the economy of parts results in lower production costs, and the weight savings create fuel savings that lower the operational costs of flying the aeroplane.
=== Glass-fibre-reinforced polymers ===
Engine intake manifolds are made from glass-fibre-reinforced PA 66.
Advantages this has over cast aluminium manifolds are:
Up to a 60% reduction in weight
Improved surface quality and aerodynamics
Reduction in components by combining parts and forms into simpler moulded shapes.
Automotive gas and clutch pedals made from glass-fibre-reinforced PA 66 (DWP 12–13)
Advantages over stamped aluminium are:
Pedals can be moulded as single units combining both pedals and mechanical linkages simplifying the production and operation of the design.
Fibres can be oriented to reinforce against specific stresses, increasing the durability and safety.
Aluminium windows, doors and façades are thermally insulated by using thermal insulation plastics made of glass-fibre-reinforced polyamide. In 1977, Ensinger GmbH produced the first insulation profile for window systems.
=== Structural applications ===
FRP can be applied to strengthen the beams, columns, and slabs of buildings and bridges. It is possible to increase the strength of structural members even after they have been severely damaged due to loading conditions. In the case of damaged reinforced concrete members, this would first require the repair of the member by removing loose debris and filling in cavities and cracks with mortar or epoxy resin. Once the member is repaired, strengthening can be achieved through wet, hand lay-up of fibre sheets impregnated with epoxy resin, applied to the cleaned and prepared surfaces of the member.
Two techniques are typically adopted for the strengthening of beams, depending on the strength enhancement desired: flexural strengthening or shear strengthening. In many cases it may be necessary to provide both strength enhancements. For the flexural strengthening of a beam, FRP sheets or plates are applied to the tension face of the member (the bottom face for a simply supported member with applied top loading or gravity loading). Principal tensile fibres are oriented parallel to the beam's longitudinal axis, similar to its internal flexural steel reinforcement. This increases the beam strength and its stiffness (load required to cause unit deflection), but decreases the deflection capacity and ductility.
For the shear strengthening of a beam, the FRP is applied on the web (sides) of a member with fibres oriented transverse to the beam's longitudinal axis. Resisting of shear forces is achieved in a similar manner as internal steel stirrups, by bridging shear cracks that form under applied loading. FRP can be applied in several configurations, depending on the exposed faces of the member and the degree of strengthening desired, this includes: side bonding, U-wraps (U-jackets), and closed wraps (complete wraps). Side bonding involves applying FRP to the sides of the beam only. It provides the least amount of shear strengthening due to failures caused by de-bonding from the concrete surface at the FRP free edges. For U-wraps, the FRP is applied continuously in a 'U' shape around the sides and bottom (tension) face of the beam. If all faces of a beam are accessible, the use of closed wraps is desirable as they provide the most strength enhancement. Closed wrapping involves applying FRP around the entire perimeter of the member, such that there are no free ends and the typical failure mode is rupture of the fibres. For all wrap configurations, the FRP can be applied along the length of the member as a continuous sheet or as discrete strips, having a predefined minimum width and spacing.
Slabs may be strengthened by applying FRP strips at their bottom (tension) face. This will result in better flexural performance, since the tensile resistance of the slabs is supplemented by the tensile strength of FRP. In the case of beams and slabs, the effectiveness of FRP strengthening depends on the performance of the resin chosen for bonding. This is particularly an issue for shear strengthening using side bonding or U-wraps. Columns are typically wrapped with FRP around their perimeter, as with closed or complete wrapping. This not only results in higher shear resistance, but more crucial for column design, it results in increased compressive strength under axial loading. The FRP wrap works by restraining the lateral expansion of the column, which can enhance confinement in a similar manner as spiral reinforcement does for the column core.
=== Elevator cable ===
In June 2013, KONE elevator company announced Ultrarope for use as a replacement for steel cables in elevators. It seals the carbon fibres in high-friction polymer. Unlike steel cable, Ultrarope was designed for buildings that require up to 1,000 m (3,300 ft) of lift. Steel elevators top out at 500 m (1,600 ft). The company estimated that in a 500 m (1,600 ft) high building, an elevator would use 15% less electrical power than a steel-cabled version. As of June 2013, the product had passed all European Union and United States certification tests.
== Design considerations ==
FRP is used in designs that require a measure of strength or modulus of elasticity for which non-reinforced plastics and other material choices are ill-suited, either mechanically or economically. The primary design consideration for using FRP is to ensure that the material is used economically and in a manner that takes advantage of its specific structural characteristics, but this is not always the case. The orientation of fibres creates a material weakness perpendicular to the fibres. Thus the use of fibre reinforcement and their orientation affects the strength, rigidity, elasticity and hence the functionality of the final product itself. Orienting the fibres either unidirectionally, 2-dimensionally, or 3-dimensionally during production affects the strength, flexibility, and elasticity of the final product. Fibres oriented in the direction of applied forces display greater resistance to distortion from these forces, thus areas of a product that must withstand forces will be reinforced with fibres oriented parallel to the forces, and areas that require flexibility, such as natural hinges, will have fibres oriented perpendicular to the forces.
Orienting the fibres in more dimensions avoids this either-or scenario and creates objects that seek to avoid any specific weakness due to the unidirectional orientation of fibres. The properties of strength, flexibility and elasticity can also be magnified or diminished through the geometric shape and design of the final product. For example, ensuring proper wall thickness and creating multifunctional geometric shapes that can be moulded as a single piece enhances the material and structural integrity of the product by reducing the requirements for joints, connections, and hardware.
=== Disposal and recycling concerns ===
As a subset of plastics, FR plastics are liable to a number of the issues and concerns in plastic waste disposal and recycling. Plastics pose a particular challenge in recycling because they are derived from polymers and monomers that often cannot be separated and returned to their virgin states. For this reason not all plastics can be recycled for re-use; in fact some estimates claim only 20–30% of plastics can be recycled at all. Fibre-reinforced plastics and their matrices share these disposal and environmental concerns. Investigation of safe disposal methods has led to two main variations involving the application of intense heat: in one, binding agents are burned off, recapturing some of the sunk material cost in the form of heat, and incombustible elements are captured by filtration; in the other, the incombustible material is burned in a cement kiln, the fibres becoming an integral part of the resulting cast material. In addition to concerns regarding safe disposal, the fact that the fibres themselves are difficult to remove from the matrix and preserve for re-use means FRPs amplify these challenges. FRPs are inherently difficult to separate into their base materials, that is, into fibre and matrix, and the matrix is difficult to separate into usable plastics, polymers, and monomers. These are all concerns for environmentally informed design today. Plastics do often offer energy and economic savings in comparison to other materials. In addition, with the advent of new, more environmentally friendly matrices such as bioplastics and UV-degradable plastics, FRP will gain environmental sensitivity.
== See also ==
Long-fibre-reinforced thermoplastic
Pre-preg
Composite material
== References == | Wikipedia/Fibre-reinforced_plastic |
Consensus theory is a social theory that holds a particular political or economic system as a fair system, and that social change should take place within the social institutions provided by it. Consensus theory contrasts sharply with conflict theory, which holds that social change is only achieved through conflict.
Under consensus theory the absence of conflict is seen as the equilibrium state of society and that there is a general or widespread agreement among all members of a particular society about norms, values, rules and regulations. Consensus theory is concerned with the maintenance or continuation of social order in society.
Consensus theory serves as a sociological argument for the furtherance and preservation of the status quo. It is antagonistic to conflict theory, which serves as a sociological argument for modifying the status quo or for its total reversal. In consensus theory, the rules are seen as integrative, and whoever does not respect them is a deviant. Under conflict theory, the rules are seen as coercive, and whoever transgresses them is considered an agent of change.
== See also ==
== References == | Wikipedia/Consensus_theory |
This article describes the mathematics of the Standard Model of particle physics, a gauge quantum field theory containing the internal symmetries of the unitary product group SU(3) × SU(2) × U(1). The theory is commonly viewed as describing the fundamental set of particles – the leptons, quarks, gauge bosons and the Higgs boson.
The Standard Model is renormalizable and mathematically self-consistent; however, despite having huge and continued successes in providing experimental predictions, it does leave some unexplained phenomena. In particular, although the physics of special relativity is incorporated, general relativity is not, and the Standard Model will fail at energies or distances where the graviton is expected to emerge. Therefore, in a modern field theory context, it is seen as an effective field theory.
== Quantum field theory ==
The standard model is a quantum field theory, meaning its fundamental objects are quantum fields, which are defined at all points in spacetime. QFT treats particles as excited states (also called quanta) of their underlying quantum fields, which are more fundamental than the particles. These fields are
the fermion fields, ψ, which account for "matter particles";
the electroweak boson fields W1, W2, W3, and B;
the gluon field, Ga; and
the Higgs field, φ.
That these are quantum rather than classical fields has the mathematical consequence that they are operator-valued. In particular, values of the fields generally do not commute. As operators, they act upon a quantum state (ket vector).
== Alternative presentations of the fields ==
As is common in quantum theory, there is more than one way to look at things. At first the basic fields given above may not seem to correspond well with the "fundamental particles" in the chart above, but there are several alternative presentations that, in particular contexts, may be more appropriate than those that are given above.
=== Fermions ===
Rather than having one fermion field ψ, it can be split up into separate components for each type of particle. This mirrors the historical evolution of quantum field theory, since the electron component ψe (describing the electron and its antiparticle the positron) is then the original ψ field of quantum electrodynamics, which was later accompanied by ψμ and ψτ fields for the muon and tauon respectively (and their antiparticles). Electroweak theory added
ψνe, ψνμ, and ψντ
for the corresponding neutrinos. The quarks add still further components. In order to be four-spinors like the electron and other lepton components, there must be one quark component for every combination of flavor and color, bringing the total to 24 (3 for charged leptons, 3 for neutrinos, and 2·3·3 = 18 for quarks). Each of these is a four component bispinor, for a total of 96 complex-valued components for the fermion field.
An important definition is the barred fermion field ψ̄ ≡ ψ†γ0, where † denotes the Hermitian adjoint of ψ, and γ0 is the zeroth gamma matrix. If ψ is thought of as an n × 1 matrix then ψ̄ should be thought of as a 1 × n matrix.
==== A chiral theory ====
An independent decomposition of ψ is that into chirality components: ψ = ψL + ψR, with ψL = ½(1 − γ5)ψ and ψR = ½(1 + γ5)ψ, where γ5 is the fifth gamma matrix. This is very important in the Standard Model because left and right chirality components are treated differently by the gauge interactions.
In particular, under weak isospin SU(2) transformations the left-handed particles are weak-isospin doublets, whereas the right-handed are singlets – i.e. the weak isospin of ψR is zero. Put more simply, the weak interaction could rotate e.g. a left-handed electron into a left-handed neutrino (with emission of a W−), but could not do so with the same right-handed particles. As an aside, the right-handed neutrino originally did not exist in the standard model – but the discovery of neutrino oscillation implies that neutrinos must have mass, and since chirality can change during the propagation of a massive particle, right-handed neutrinos must exist in reality. This does not however change the (experimentally proven) chiral nature of the weak interaction.
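The chirality decomposition above can be checked numerically. The sketch below assumes the Dirac representation of the gamma matrices (one common convention, not mandated by the text) and verifies that the left- and right-handed projectors split any spinor cleanly.

```python
# A minimal numerical check (assumed Dirac representation) that the chiral
# projectors P_L = (1 - gamma5)/2 and P_R = (1 + gamma5)/2 split a spinor
# into the left- and right-handed components discussed above.
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]]).astype(complex)
gammas = [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sigma]
gamma5 = 1j * gamma0 @ gammas[0] @ gammas[1] @ gammas[2]

P_L = (np.eye(4) - gamma5) / 2
P_R = (np.eye(4) + gamma5) / 2

assert np.allclose(P_L @ P_L, P_L)               # projectors are idempotent
assert np.allclose(P_L @ P_R, np.zeros((4, 4)))  # and mutually orthogonal
assert np.allclose(P_L + P_R, np.eye(4))         # and complete: psi = psi_L + psi_R
print("Chiral projectors behave as expected.")
```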
Furthermore, U(1) acts differently on ψeL and ψeR (because they have different weak hypercharges).
==== Mass and interaction eigenstates ====
A distinction can thus be made between, for example, the mass and interaction eigenstates of the neutrino. The former is the state that propagates in free space, whereas the latter is the different state that participates in interactions. Which is the "fundamental" particle? For the neutrino, it is conventional to define the "flavor" (νe, νμ, or ντ) by the interaction eigenstate, whereas for the quarks we define the flavor (up, down, etc.) by the mass state. We can switch between these states using the CKM matrix for the quarks, or the PMNS matrix for the neutrinos (the charged leptons on the other hand are eigenstates of both mass and flavor).
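As an illustration of the flavour/mass eigenstate distinction, the following sketch uses a simple two-flavour mixing matrix; the mixing angle and mass-squared splitting are assumed values for demonstration, not measured PMNS parameters.

```python
# Illustrative two-flavour sketch of the flavour/mass eigenstate distinction:
# a flavour state is a superposition of mass states, with the admixture set by
# a PMNS-like mixing matrix. Numbers below are assumed, not measured values.
import numpy as np

theta = 0.6                                        # assumed mixing angle (radians)
U = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])    # 2x2 mixing ("PMNS") matrix

# Flavour eigenstate expressed in the mass basis: |nu_alpha> = sum_i U_ai |nu_i>
nu_e_flavour = U[0]
print("flavour state in the mass basis:", nu_e_flavour)

# Standard two-flavour vacuum oscillation probability that follows from this mixing
def P_oscillation(L_km, E_GeV, dm2_eV2):
    return (np.sin(2 * theta) ** 2) * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

print("P(appearance) at L=500 km, E=1 GeV:", P_oscillation(500.0, 1.0, 2.5e-3))
```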
As an aside, if a complex phase term exists within either of these matrices, it will give rise to direct CP violation, which could explain the dominance of matter over antimatter in our current universe. This has been proven for the CKM matrix, and is expected for the PMNS matrix.
==== Positive and negative energies ====
Finally, the quantum fields are sometimes decomposed into "positive" and "negative" energy parts: ψ = ψ+ + ψ−. This is not so common when a quantum field theory has been set up, but often features prominently in the process of quantizing a field theory.
=== Bosons ===
Due to the Higgs mechanism, the electroweak boson fields W1, W2, W3, and B "mix" to create the states that are physically observable. To retain gauge invariance, the underlying fields must be massless, but the observable states can gain masses in the process. These states are:
The massive neutral (Z) boson: Z = cos θW W3 − sin θW B
The massless neutral boson: A = sin θW W3 + cos θW B
The massive charged W bosons: W± = (W1 ∓ iW2)/√2
where θW is the Weinberg angle.
The A field is the photon, which corresponds classically to the well-known electromagnetic four-potential – i.e. the electric and magnetic fields. The Z field actually contributes in every process the photon does, but due to its large mass, the contribution is usually negligible.
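The mixing just described is an orthogonal rotation of the (W3, B) fields by the Weinberg angle. The following sketch checks this numerically; the value of sin²θW is an approximate assumed input.

```python
# A small numerical sketch of the mixing quoted above: (Z, A) are obtained from
# (W3, B) by a rotation through the Weinberg angle. The angle is an assumed,
# approximate input, not a prediction of this article.
import numpy as np

theta_W = np.arcsin(np.sqrt(0.2312))                  # assumed sin^2(theta_W) ~ 0.23
R = np.array([[np.cos(theta_W), -np.sin(theta_W)],    # Z row
              [np.sin(theta_W),  np.cos(theta_W)]])   # A (photon) row

W3, B = 1.0, 0.0          # an arbitrary field configuration, for illustration
Z, A = R @ np.array([W3, B])
print(f"Z component: {Z:.3f}, photon component: {A:.3f}")

# The rotation is orthogonal, so it preserves the kinetic terms.
assert np.allclose(R @ R.T, np.eye(2))
```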
== Perturbative QFT and the interaction picture ==
Much of the qualitative descriptions of the standard model in terms of "particles" and "forces" comes from the perturbative quantum field theory view of the model. In this, the Lagrangian is decomposed as
ℒ = ℒ0 + ℒI
into separate free field and interaction Lagrangians. The free fields care for particles in isolation, whereas processes involving several particles arise through interactions. The idea is that the state vector should only change when particles interact, meaning a free particle is one whose quantum state is constant. This corresponds to the interaction picture in quantum mechanics.
In the more common Schrödinger picture, even the states of free particles change over time: typically the phase changes at a rate that depends on their energy. In the alternative Heisenberg picture, state vectors are kept constant, at the price of having the operators (in particular the observables) be time-dependent. The interaction picture constitutes an intermediate between the two, where some time dependence is placed in the operators (the quantum fields) and some in the state vector. In QFT, the former is called the free field part of the model, and the latter is called the interaction part. The free field model can be solved exactly, and then the solutions to the full model can be expressed as perturbations of the free field solutions, for example using the Dyson series.
It should be observed that the decomposition into free fields and interactions is in principle arbitrary. For example, renormalization in QED modifies the mass of the free field electron to match that of a physical electron (with an electromagnetic field), and will in doing so add a term to the free field Lagrangian which must be cancelled by a counterterm in the interaction Lagrangian, that then shows up as a two-line vertex in the Feynman diagrams. This is also how the Higgs field is thought to give particles mass: the part of the interaction term that corresponds to the nonzero vacuum expectation value of the Higgs field is moved from the interaction to the free field Lagrangian, where it looks just like a mass term having nothing to do with the Higgs field.
=== Free fields ===
Under the usual free/interaction decomposition, which is suitable for low energies, the free fields obey the following equations:
The fermion field ψ satisfies the Dirac equation: (iħγμ∂μ − mfc)ψf = 0 for each type f of fermion.
The photon field A satisfies the wave equation ∂μ∂μAν = 0.
The Higgs field φ satisfies the Klein–Gordon equation.
The weak interaction fields Z, W± satisfy the Proca equation.
These equations can be solved exactly. One usually does so by considering first solutions that are periodic with some period L along each spatial axis; later taking the limit: L → ∞ will lift this periodicity restriction.
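For the periodic case just described, the allowed momenta form a discrete lattice. The sketch below simply enumerates these modes for an assumed box size; it is an illustration of the mode counting, not part of the solution itself.

```python
# A sketch of the discretisation described above: with periodicity L along each
# axis, the allowed momenta are (2*pi*hbar/L)(n1, n2, n3) for integers n_i.
# The box size below is an arbitrary assumption for illustration.
import math
from itertools import product

hbar = 1.054571817e-34   # J*s
L = 1.0e-9               # assumed box side, 1 nm

def allowed_momenta(n_max):
    """Enumerate the lattice of momentum vectors with |n_i| <= n_max."""
    unit = 2 * math.pi * hbar / L
    return [(unit * n1, unit * n2, unit * n3)
            for n1, n2, n3 in product(range(-n_max, n_max + 1), repeat=3)]

modes = allowed_momenta(1)
print(f"{len(modes)} momentum modes with |n_i| <= 1")   # 27 modes
```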
In the periodic case, the solution for a field F (any of the above) can be expressed as a Fourier series of the form
{\displaystyle F(x)=\beta \sum _{\mathbf {p} }\sum _{r}E_{\mathbf {p} }^{-{\frac {1}{2}}}\left(a_{r}(\mathbf {p} )u_{r}(\mathbf {p} )e^{-{\frac {ipx}{\hbar }}}+b_{r}^{\dagger }(\mathbf {p} )v_{r}(\mathbf {p} )e^{\frac {ipx}{\hbar }}\right)}
where:
β is a normalization factor; for the fermion field ψf it is √(mf c²/V), where V = L³ is the volume of the fundamental cell considered; for the photon field Aμ it is ħc/√(2V).
The sum over p is over all momenta consistent with the period L, i.e., over all vectors (2πħ/L)(n1, n2, n3) where n1, n2, n3 are integers.
The sum over r covers other degrees of freedom specific for the field, such as polarization or spin; it usually comes out as a sum from 1 to 2 or from 1 to 3.
Ep is the relativistic energy for a momentum p quantum of the field, Ep = √(m²c⁴ + c²p²), when the rest mass is m.
ar(p) and br†(p) are annihilation and creation operators, respectively, for "a-particles" and "b-particles" of momentum p; "b-particles" are the antiparticles of "a-particles". Different fields have different "a-" and "b-particles". For some fields, a and b are the same.
ur(p) and vr(p) are non-operators that carry the vector or spinor aspects of the field (where relevant).
p = (Ep/c, p) is the four-momentum for a quantum with three-momentum p.
px = pμxμ denotes an inner product of four-vectors.
In the limit L → ∞, the sum would turn into an integral with help from the V hidden inside β. The numeric value of β also depends on the normalization chosen for ur(p) and vr(p).
Technically, ar†(p) is the Hermitian adjoint of the operator ar(p) in the inner product space of ket vectors. The identification of ar†(p) and ar(p) as creation and annihilation operators comes from comparing conserved quantities for a state before and after one of these has acted upon it.
ar†(p) can for example be seen to add one particle, because it will add 1 to the eigenvalue of the a-particle number operator, and the momentum of that particle ought to be p since the eigenvalue of the vector-valued momentum operator increases by that much. For these derivations, one starts out with expressions for the operators in terms of the quantum fields. That the operators with † are creation operators and the ones without are annihilation operators is a convention, imposed by the sign of the commutation relations postulated for them.
An important step in preparation for calculating in perturbative quantum field theory is to separate the "operator" factors a and b above from their corresponding vector or spinor factors u and v. The vertices of Feynman graphs come from the way that u and v from different factors in the interaction Lagrangian fit together, whereas the edges come from the way that the as and bs must be moved around in order to put terms in the Dyson series on normal form.
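The role of the creation and annihilation operators can be illustrated with a finite truncation of a single-mode Fock space. The sketch below is only an analogy with an assumed truncation size; the actual field operators act on an infinite-dimensional space.

```python
# A finite-dimensional sketch of the creation/annihilation algebra used above:
# truncated single-mode ladder operators. Truncation to N levels is an
# illustration only; the edge of the matrix introduces an artefact.
import numpy as np

N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
a_dag = a.conj().T                           # creation operator (Hermitian adjoint)

number_op = a_dag @ a                        # particle-number operator

# Acting with a_dag on the vacuum |0> adds one quantum: number eigenvalue 0 -> 1
vacuum = np.zeros(N); vacuum[0] = 1.0
one_particle = a_dag @ vacuum
print("number expectation after a_dag|0>:",
      one_particle @ number_op @ one_particle / (one_particle @ one_particle))

# The commutator [a, a_dag] = 1 holds away from the truncation edge
comm = a @ a_dag - a_dag @ a
print("commutator diagonal:", np.diag(comm))   # 1, 1, ..., then an edge artefact
```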
=== Interaction terms and the path integral approach ===
The Lagrangian can also be derived without using creation and annihilation operators (the "canonical" formalism) by using a path integral formulation, pioneered by Feynman building on the earlier work of Dirac. Feynman diagrams are pictorial representations of interaction terms. A quick derivation is indeed presented at the article on Feynman diagrams.
== Lagrangian formalism ==
We can now give some more detail about the aforementioned free and interaction terms appearing in the Standard Model Lagrangian density. Any such term must be both gauge and reference-frame invariant, otherwise the laws of physics would depend on an arbitrary choice or the frame of an observer. Therefore, the global Poincaré symmetry, consisting of translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity must apply. The local SU(3) × SU(2) × U(1) gauge symmetry is the internal symmetry. The three factors of the gauge symmetry together give rise to the three fundamental interactions, after some appropriate relations have been defined, as we shall see.
=== Kinetic terms ===
A free particle can be represented by a mass term, and a kinetic term that relates to the "motion" of the fields.
==== Fermion fields ====
The kinetic term for a Dirac fermion is iψ̄γμ∂μψ, where the notations are carried from earlier in the article. ψ can represent any, or all, Dirac fermions in the standard model. Generally, as below, this term is included within the couplings (creating an overall "dynamical" term).
==== Gauge fields ====
For the spin-1 fields, first define the field strength tensor
{\displaystyle F_{\mu \nu }^{a}=\partial _{\mu }A_{\nu }^{a}-\partial _{\nu }A_{\mu }^{a}+gf^{abc}A_{\mu }^{b}A_{\nu }^{c}}
for a given gauge field (here we use A), with gauge coupling constant g. The quantity fabc is the structure constant of the particular gauge group, defined by the commutator
[ta, tb] = ifabctc,
where ti are the generators of the group. In an abelian (commutative) group (such as the U(1) we use here) the structure constants vanish, since the generators ta all commute with each other. Of course, this is not the case in general – the standard model includes the non-Abelian SU(2) and SU(3) groups (such groups lead to what is called a Yang–Mills gauge theory).
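For SU(2) the commutator definition above can be checked explicitly: with generators ta = τa/2 built from the Pauli matrices, the structure constants come out as the Levi-Civita symbol. The normalization tr(ta tb) = δab/2 used to extract them below is the usual convention and is assumed here.

```python
# A numerical sketch of the commutator definition above, for SU(2): with
# generators t_a = tau_a / 2 (tau_a the Pauli matrices), the structure
# constants f^{abc} come out as the Levi-Civita symbol.
import numpy as np

tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
t = [m / 2 for m in tau]                     # SU(2) generators

def structure_constant(a, b, c):
    """Extract f^{abc} from [t_a, t_b] = i f^{abc} t_c using tr(t_c t_d) = delta_cd / 2."""
    commutator = t[a] @ t[b] - t[b] @ t[a]
    return np.real_if_close(np.trace(commutator @ t[c]) / 0.5j)

print(structure_constant(0, 1, 2))   # +1, i.e. epsilon_123
print(structure_constant(1, 0, 2))   # -1
print(structure_constant(0, 0, 2))   #  0
```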
We need to introduce three gauge fields corresponding to each of the subgroups SU(3) × SU(2) × U(1).
The gluon field tensor will be denoted by Gμνa, where the index a labels elements of the 8 representation of color SU(3). The strong coupling constant is conventionally labelled gs (or simply g where there is no ambiguity). The observations leading to the discovery of this part of the Standard Model are discussed in the article on quantum chromodynamics.
The notation Wμνa will be used for the gauge field tensor of SU(2), where a runs over the 3 generators of this group. The coupling can be denoted gw or again simply g. The gauge field will be denoted by Wμa.
The gauge field tensor for the U(1) of weak hypercharge will be denoted by Bμν, the coupling by g′, and the gauge field by Bμ.
The kinetic term can now be written as
{\displaystyle {\mathcal {L}}_{\rm {kin}}=-{1 \over 4}B_{\mu \nu }B^{\mu \nu }-{1 \over 2}\mathrm {tr} W_{\mu \nu }W^{\mu \nu }-{1 \over 2}\mathrm {tr} G_{\mu \nu }G^{\mu \nu }}
where the traces are over the SU(2) and SU(3) indices hidden in W and G respectively. The two-index objects are the field strengths derived from W and G the vector fields. There are also two extra hidden parameters: the theta angles for SU(2) and SU(3).
=== Coupling terms ===
The next step is to "couple" the gauge fields to the fermions, allowing for interactions.
==== Electroweak sector ====
The electroweak sector interacts with the symmetry group U(1) × SU(2)L, where the subscript L indicates coupling only to left-handed fermions.
{\displaystyle {\mathcal {L}}_{\mathrm {EW} }=\sum _{\psi }{\bar {\psi }}\gamma ^{\mu }\left(i\partial _{\mu }-g^{\prime }{1 \over 2}Y_{\mathrm {W} }B_{\mu }-g{1 \over 2}{\boldsymbol {\tau }}\mathbf {W} _{\mu }\right)\psi }
where Bμ is the U(1) gauge field; YW is the weak hypercharge (the generator of the U(1) group); Wμ is the three-component SU(2) gauge field; and the components of τ are the Pauli matrices (infinitesimal generators of the SU(2) group) whose eigenvalues give the weak isospin. Note that we have to redefine a new U(1) symmetry of weak hypercharge, different from QED, in order to achieve the unification with the weak force. The electric charge Q, third component of weak isospin T3 (also called Tz, I3 or Iz) and weak hypercharge YW are related by
Q = T3 + ½ YW,
(or by the alternative convention Q = T3 + YW). The first convention, used in this article, is equivalent to the earlier Gell-Mann–Nishijima formula. It makes the hypercharge be twice the average charge of a given isomultiplet.
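The convention Q = T3 + ½YW can be sanity-checked against a few familiar states; the quantum-number assignments below are the standard textbook values.

```python
# A quick check of Q = T3 + Y_W / 2 (the convention used in this article) for a
# few familiar states; the quantum-number assignments are standard values.
particles = {
    # name: (T3, Y_W)
    "left-handed neutrino":   (+0.5, -1),
    "left-handed electron":   (-0.5, -1),
    "left-handed up quark":   (+0.5, +1/3),
    "left-handed down quark": (-0.5, +1/3),
    "right-handed electron":  (0.0, -2),
}

for name, (T3, Y_W) in particles.items():
    Q = T3 + Y_W / 2
    print(f"{name:25s} Q = {Q:+.2f}")
```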
One may then define the conserved current for weak isospin as jμ = ½ ψ̄L γμ τ ψL
and for weak hypercharge as jμY = 2(jμem − jμ3),
where jμem is the electric current and jμ3 the third weak isospin current. As explained above, these currents mix to create the physically observed bosons, which also leads to testable relations between the coupling constants.
To explain this in a simpler way, we can see the effect of the electroweak interaction by picking out terms from the Lagrangian. We see that the SU(2) symmetry acts on each (left-handed) fermion doublet contained in ψ, for example
{\displaystyle -{g \over 2}({\bar {\nu }}_{e}\;{\bar {e}})\tau ^{+}\gamma _{\mu }(W^{+})^{\mu }{\begin{pmatrix}{\nu _{e}}\\e\end{pmatrix}}=-{g \over 2}{\bar {\nu }}_{e}\gamma _{\mu }(W^{+})^{\mu }e}
where the particles are understood to be left-handed, and where
{\displaystyle \tau ^{+}\equiv {1 \over 2}(\tau ^{1}{+}i\tau ^{2})={\begin{pmatrix}0&1\\0&0\end{pmatrix}}}
This is an interaction corresponding to a "rotation in weak isospin space" or in other words, a transformation between eL and νeL via emission of a W− boson. The U(1) symmetry, on the other hand, is similar to electromagnetism, but acts on all "weak hypercharged" fermions (both left- and right-handed) via the neutral Z0, as well as the charged fermions via the photon.
==== Quantum chromodynamics sector ====
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, with SU(3) symmetry, generated by Ta. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by
{\displaystyle {\mathcal {L}}_{\mathrm {QCD} }=i{\overline {U}}\left(\partial _{\mu }-ig_{s}G_{\mu }^{a}T^{a}\right)\gamma ^{\mu }U+i{\overline {D}}\left(\partial _{\mu }-ig_{s}G_{\mu }^{a}T^{a}\right)\gamma ^{\mu }D.}
where U and D are the Dirac spinors associated with up and down-type quarks, and other notations are continued from the previous section.
=== Mass terms and the Higgs mechanism ===
==== Mass terms ====
The mass term arising from the Dirac Lagrangian (for any fermion ψ) is −mψ̄ψ, which is not invariant under the electroweak symmetry. This can be seen by writing ψ in terms of left and right-handed components (skipping the actual calculation):
−mψ̄ψ = −m(ψ̄LψR + ψ̄RψL)
i.e. contributions from ψ̄LψL and ψ̄RψR
terms do not appear. We see that the mass-generating interaction is achieved by constant flipping of particle chirality. The spin-half particles have no right/left chirality pair with the same SU(2) representations and equal and opposite weak hypercharges, so assuming these gauge charges are conserved in the vacuum, none of the spin-half particles could ever swap chirality, and must remain massless. Additionally, we know experimentally that the W and Z bosons are massive, but a boson mass term contains the combination e.g. AμAμ, which clearly depends on the choice of gauge. Therefore, none of the standard model fermions or bosons can "begin" with mass, but must acquire it by some other mechanism.
==== Higgs mechanism ====
The solution to both these problems comes from the Higgs mechanism, which involves scalar fields (the number of which depend on the exact form of Higgs mechanism) which (to give the briefest possible description) are "absorbed" by the massive bosons as degrees of freedom, and which couple to the fermions via Yukawa coupling to create what looks like mass terms.
In the Standard Model, the Higgs field is a complex scalar field of the group SU(2)L:
{\displaystyle \phi ={\frac {1}{\sqrt {2}}}{\begin{pmatrix}\phi ^{+}\\\phi ^{0}\end{pmatrix}},}
where the superscripts + and 0 indicate the electric charge (Q) of the components. The weak hypercharge (YW) of both components is 1.
The Higgs part of the Lagrangian is
{\displaystyle {\mathcal {L}}_{\rm {H}}=\left[\left(\partial _{\mu }-igW_{\mu }^{a}t^{a}-ig'Y_{\phi }B_{\mu }\right)\phi \right]^{2}+\mu ^{2}\phi ^{\dagger }\phi -\lambda (\phi ^{\dagger }\phi )^{2},}
where λ > 0 and μ2 > 0, so that the mechanism of spontaneous symmetry breaking can be used. There is a parameter here, at first hidden within the shape of the potential, that is very important. In a unitarity gauge one can set
ϕ+ = 0 and make ϕ0 real. Then ⟨ϕ0⟩ = v is the non-vanishing vacuum expectation value of the Higgs field. v has units of mass, and it is the only parameter in the Standard Model that is not dimensionless. It is also much smaller than the Planck scale and about twice the Higgs mass, setting the scale for the mass of all other particles in the Standard Model. This is the only real fine-tuning to a small nonzero value in the Standard Model. Quadratic terms in Wμ and Bμ arise, which give masses to the W and Z bosons:
MW = ½ vg
MZ = ½ v√(g² + g′²)
The mass of the Higgs boson itself is given by MH = √(2μ²) ≡ √(2λv²).
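The tree-level mass relations above can be evaluated numerically. The coupling constants, self-coupling and vacuum expectation value used below are approximate assumed inputs, and loop corrections are ignored, so the outputs are only rough checks.

```python
# A rough numerical sketch of the tree-level relations quoted above,
# M_W = v g / 2, M_Z = v sqrt(g^2 + g'^2) / 2 and M_H = sqrt(2 lambda) v.
# The coupling, self-coupling and v values are approximate assumed inputs.
import math

v = 246.22        # GeV, Higgs vacuum expectation value (approximate)
g = 0.652         # SU(2) coupling (approximate)
g_prime = 0.357   # U(1)_Y coupling (approximate)
lam = 0.129       # Higgs self-coupling (approximate)

M_W = 0.5 * v * g
M_Z = 0.5 * v * math.sqrt(g**2 + g_prime**2)
M_H = math.sqrt(2 * lam) * v

print(f"M_W ~ {M_W:.1f} GeV")   # roughly 80 GeV
print(f"M_Z ~ {M_Z:.1f} GeV")   # roughly 91 GeV
print(f"M_H ~ {M_H:.1f} GeV")   # roughly 125 GeV
```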
==== Yukawa interaction ====
The Yukawa interaction terms are
{\displaystyle {\mathcal {L}}_{\text{Yukawa}}=(Y_{\text{u}})_{mn}({\bar {q}}_{\text{L}})_{m}{\tilde {\varphi }}(u_{\text{R}})_{n}+(Y_{\text{d}})_{mn}({\bar {q}}_{\text{L}})_{m}\varphi (d_{\text{R}})_{n}+(Y_{\text{e}})_{mn}({\bar {L}}_{\text{L}})_{m}\varphi (e_{\text{R}})_{n}+\mathrm {h.c.} }
where Yu, Yd, and Ye are 3 × 3 matrices of Yukawa couplings, with the mn term giving the coupling of the generations m and n, and h.c. means Hermitian conjugate of the preceding terms. The fields qL and LL are the left-handed quark and lepton doublets. Likewise, uR, dR and eR are the right-handed up-type quark, down-type quark, and lepton singlets. Finally, φ is the Higgs doublet and φ̃ = iτ2φ*.
==== Neutrino masses ====
As previously mentioned, evidence shows neutrinos must have mass. But within the standard model, the right-handed neutrino does not exist, so even with a Yukawa coupling neutrinos remain massless. An obvious solution is to simply add a right-handed neutrino νR, which requires the addition of a new Dirac mass term in the Yukawa sector:
{\displaystyle {\mathcal {L}}_{\nu }^{\text{Dir}}=(Y_{\nu })_{mn}({\bar {L}}_{L})_{m}{\tilde {\varphi }}(\nu _{R})_{n}+\mathrm {h.c.} }
This field however must be a sterile neutrino, since being right-handed it experimentally belongs to an isospin singlet (T3 = 0) and also has charge Q = 0, implying YW = 0 (see above) i.e. it does not even participate in the weak interaction. The experimental evidence for sterile neutrinos is currently inconclusive.
Another possibility to consider is that the neutrino satisfies the Majorana equation, which at first seems possible due to its zero electric charge. In this case a new Majorana mass term is added to the Yukawa sector:
{\displaystyle {\mathcal {L}}_{\nu }^{\text{Maj}}=-{\frac {1}{2}}m\left({\overline {\nu }}^{C}\nu +{\overline {\nu }}\nu ^{C}\right)}
where C denotes a charge conjugated (i.e. anti-) particle, and the ν terms are consistently all left (or all right) chirality (note that a left-chirality projection of an antiparticle is a right-handed field; care must be taken here due to different notations sometimes used). Here we are essentially flipping between left-handed neutrinos and right-handed anti-neutrinos (it is furthermore possible but not necessary that neutrinos are their own antiparticle, so these particles are the same). However, for left-chirality neutrinos, this term changes weak hypercharge by 2 units – not possible with the standard Higgs interaction, requiring the Higgs field to be extended to include an extra triplet with weak hypercharge = 2 – whereas for right-chirality neutrinos, no Higgs extensions are necessary. For both left and right chirality cases, Majorana terms violate lepton number, but possibly at a level beyond the current sensitivity of experiments to detect such violations.
It is possible to include both Dirac and Majorana mass terms in the same theory, which (in contrast to the Dirac-mass-only approach) can provide a “natural” explanation for the smallness of the observed neutrino masses, by linking the right-handed neutrinos to yet-unknown physics around the GUT scale (see seesaw mechanism).
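The seesaw idea mentioned above can be illustrated by diagonalizing a minimal 2 × 2 mass matrix that combines a Dirac mass with a large Majorana mass; both mass values below are assumed for illustration only.

```python
# An illustrative sketch of the seesaw mechanism referred to above: combining a
# Dirac mass m_D with a large Majorana mass M for the right-handed neutrino
# yields one very light and one very heavy eigenstate. Values are assumed.
import numpy as np

m_D = 100.0        # GeV, assumed Dirac mass (electroweak scale)
M = 1.0e14         # GeV, assumed heavy Majorana mass (near a GUT scale)

mass_matrix = np.array([[0.0, m_D],
                        [m_D, M]])
eigenvalues = np.linalg.eigvalsh(mass_matrix)

light, heavy = sorted(abs(eigenvalues))
print(f"light eigenvalue ~ {light:.3e} GeV  (compare m_D^2 / M = {m_D**2 / M:.3e})")
print(f"heavy eigenvalue ~ {heavy:.3e} GeV")
```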
Since in any case new fields must be postulated to explain the experimental results, neutrinos are an obvious gateway to searching physics beyond the Standard Model.
== Detailed information ==
This section provides more detail on some aspects, and some reference material. Explicit Lagrangian terms are also provided here.
=== Field content in detail ===
The Standard Model has the following fields. These describe one generation of leptons and quarks, and there are three generations, so there are three copies of each fermionic field. By CPT symmetry, there is a set of fermions and antifermions with opposite parity and charges. If a left-handed fermion spans some representation its antiparticle (right-handed antifermion) spans the dual representation (note that the conjugate representation 2̄ is equivalent to 2 for SU(2), because it is pseudo-real). The column "representation" indicates under which representations of the gauge groups that each field transforms, in the order (SU(3), SU(2), U(1)) and for the U(1) group, the value of the weak hypercharge is listed. There are twice as many left-handed lepton field components as right-handed lepton field components in each generation, but an equal number of left-handed quark and right-handed quark field components.
=== Fermion content ===
This table is based in part on data gathered by the Particle Data Group.
=== Free parameters ===
Upon writing the most general Lagrangian with massless neutrinos, one finds that the dynamics depend on 19 parameters, whose numerical values are established by experiment. Straightforward extensions of the Standard Model with massive neutrinos need 7 more parameters (3 masses and 4 PMNS matrix parameters) for a total of 26 parameters. The neutrino parameter values are still uncertain. The 19 certain parameters are summarized here.
The choice of free parameters is somewhat arbitrary. In the table above, gauge couplings are listed as free parameters, therefore with this choice the Weinberg angle is not a free parameter – it is defined as tan θW = g1/g2. Likewise, the fine-structure constant of QED is α = (1/4π)·(g1g2)²/(g1² + g2²). Instead of fermion masses, dimensionless Yukawa couplings can be chosen as free parameters. For example, the electron mass depends on the Yukawa coupling of the electron to the Higgs field, and its value is me = ye v/√2. Instead of the Higgs mass, the Higgs self-coupling strength λ = mH²/(2v²), which is approximately 0.129, can be chosen as a free parameter. Instead of the Higgs vacuum expectation value, the μ² parameter directly from the Higgs self-interaction term μ²ϕ†ϕ − λ(ϕ†ϕ)² can be chosen. Its value is μ² = λv² = mH²/2, or approximately μ = 88.45 GeV.
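These relations between free and derived parameters can be evaluated directly; the numerical inputs below are approximate assumed values, so the outputs should only be read as order-of-magnitude checks.

```python
# A sketch of the derived quantities discussed above, computed from one possible
# choice of free parameters. All numerical inputs are approximate assumptions.
import math

g1 = 0.357        # U(1) coupling (approximate)
g2 = 0.652        # SU(2) coupling (approximate)
v = 246.22        # GeV, Higgs vacuum expectation value (approximate)
m_H = 125.1       # GeV, Higgs boson mass (approximate)

theta_W = math.atan(g1 / g2)
alpha = (g1 * g2) ** 2 / (4 * math.pi * (g1**2 + g2**2))
lam = m_H**2 / (2 * v**2)
mu = math.sqrt(lam) * v            # mu^2 = lambda v^2 = m_H^2 / 2

print(f"sin^2(theta_W) ~ {math.sin(theta_W)**2:.3f}")
print(f"fine-structure constant ~ 1/{1/alpha:.0f}")   # running value at this scale
print(f"Higgs self-coupling lambda ~ {lam:.3f}")
print(f"mu ~ {mu:.1f} GeV")
```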
The value of the vacuum energy (or more precisely, the renormalization scale used to calculate this energy) may also be treated as an additional free parameter. The renormalization scale may be identified with the Planck scale or fine-tuned to match the observed cosmological constant. However, both options are problematic.
=== Additional symmetries of the Standard Model ===
From the theoretical point of view, the Standard Model exhibits four additional global symmetries, not postulated at the outset of its construction, collectively denoted accidental symmetries, which are continuous U(1) global symmetries. The transformations leaving the Lagrangian invariant are:
ψq → e^(iα/3) ψq
EL → e^(iβ) EL and (eR)c → e^(iβ) (eR)c
ML → e^(iβ) ML and (μR)c → e^(iβ) (μR)c
TL → e^(iβ) TL and (τR)c → e^(iβ) (τR)c
The first transformation rule is shorthand meaning that all quark fields for all generations must be rotated by an identical phase simultaneously. The fields ML, TL and (μR)c, (τR)c are the 2nd (muon) and 3rd (tau) generation analogs of the EL and (eR)c fields.
By Noether's theorem, each symmetry above has an associated conservation law: the conservation of baryon number, electron number, muon number, and tau number. Each quark is assigned a baryon number of 1/3, while each antiquark is assigned a baryon number of −1/3. Conservation of baryon number implies that the number of quarks minus the number of antiquarks is a constant. Within experimental limits, no violation of this conservation law has been found.
Similarly, each electron and its associated neutrino is assigned an electron number of +1, while the anti-electron and the associated anti-neutrino carry a −1 electron number. Similarly, the muons and their neutrinos are assigned a muon number of +1 and the tau leptons are assigned a tau lepton number of +1. The Standard Model predicts that each of these three numbers should be conserved separately in a manner similar to the way baryon number is conserved. These numbers are collectively known as lepton family numbers (LF). (This result depends on the assumption made in Standard Model that neutrinos are massless. Experimentally, neutrino oscillations imply that individual electron, muon and tau numbers are not conserved.)
In addition to the accidental (but exact) symmetries described above, the Standard Model exhibits several approximate symmetries. These are the "SU(2) custodial symmetry" and the "SU(2) or SU(3) quark flavor symmetry".
=== U(1) symmetry ===
For the leptons, the gauge group can be written SU(2)l × U(1)L × U(1)R. The two U(1) factors can be combined into U(1)Y × U(1)l, where l is the lepton number. Gauging of the lepton number is ruled out by experiment, leaving only the possible gauge group SU(2)L × U(1)Y. A similar argument in the quark sector also gives the same result for the electroweak theory.
=== Charged and neutral current couplings and Fermi theory ===
The charged currents j∓ = j1 ± ij2 are
{\displaystyle j_{\mu }^{-}={\overline {U}}_{i\mathrm {L} }\gamma _{\mu }D_{i\mathrm {L} }+{\overline {\nu }}_{i\mathrm {L} }\gamma _{\mu }l_{i\mathrm {L} }.}
These charged currents are precisely those that entered the Fermi theory of beta decay. The action contains the charge current piece
{\displaystyle {\mathcal {L}}_{\rm {CC}}={\frac {g}{\sqrt {2}}}(j_{\mu }^{+}W^{-\mu }+j_{\mu }^{-}W^{+\mu }).}
For energy much less than the mass of the W-boson, the effective theory becomes the current–current contact interaction of the Fermi theory,
2√2 GF Jμ+ Jμ−.
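Matching the full theory onto the Fermi contact interaction relates GF to the SU(2) coupling and the W mass through GF/√2 = g²/(8MW²), a standard tree-level result. The sketch below evaluates it with approximate assumed inputs.

```python
# A numerical sketch of the low-energy matching implied above: integrating out
# the W boson relates the Fermi constant to g and M_W via G_F/sqrt(2) = g^2/(8 M_W^2).
# Input values are approximate assumptions.
import math

g = 0.652        # SU(2) coupling (approximate)
M_W = 80.4       # GeV (approximate)

G_F = math.sqrt(2) * g**2 / (8 * M_W**2)
print(f"G_F ~ {G_F:.3e} GeV^-2")   # close to the measured ~1.17e-5 GeV^-2
```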
However, gauge invariance now requires that the component W3 of the gauge field also be coupled to a current that lies in the triplet of SU(2). This, however, mixes with the U(1), and another current in that sector is needed. These currents must be uncharged in order to conserve charge. So neutral currents are also required,
{\displaystyle j_{\mu }^{3}={\frac {1}{2}}\left({\overline {U}}_{i\mathrm {L} }\gamma _{\mu }U_{i\mathrm {L} }-{\overline {D}}_{i\mathrm {L} }\gamma _{\mu }D_{i\mathrm {L} }+{\overline {\nu }}_{i\mathrm {L} }\gamma _{\mu }\nu _{i\mathrm {L} }-{\overline {l}}_{i\mathrm {L} }\gamma _{\mu }l_{i\mathrm {L} }\right)}
{\displaystyle j_{\mu }^{\rm {em}}={\frac {2}{3}}{\overline {U}}_{i}\gamma _{\mu }U_{i}-{\frac {1}{3}}{\overline {D}}_{i}\gamma _{\mu }D_{i}-{\overline {l}}_{i}\gamma _{\mu }l_{i}.}
The neutral current piece in the Lagrangian is then
{\displaystyle {\mathcal {L}}_{\rm {NC}}=ej_{\mu }^{\rm {em}}A^{\mu }+{\frac {g}{\cos \theta _{\rm {W}}}}(J_{\mu }^{3}-\sin ^{2}\theta _{\rm {W}}J_{\mu }^{\rm {em}})Z^{\mu }.}
== Physics beyond the Standard Model ==
== See also ==
Overview of Standard Model of particle physics
Fundamental interaction
Noncommutative standard model
Open questions: CP violation, Neutrino masses, Quark matter
Physics beyond the Standard Model
Strong interactions
Flavor
Quantum chromodynamics
Quark model
Weak interactions
Electroweak interaction
Fermi's interaction
Weinberg angle
Symmetry in quantum mechanics
Quantum Field Theory in a Nutshell by A. Zee
== References and external links == | Wikipedia/Standard_Model_(mathematical_formulation) |
The General Theory of Everything (Polish: Ogólna Teoria Wszystkiego) is a sarcastic coinage of Stanisław Lem introduced in 1966. The biographical sketch of Ijon Tichy in "The Twenty-eighth Voyage" of Tichy's Star Diaries says that a grandfather of Ijon, Jeremiasz Tichy, "decided to create the General Theory of Everything, and nothing stopped him from doing this".
Apart from being a precursor of the term "Theory of Everything", the term GTE was used to characterize Lem's essays of fundamental character, such as The Philosophy of Chance and Science Fiction and Futurology, as well as the pseudoscientific work of the Polish science fiction writer Adam Wiśniewski-Snerg, Jednolita teoria czasoprzestrzeni ["The Uniform Theory of the Spacetime"] (1990).
== References == | Wikipedia/General_Theory_of_Everything |
In theoretical physics, the nonsymmetric gravitational theory (NGT) of John Moffat is a classical theory of gravitation that tries to explain the observation of the flat rotation curves of galaxies.
In general relativity, the gravitational field is characterized by a symmetric rank-2 tensor, the metric tensor. The possibility of generalizing the metric tensor has been considered by many, including Albert Einstein and others. A general (nonsymmetric) tensor can always be decomposed into a symmetric and an antisymmetric part. As the electromagnetic field is characterized by an antisymmetric rank-2 tensor, there is an obvious possibility for a unified theory: a nonsymmetric tensor composed of a symmetric part representing gravity, and an antisymmetric part that represents electromagnetism. Research in this direction ultimately proved fruitless; the desired classical unified field theory was not found.
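For reference, the decomposition mentioned above is the standard identity splitting any rank-2 tensor into its symmetric and antisymmetric parts:
{\displaystyle g_{\mu \nu }=g_{(\mu \nu )}+g_{[\mu \nu ]},\qquad g_{(\mu \nu )}={\tfrac {1}{2}}(g_{\mu \nu }+g_{\nu \mu }),\qquad g_{[\mu \nu ]}={\tfrac {1}{2}}(g_{\mu \nu }-g_{\nu \mu }).}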
In 1979, Moffat made the observation that the antisymmetric part of the generalized metric tensor need not necessarily represent electromagnetism; it may represent a new, hypothetical force. Later, in 1995, Moffat noted that the field corresponding with the antisymmetric part need not be massless, like the electromagnetic (or gravitational) fields.
In its original form, the theory may be unstable, although this has only been shown in the case of the linearized version.
In the weak field approximation, where interaction between fields is not taken into account, NGT is characterized by a symmetric rank-2 tensor field (gravity), an antisymmetric tensor field, and a constant characterizing the mass of the antisymmetric tensor field. The antisymmetric tensor field is found to satisfy the equations of a Maxwell–Proca massive antisymmetric tensor field. This led Moffat to propose metric-skew-tensor gravity (MSTG), in which a skew-symmetric tensor field is postulated as part of the gravitational action.
A newer version of MSTG, in which the skew symmetric tensor field was replaced by a vector field, is scalar–tensor–vector gravity (STVG). STVG, like Milgrom's Modified Newtonian Dynamics (MOND), can provide an explanation for flat rotation curves of galaxies.
In 2013, Hammond showed that the nonsymmetric part of the metric tensor is equal to the torsion potential, a result that follows from the metricity condition, i.e. that the length of a vector is invariant under parallel transport. In addition, the energy–momentum tensor is not symmetric, and both the symmetric and nonsymmetric parts are those of a string.
== See also ==
Reinventing Gravity
== References == | Wikipedia/Nonsymmetric_gravitational_theory |
Le Sage's theory of gravitation is a kinetic theory of gravity originally proposed by Nicolas Fatio de Duillier in 1690 and later by Georges-Louis Le Sage in 1748. The theory proposed a mechanical explanation for Newton's gravitational force in terms of streams of tiny unseen particles (which Le Sage called ultra-mundane corpuscles) impacting all material objects from all directions. According to this model, any two material bodies partially shield each other from the impinging corpuscles, resulting in a net imbalance in the pressure exerted by the impact of corpuscles on the bodies, tending to drive the bodies together. This mechanical explanation for gravity never gained widespread acceptance.
== Basic theory ==
The theory posits that the force of gravity is the result of tiny particles (corpuscles) moving at high speed in all directions, throughout the universe. The intensity of the flux of particles is assumed to be the same in all directions, so an isolated object A is struck equally from all sides, resulting in only an inward-directed pressure but no net directional force (P1).
With a second object B present, however, a fraction of the particles that would otherwise have struck A from the direction of B is intercepted, so B works as a shield, i.e. from the direction of B, A will be struck by fewer particles than from the opposite direction. Likewise B will be struck by fewer particles from the direction of A than from the opposite direction. One can say that A and B are "shadowing" each other, and the two bodies are pushed toward each other by the resulting imbalance of forces (P2). Thus the apparent attraction between bodies is, according to this theory, actually a diminished push from the direction of other bodies, so the theory is sometimes called push gravity or shadow gravity, although it is more widely referred to as Lesage gravity.
=== Nature of collisions ===
If the collisions of body A and the gravific particles were fully elastic, the intensity of the reflected particles would be as strong as that of the incoming ones, so no net directional force would arise. The same is true if a second body B is introduced, where B acts as a shield against gravific particles in the direction of A. The gravific particle C which ordinarily would strike A is blocked by B, but another particle D, which ordinarily would not have struck A, is re-directed by the reflection on B, and therefore replaces C. Thus if the collisions are fully elastic, the reflected particles between A and B would fully compensate any shadowing effect. In order to account for a net gravitational force, it must be assumed that the collisions are not fully elastic, or at least that the reflected particles are slowed, so that their momentum is reduced after the impact. This would result in streams with diminished momentum departing from A, and streams with undiminished momentum arriving at A, so a net directional momentum toward the center of A would arise (P3). Under this assumption, the reflected particles in the two-body case will not fully compensate the shadowing effect, because the reflected flux is weaker than the incident flux.
=== Inverse square law ===
Since it is assumed that some or all of the gravific particles converging on an object are either absorbed or slowed by the object, it follows that the intensity of the flux of gravific particles emanating from the direction of a massive object is less than the flux converging on the object. We can imagine this imbalance of momentum flow – and therefore of the force exerted on any other body in the vicinity – distributed over a spherical surface centered on the object (P4). The imbalance of momentum flow over an entire spherical surface enclosing the object is independent of the size of the enclosing sphere, whereas the surface area of the sphere increases in proportion to the square of the radius. Therefore, the momentum imbalance per unit area decreases inversely as the square of the distance.
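The geometric argument can be illustrated with a minimal numerical sketch (the function name, radius, and distances below are arbitrary illustrative choices, not parameters of the theory): the fraction of the isotropic flux intercepted by a small opaque sphere of radius R, seen from a distance r much larger than R, is its solid angle divided by 4π and therefore falls off as 1/r².

```python
import math

def blocked_fraction(R, r):
    """Fraction of an isotropic particle flux intercepted by an opaque
    sphere of radius R, as seen from a test point at distance r >> R:
    the sphere subtends a solid angle of about pi*R**2/r**2 out of 4*pi."""
    return (math.pi * R**2 / r**2) / (4.0 * math.pi)

R = 1.0  # radius of the shielding body (arbitrary units)
for r in [10.0, 20.0, 40.0, 80.0]:
    f = blocked_fraction(R, r)
    # f * r**2 stays constant, i.e. the shadow force scales as 1/r**2
    print(f"r = {r:5.1f}   blocked fraction = {f:.3e}   f * r^2 = {f * r**2:.3e}")
```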
=== Mass proportionality ===
From the premises outlined so far, there arises only a force which is proportional to the surface of the bodies. But gravity is proportional to the masses. To satisfy the need for mass proportionality, the theory posits that a) the basic elements of matter are very small so that gross matter consists mostly of empty space, and b) that the particles are so small, that only a small fraction of them would be intercepted by gross matter. The result is, that the "shadow" of each body is proportional to the surface of every single element of matter. If it is then assumed that the elementary opaque elements of all matter are identical (i.e., having the same ratio of density to area), it will follow that the shadow effect is, at least approximately, proportional to the mass (P5).
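A small sketch of the porosity requirement (assuming, purely for illustration, simple exponential attenuation of the flux through matter of a given "optical depth"; the numbers are arbitrary): the intercepted fraction of the flux, and hence the shadow force, is proportional to the amount of matter only while the optical depth is small, and saturates once matter becomes opaque.

```python
import math

def intercepted_fraction(optical_depth):
    """Fraction of the flux intercepted by a column of matter, assuming
    exponential attenuation with the given optical depth (number of
    opaque elements along the line of sight times their cross-section
    per unit area)."""
    return 1.0 - math.exp(-optical_depth)

# In the very porous (dilute) limit the intercepted fraction grows linearly
# with the amount of matter, i.e. the shadow is proportional to mass;
# for large optical depth it saturates and the proportionality is lost.
for tau in [1e-6, 2e-6, 4e-6, 0.5, 1.0, 2.0]:
    print(f"optical depth {tau:8.2e} -> intercepted fraction {intercepted_fraction(tau):.6e}")
```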
== Fatio ==
Nicolas Fatio presented the first formulation of his thoughts on gravitation in a letter to Christiaan Huygens in the spring of 1690. Two days later Fatio read the content of the letter before the Royal Society in London. In the following years Fatio composed several draft manuscripts of his major work De la Cause de la Pesanteur, but none of this material was published in his lifetime. In 1731 Fatio also sent his theory as a Latin poem, in the style of Lucretius, to the Paris Academy of Science, but it was dismissed. Some fragments of these manuscripts and copies of the poem were later acquired by Le Sage, who failed to find a publisher for Fatio's papers. It was not until 1929 that the only complete copy of Fatio's manuscript was published, by Karl Bopp, and in 1949 Gagnebin used the collected fragments in Le Sage's possession to reconstruct the paper. The Gagnebin edition includes revisions made by Fatio as late as 1743, forty years after he composed the draft on which the Bopp edition was based. However, the second half of the Bopp edition contains the mathematically most advanced parts of Fatio's theory, which were not included by Gagnebin in his edition. For a detailed analysis of Fatio's work, and a comparison between the Bopp and the Gagnebin editions, see Zehe. The following description is mainly based on the Bopp edition.
=== Features of Fatio's theory ===
==== Fatio's pyramid (Problem I) ====
Fatio assumed that the universe is filled with minute particles, which move indiscriminately with very high speed and rectilinearly in all directions. To illustrate his thoughts he used the following example: Suppose an object C, on which an infinitely small plane zz and a sphere centered about zz are drawn. Into this sphere Fatio placed the pyramid PzzQ, in which some particles stream in the direction of zz and some particles, already reflected by C, depart from zz. Fatio proposed that the mean velocity of the reflected particles is lower, and therefore their momentum is weaker than that of the incident particles. The result is one stream, which pushes all bodies in the direction of zz. So on one hand the speed of the stream remains constant, but on the other hand the density of the stream increases closer to zz, and therefore its intensity is proportional to 1/r². And because one can draw an infinite number of such pyramids around C, the proportionality applies to the entire range around C.
==== Reduced speed ====
In order to justify the assumption, that the particles are traveling after their reflection with diminished velocities, Fatio stated the following assumptions:
Either ordinary matter, or the gravific particles, or both are inelastic, or
the impacts are fully elastic, but the particles are not absolutely hard, and therefore are in a state of vibration after the impact, and/or
due to friction the particles begin to rotate after their impacts.
These passages are the most incomprehensible parts of Fatio's theory, because he never clearly decided which sort of collision he actually preferred. However, in the last version of his theory in 1742 he shortened the related passages and ascribed "perfect elasticity or spring force" to the particles and, on the other hand, "imperfect elasticity" to gross matter, so that the particles would be reflected with diminished velocities. Additionally, Fatio faced another problem: What happens if the particles collide with each other? Inelastic collisions would lead to a steady decrease of the particle speed and therefore a decrease of the gravitational force. To avoid this problem, Fatio supposed that the diameter of the particles is very small compared to their mutual distance, so their interactions are very rare.
==== Condensation ====
Fatio thought for a long time that, since corpuscles approach material bodies at a higher speed than they recede from them (after reflection), there would be a progressive accumulation of corpuscles near material bodies (an effect which he called "condensation"). However, he later realized that although the incoming corpuscles are quicker, they are spaced further apart than are the reflected corpuscles, so the inward and outward flow rates are the same. Hence there is no secular accumulation of corpuscles, i.e., the density of the reflected corpuscles remains constant (assuming that they are small enough that no noticeably greater rate of self-collision occurs near the massive body). More importantly, Fatio noted that, by increasing both the velocity and the elasticity of the corpuscles, the difference between the speeds of the incoming and reflected corpuscles (and hence the difference in densities) can be made arbitrarily small while still maintaining the same effective gravitational force.
==== Porosity of gross matter ====
In order to ensure mass proportionality, Fatio assumed that gross matter is extremely permeable to the flux of corpuscles. He sketched 3 models to justify this assumption:
He assumed that matter is an accumulation of small "balls" whereby their diameter compared with their distance among themselves is "infinitely" small. But he rejected this proposal, because under this condition the bodies would approach each other and therefore would not remain stable.
Then he assumed that the balls could be connected through bars or lines and would form some kind of crystal lattice. However, he rejected this model too – if several atoms are together, the gravific fluid is not able to penetrate this structure equally in all direction, and therefore mass proportionality is impossible.
At the end Fatio also removed the balls and only left the lines or the net. By making them "infinitely" smaller than their distance among themselves, thereby a maximum penetration capacity could be achieved.
==== Pressure force of the particles (Problem II) ====
Already in 1690 Fatio assumed that the "push force" exerted by the particles on a plane surface is one sixth of the force that would be produced if all particles were lined up normal to the surface. Fatio now gave a proof of this proposal by determining the force exerted by the particles on a certain point zz. He derived the formula p = ρv²zz/6. This solution is very similar to the formula known in the kinetic theory of gases, p = ρv²/3, which was found by Daniel Bernoulli in 1738. This was the first time that a solution analogous to the similar result in kinetic theory was pointed out – long before the basic concept of the latter theory was developed. However, Bernoulli's value is twice as large as Fatio's, because according to Zehe, Fatio only calculated the value mv for the change of momentum after the collision, not 2mv, and therefore got the wrong result. (His result is only correct in the case of totally inelastic collisions.) Fatio tried to use his solution not only for explaining gravitation, but for explaining the behaviour of gases as well. He tried to construct a thermometer which would indicate the "state of motion" of the air molecules and therefore estimate the temperature. But Fatio (unlike Bernoulli) did not identify heat with the motion of the air particles – he used another fluid, which should be responsible for this effect. It is also unknown whether Bernoulli was influenced by Fatio.
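The factor-of-two discrepancy can be made explicit with the standard kinetic-theory argument (a sketch, not Fatio's or Bernoulli's own notation; ρ is the mass density of the particle gas, v its speed, and isotropy gives ⟨v_z²⟩ = v²/3): an elastic reflection transfers momentum 2mv_z per impact, whereas counting only mv_z per impact, as Fatio effectively did according to Zehe, gives half the pressure.
{\displaystyle p_{\text{elastic}}=\rho \langle v_{z}^{2}\rangle ={\tfrac {1}{3}}\rho v^{2},\qquad p_{\text{Fatio}}={\tfrac {1}{2}}\rho \langle v_{z}^{2}\rangle ={\tfrac {1}{6}}\rho v^{2}.}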
==== Infinity (Problem III) ====
In this chapter Fatio examines the connections between the term infinity and its relations to his theory. Fatio often justified his considerations with the fact that different phenomena are "infinitely smaller or larger" than others and so many problems can be reduced to an undetectable value. For example, the diameter of the bars is infinitely smaller than their distance to each other; or the speed of the particles is infinitely larger than those of gross matter; or the speed difference between reflected and non-reflected particles is infinitely small.
==== Resistance of the medium (Problem IV) ====
This is the mathematically most complex part of Fatio's theory. There he tried to estimate the resistance of the particle streams to moving bodies. Suppose u is the velocity of gross matter, v the velocity of the gravific particles, and ρ the density of the medium. In the case v ≪ u and ρ = constant, Fatio stated that the resistance is ρu². In the case v ≫ u and ρ = constant, the resistance is (4/3)ρuv. Now, Newton stated that the lack of resistance to orbital motion requires an extreme sparseness of any medium in space. So Fatio decreased the density of the medium and stated that, to maintain sufficient gravitational force, this reduction must be compensated by making v inversely proportional to the square root of the density. This follows from Fatio's particle pressure, which is proportional to ρv². According to Zehe, Fatio's attempt to increase v to a very high value would actually leave the resistance very small compared with gravity, because the resistance in Fatio's model is proportional to ρuv but gravity (i.e. the particle pressure) is proportional to ρv².
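Zehe's point can be restated compactly using only the proportionalities quoted in this paragraph (a sketch, not Fatio's own notation): if the particle pressure responsible for gravity is held fixed, increasing v makes the drag on a body moving with speed u negligible.
{\displaystyle F_{\text{drag}}\propto \rho uv,\qquad F_{\text{grav}}\propto \rho v^{2},\qquad {\frac {F_{\text{drag}}}{F_{\text{grav}}}}\propto {\frac {u}{v}}\to 0\quad {\text{as }}v\to \infty {\text{ at fixed }}\rho v^{2}.}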
=== Reception of Fatio's theory ===
Fatio was in communication with some of the most famous scientists of his time.
There was a strong personal relationship between Isaac Newton and Fatio in the years 1690 to 1693. Newton's statements on Fatio's theory differed widely. For example, after describing the necessary conditions for a mechanical explanation of gravity, he wrote in an (unpublished) note in his own printed copy of the Principia in 1692: The unique hypothesis by which gravity can be explained is however of this kind, and was first devised by the most ingenious geometer Mr. N. Fatio. On the other hand, Fatio himself stated that although Newton had commented privately that Fatio's theory was the best possible mechanical explanation of gravity, he also acknowledged that Newton tended to believe that the true explanation of gravitation was not mechanical. Also, Gregory noted in his "Memoranda": "Mr. Newton and Mr. Halley laugh at Mr. Fatio's manner of explaining gravity." This was allegedly noted by him on December 28, 1691. However, the real date is unknown, because both the ink and the quill used differ from the rest of the page. After 1694, the relationship between the two men cooled down.
Christiaan Huygens was the first person informed by Fatio of his theory, but never accepted it. Fatio believed he had convinced Huygens of the consistency of his theory, but Huygens denied this in a letter to Gottfried Leibniz. There was also a short correspondence between Fatio and Leibniz on the theory. Leibniz criticized Fatio's theory for demanding empty space between the particles, which was rejected by him (Leibniz) on philosophical grounds. Jakob Bernoulli expressed an interest in Fatio's Theory, and urged Fatio to write his thoughts on gravitation in a complete manuscript, which was actually done by Fatio. Bernoulli then copied the manuscript, which now resides in the university library of Basel, and was the base of the Bopp edition.
Nevertheless, Fatio's theory remained largely unknown with a few exceptions like Cramer and Le Sage, because he never was able to formally publish his works and he fell under the influence of a group of religious fanatics called the "French prophets" (which belonged to the camisards) and therefore his public reputation was ruined.
== Cramer and Redeker ==
In 1731 the Swiss mathematician Gabriel Cramer published a dissertation, at the end of which appeared a sketch of a theory very similar to Fatio's – including net structure of matter, analogy to light, shading – but without mentioning Fatio's name. It was known to Fatio that Cramer had access to a copy of his main paper, so he accused Cramer of only repeating his theory without understanding it. It was also Cramer who informed Le Sage about Fatio's theory in 1749. In 1736 the German physician Franz Albert Redeker also published a similar theory. Any connection between Redeker and Fatio is unknown.
== Le Sage ==
The first exposition of his theory, Essai sur l'origine des forces mortes, was sent by Le Sage to the Academy of Sciences at Paris in 1748, but it was never published. According to Le Sage, after creating and sending his essay he was informed of the theories of Fatio, Cramer and Redeker. In 1756 one of his expositions of the theory was published for the first time, and in 1758 he sent a more detailed exposition, Essai de Chymie Méchanique, to a competition at the Academy of Sciences in Rouen. In this paper he tried to explain both the nature of gravitation and chemical affinities. The exposition of the theory that became accessible to a broader public was Lucrèce Newtonien (1784), in which the correspondence with Lucretius' concepts was fully developed. Another exposition of the theory was published from Le Sage's notes posthumously by Pierre Prévost in 1818.
=== Le Sage's basic concept ===
Le Sage discussed the theory in great detail and he proposed quantitative estimates for some of the theory's parameters.
He called the gravitational particles ultramundane corpuscles, because he supposed them to originate beyond our known universe. The distribution of the ultramundane flux is isotropic and the laws of its propagation are very similar to that of light.
Le Sage argued that no gravitational force would arise if the matter–particle collisions are perfectly elastic. So he proposed that the particles and the basic constituents of matter are "absolutely hard" and asserted that this implies a complicated form of interaction, completely inelastic in the direction normal to the surface of the ordinary matter, and perfectly elastic in the direction tangential to the surface. He then commented that this implies the mean speed of scattered particles is 2/3 of their incident speed. To avoid inelastic collisions between the particles, he supposed that their diameter is very small relative to their mutual distance.
The resistance of the flux is proportional to uv (where v is the velocity of the particles and u that of gross matter) and gravity is proportional to v², so the ratio of resistance to gravity can be made arbitrarily small by increasing v. Therefore, he suggested that the ultramundane corpuscles might move at the speed of light, but after further consideration he adjusted this to 10⁵ times the speed of light.
To maintain mass proportionality, ordinary matter consists of cage-like structures, whose diameter is only a 10⁷th part of their mutual distance. Also the "bars", which constitute the cages, are small (around 10²⁰ times as long as thick) relative to the dimensions of the cages, so the particles can travel through them nearly unhindered.
Le Sage also attempted to use the shadowing mechanism to account for the forces of cohesion, and for forces of different strengths, by positing the existence of multiple species of ultramundane corpuscles of different sizes, as illustrated in Figure 9.
Le Sage said that he was the first to draw all the consequences from the theory, and Prévost likewise said that Le Sage's theory was more developed than Fatio's. However, by comparing the two theories, and after a detailed analysis of Fatio's papers (which were also in Le Sage's possession), Zehe judged that Le Sage contributed nothing essentially new and often did not reach Fatio's level.
=== Reception of Le Sage's theory ===
Le Sage's ideas were not well-received during his day, except for some of his friends and associates like Pierre Prévost, Charles Bonnet, Jean-André Deluc, Charles Mahon, 3rd Earl Stanhope and Simon L'Huillier. They mentioned and described Le Sage's theory in their books and papers, which were used by their contemporaries as a secondary source for Le Sage's theory (because of the lack of published papers by Le Sage himself).
==== Euler, Bernoulli, and Boscovich ====
Leonhard Euler once remarked that Le Sage's model was "infinitely better" than that of all other authors, and that all objections are balanced out in this model, but later he said the analogy to light had no weight for him, because he believed in the wave nature of light. After further consideration, Euler came to disapprove of the model, and he wrote to Le Sage:
You must excuse me Sir, if I have a great repugnance for your ultramundane corpuscles, and I shall always prefer to confess my ignorance of the cause of gravity than to have recourse to such strange hypotheses.
Daniel Bernoulli was pleased by the similarity of Le Sage's model and his own thoughts on the nature of gases. However, Bernoulli himself was of the opinion that his own kinetic theory of gases was only a speculation, and likewise he regarded Le Sage's theory as highly speculative.
Roger Joseph Boscovich pointed out that Le Sage's theory is the first that can actually explain gravity by mechanical means. However, he rejected the model because of the enormous and unused quantity of ultramundane matter. John Playfair described Boscovich's arguments by saying:
An immense multitude of atoms, thus destined to pursue their never ending journey through the infinity of space, without changing their direction, or returning to the place from which they came, is a supposition very little countenanced by the usual economy of nature. Whence is the supply of these innumerable torrents; must it not involve a perpetual exertion of creative power, infinite both in extent and in duration?
A very similar argument was later given by Maxwell (see the sections below). Additionally, Boscovich denied the existence of all contact and immediate impulse at all, but proposed repulsive and attractive actions at a distance.
==== Lichtenberg, Kant, and Schelling ====
Georg Christoph Lichtenberg's knowledge of Le Sage's theory was based on "Lucrece Newtonien" and a summary by Prévost. Lichtenberg originally believed (like Descartes) that every explanation of natural phenomena must be based on rectilinear motion and impulsion, and Le Sage's theory fulfilled these conditions. In 1790 he expressed in one of his papers his enthusiasm for the theory, believing that Le Sage's theory embraces all of our knowledge and makes any further dreaming on that topic useless. He went on by saying: "If it is a dream, it is the greatest and the most magnificent which was ever dreamed..." and that we can fill with it a gap in our books, which can only be filled by a dream.
He often referred to Le Sage's theory in his lectures on physics at the University of Göttingen. However, around 1796 Lichtenberg changed his views after being persuaded by the arguments of Immanuel Kant, who criticized any kind of theory that attempted to replace attraction with impulsion. Kant pointed out that the very existence of spatially extended configurations of matter, such as particles of non-zero radius, implies the existence of some sort of binding force to hold the extended parts of the particle together. Now, that force cannot be explained by the push from the gravitational particles, because those particles too must hold together in the same way. To avoid this circular reasoning, Kant asserted that there must exist a fundamental attractive force. This was precisely the same objection that had always been raised against the impulse doctrine of Descartes in the previous century, and had led even the followers of Descartes to abandon that aspect of his philosophy.
Another German philosopher, Friedrich Wilhelm Joseph Schelling, rejected Le Sage's model because its mechanistic materialism was incompatible with Schelling's very idealistic and anti-materialistic philosophy.
==== Laplace ====
Partly in consideration of Le Sage's theory, Pierre-Simon Laplace undertook to determine the necessary speed of gravity in order to be consistent with astronomical observations. He calculated that the speed must be "at least a hundred millions of times greater than that of light", in order to avoid unacceptably large inequalities due to aberration effects in the lunar motion. This was taken by most researchers, including Laplace, as support for the Newtonian concept of instantaneous action at a distance, and to indicate the implausibility of any model such as Le Sage's. Laplace also argued that to maintain mass-proportionality the upper limit for Earth's molecular surface area is at most one ten-millionth of Earth's surface. To Le Sage's disappointment, Laplace never directly mentioned Le Sage's theory in his works.
== Kinetic theory ==
Because the theories of Fatio, Cramer and Redeker were not widely known, Le Sage's exposition of the theory enjoyed a resurgence of interest in the latter half of the 19th century, coinciding with the development of the kinetic theory of gases.
=== Leray ===
Since Le Sage's particles must lose speed when colliding with ordinary matter (in order to produce a net gravitational force), a huge amount of energy must be converted to internal energy modes. If those particles have no internal energy modes, the excess energy can only be absorbed by ordinary matter. Addressing this problem, Armand Jean Leray proposed a particle model (perfectly similar to Le Sage's) in which he asserted that the absorbed energy is used by the bodies to produce magnetism and heat. He suggested that this might answer the question of where the energy output of the stars comes from.
=== Kelvin and Tait ===
Le Sage's own theory became a subject of renewed interest in the latter part of the 19th century following a paper published by Kelvin in 1873. Unlike Leray, who treated the heat problem imprecisely, Kelvin stated that the absorbed energy represents a very high heat, sufficient to vaporize any object in a fraction of a second. So Kelvin reiterated an idea that Fatio had originally proposed in the 1690s for attempting to deal with the thermodynamic problem inherent in Le Sage's theory. He proposed that the excess heat might be absorbed by internal energy modes of the particles themselves, based on his proposal of the vortex-nature of matter. In other words, the original translational kinetic energy of the particles is transferred to internal energy modes, chiefly vibrational or rotational, of the particles. Appealing to Clausius's proposition that the energy in any particular mode of a gas molecule tends toward a fixed ratio of the total energy, Kelvin went on to suggest that the energized but slower moving particles would subsequently be restored to their original condition due to collisions (on the cosmological scale) with other particles. Kelvin also asserted that it would be possible to extract limitless amounts of free energy from the ultramundane flux, and described a perpetual motion machine to accomplish this.
Subsequently, Peter Guthrie Tait called the Le Sage theory the only plausible explanation of gravitation which had been propounded at that time. He went on to say:
The most singular thing about it is that, if it be true, it will probably lead us to regard all kinds of energy as ultimately Kinetic.
Kelvin himself, however, was not optimistic that Le Sage's theory could ultimately give a satisfactory account of phenomena. After his brief paper in 1873 noted above, he never returned to the subject, except to make the following comment:
This kinetic theory of matter is a dream, and can be nothing else, until it can explain chemical affinity, electricity, magnetism, gravitation, and the inertia of masses (that is, crowds) of vortices. Le Sage's theory might give an explanation of gravity and of its relation to inertia of masses, on the vortex theory, were it not for the essential aeolotropy of crystals, and the seemingly perfect isotropy of gravity. No finger post pointing towards a way that can possibly lead to a surmounting of this difficulty, or a turning of its flank, has been discovered, or imagined as discoverable.
=== Preston ===
Samuel Tolver Preston illustrated that many of the postulates introduced by Le Sage concerning the gravitational particles, such as rectilinear motion, rare interactions, etc., could be collected under the single notion that they behaved (on the cosmological scale) as the particles of a gas with an extremely long mean free path. Preston also accepted Kelvin's proposal of internal energy modes of the particles. He illustrated Kelvin's model by comparing it with the collision of a steel ring and an anvil – the anvil would not be shaken very much, but the steel ring would be in a state of vibration and therefore depart with diminished velocity. He also argued that the mean free path of the particles is at least the distance between the planets – over longer distances the particles regain their translational energy due to collisions with each other, so he concluded that over such distances there would be no attraction between bodies, independent of their size. Paul Drude suggested that this could possibly be connected with some theories of Carl Gottfried Neumann and Hugo von Seeliger, who proposed some sort of absorption of gravity in open space.
=== Maxwell ===
A review of the Kelvin-Le Sage theory was published by James Clerk Maxwell in the Ninth Edition of the Encyclopædia Britannica under the title Atom in 1875. After describing the basic concept of the theory he wrote (with sarcasm according to Aronson):
Here, then, seems to be a path leading towards an explanation of the law of gravitation, which, if it can be shown to be in other respects consistent with facts, may turn out to be a royal road into the very arcana of science.
Maxwell commented, regarding Kelvin's suggestion of different energy modes of the particles, that this implies the gravitational particles are not simple primitive entities, but rather systems with their own internal energy modes, which must be held together by (unexplained) forces of attraction. He argues that the temperature of bodies must tend to approach that at which the average kinetic energy of a molecule of the body would be equal to the average kinetic energy of an ultra-mundane particle. Since the latter quantity must be much greater than the former, he concludes that ordinary matter would be incinerated within seconds under the Le Sage bombardment. He wrote:
We have devoted more space to this theory than it seems to deserve, because it is ingenious, and because it is the only theory of the cause of gravitation which has been so far developed as to be capable of being attacked and defended.
Maxwell also argued that the theory requires "an enormous expenditure of external power" and therefore violates the conservation of energy, a fundamental principle of nature. Preston responded to Maxwell's criticism by arguing that the kinetic energy of each individual simple particle could be made arbitrarily low by positing a sufficiently low mass (and higher number density) for the particles. But this issue was later discussed in more detail by Poincaré, who showed that the thermodynamic problem within Le Sage models remained unresolved.
=== Isenkrahe, Ryšánek, du Bois-Reymond ===
Caspar Isenkrahe presented his model in a variety of publications between 1879 and 1915.
His basic assumptions were very similar to those of Le Sage and Preston, but he gave a more detailed application of the kinetic theory. However, by asserting that the velocity of the corpuscles after collision was reduced without any corresponding increase in the energy of any other object, his model violated the conservation of energy. He noted that there is a connection between the weight of a body and its density (because any decrease in the density of an object reduces the internal shielding) so he went on to assert that warm bodies should be heavier than colder ones (related to the effect of thermal expansion).
In another model, Adalbert Ryšánek in 1887 also gave a careful analysis, including an application of Maxwell's law of the particle velocities in a gas. He distinguished between a gravitational and a luminiferous aether. This separation of the two media was necessary, because according to his calculations the absence of any drag effect in the orbit of Neptune implies a lower limit for the particle velocity of 5 · 10¹⁹ cm/s. He (like Leray) argued that the absorbed energy is converted into heat, which might be transferred into the luminiferous aether and/or is used by the stars to maintain their energy output. However, these qualitative suggestions were unsupported by any quantitative evaluation of the amount of heat actually produced.
In 1888 Paul du Bois-Reymond argued against Le Sage's model, partly because the predicted force of gravity in Le Sage's theory is not strictly proportional to mass. In order to achieve exact mass proportionality as in Newton's theory (which implies no shielding or saturation effects and an infinitely porous structure of matter), the ultramundane flux must be infinitely intense. Du Bois-Reymond rejected this as absurd. In addition, du Bois-Reymond like Kant observed that Le Sage's theory cannot meet its goal, because it invokes concepts like "elasticity" and "absolute hardness" etc., which (in his opinion) can only be explained by means of attractive forces. The same problem arises for the cohesive forces in molecules. As a result, the basic intent of such models, which is to dispense with elementary forces of attraction, is impossible.
== Wave models ==
=== Keller and Boisbaudran ===
In 1863, François Antoine Edouard and Em. Keller presented a theory by using a Le Sage type mechanism in combination with longitudinal waves of the aether. They supposed that those waves are propagating in every direction and losing some of their momentum after the impact on bodies, so between two bodies the pressure exerted by the waves is weaker than the pressure around them. In 1869, Paul-Emile Lecoq de Boisbaudran presented the same model as Leray (including absorption and the production of heat etc.), but like Keller and Keller, he replaced the particles with longitudinal waves of the aether.
=== Lorentz ===
After these attempts, other authors in the early 20th century substituted electromagnetic radiation for Le Sage's particles. This was in connection with Lorentz ether theory and the electron theory of that time, in which the electrical constitution of matter was assumed.
In 1900 Hendrik Lorentz wrote that Le Sage's particle model is not consistent with the electron theory of his time. But the realization that trains of electromagnetic waves could produce some pressure, in combination with the penetrating power of Röntgen rays (now called x-rays), led him to conclude that nothing argues against the possible existence of even more penetrating radiation than x-rays, which could replace Le Sage's particles. Lorentz showed that an attractive force between charged particles (which might be taken to model the elementary subunits of matter) would indeed arise, but only if the incident energy were entirely absorbed. This was the same fundamental problem which had afflicted the particle models. So Lorentz wrote:
The circumstance however, that this attraction could only exist, if in some way or other electromagnetic energy were continually disappearing, is so serious a difficulty, that what has been said cannot be considered as furnishing an explanation of gravitation. Nor is this the only objection that can be raised. If the mechanism of gravitation consisted in vibrations which cross the aether with the velocity of light, the attraction ought to be modified by the motion of the celestial bodies to a much larger extent than astronomical observations make it possible to admit.
In 1922 Lorentz first examined Martin Knudsen's investigation on rarefied gases and in connection with that he discussed Le Sage's particle model, followed by a summary of his own electromagnetic Le Sage model – but he repeated his conclusion from 1900: Without absorption no gravitational effect.
In 1913 David Hilbert referred to Lorentz's theory and criticised it by arguing that no force in the form 1/r2 can arise, if the mutual distance of the atoms is large enough when compared with their wavelength.
=== J.J. Thomson ===
In 1904 J. J. Thomson considered a Le Sage-type model in which the primary ultramundane flux consisted of a hypothetical form of radiation much more penetrating even than x-rays. He argued that Maxwell's heat problem might be avoided by assuming that the absorbed energy is not converted into heat, but re-radiated in a still more penetrating form. He noted that this process possibly can explain where the energy of radioactive substances comes from – however, he stated that an internal cause of radioactivity is more probable. In 1911 Thomson went back to this subject in his article "Matter" in the Encyclopædia Britannica Eleventh Edition. There he stated, that this form of secondary radiation is somewhat analogous to how the passage of electrified particles through matter causes the radiation of the even more penetrating x-rays. He remarked:
It is a very interesting result of recent discoveries that the machinery which Le Sage introduced for the purpose of his theory has a very close analogy with things for which we have now direct experimental evidence....Röntgen rays, however, when absorbed do not, as far as we know, give rise to more penetrating Röntgen rays as they should to explain attraction, but either to less penetrating rays or to rays of the same kind.
=== Tommasina and Brush ===
Unlike Lorentz and Thomson, Thomas Tommasina between 1903 and 1928 suggested long wavelength radiation to explain gravity, and short wavelength radiation for explaining the cohesive forces of matter. Charles F. Brush in 1911 also proposed long wavelength radiation. But he later revised his view and changed to extremely short wavelengths.
== Later assessments ==
=== Darwin ===
In 1905, George Darwin calculated the gravitational force between two bodies at extremely close range to determine if geometrical effects would lead to a deviation from Newton's law. Here Darwin replaced Le Sage's cage-like units of ordinary matter with microscopic hard spheres of uniform size. He concluded that only in the instance of perfectly inelastic collisions (zero reflection) would Newton's law stand up, thus reinforcing the thermodynamic problem of Le Sage's theory. Also, such a theory is only valid if the normal and the tangential components of impact are totally inelastic (contrary to Le Sage's scattering mechanism), and the elementary particles are exactly of the same size. He went on to say that the emission of light is the exact converse of the absorption of Le Sage's particles. A body with different surface temperatures will move in the direction of the colder part. In a later review of gravitational theories, Darwin briefly described Le Sage's theory and said he gave the theory serious consideration, but then wrote:
I will not refer further to this conception, save to say that I believe that no man of science is disposed to accept it as affording the true road.
=== Poincaré ===
Partially based on the calculations of Darwin, an important criticism was given by Henri Poincaré in 1908. He concluded that the attraction is proportional to
{\displaystyle S{\sqrt {\rho }}v}
, where S is earth's molecular surface area, v is the velocity of the particles, and ρ is the density of the medium. Following Laplace, he argued that to maintain mass-proportionality the upper limit for S is at most one ten-millionth of the Earth's surface. Now, drag (i.e. the resistance of the medium) is proportional to Sρv and therefore the ratio of drag to attraction is inversely proportional to Sv. To reduce drag, Poincaré calculated a lower limit for v of 24 · 10¹⁷ times the speed of light. So there are lower limits for Sv and v, and an upper limit for S, and with those values one can calculate the heat produced, which is proportional to Sρv³. The calculation shows that the earth's temperature would rise by 10²⁶ degrees per second. Poincaré remarked "that the earth could not long stand such a regime." Poincaré also analyzed some wave models (Tommasina and Lorentz), remarking that they suffered the same problems as the particle models. To reduce drag, superluminal wave velocities were necessary, and they would still be subject to the heating problem. After describing a re-radiation model similar to Thomson's, he concluded: "Such are the complicated hypotheses to which we are led when we seek to make Le Sage's theory tenable".
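Poincaré's chain of bounds can be summarized as a scaling argument reconstructed from the proportionalities quoted in this paragraph (a sketch, not Poincaré's own notation): holding the observed attraction A fixed ties ρ to S and v, so the upper limit on S and the lower limit on v together force an enormous minimum heating rate.
{\displaystyle A\propto S{\sqrt {\rho }}\,v={\text{const}}\;\Rightarrow \;{\sqrt {\rho }}\propto {\frac {1}{Sv}},\qquad {\frac {F_{\text{drag}}}{A}}\propto {\sqrt {\rho }}\propto {\frac {1}{Sv}},\qquad {\dot {Q}}\propto S\rho v^{3}\propto {\frac {v}{S}}.}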
He also stated that if in Lorentz' model the absorbed energy were fully converted into heat, that would raise the earth's temperature by 10¹³ degrees per second. Poincaré then went on to consider Le Sage's theory in the context of the "new dynamics" that had been developed at the end of the 19th and the beginning of the 20th centuries, specifically recognizing the relativity principle. For a particle theory, he remarked that "it is difficult to imagine a law of collision compatible with the principle of relativity", and the problems of drag and heating remain.
== Predictions and criticism ==
=== Matter and particles ===
==== Porosity of matter ====
A basic prediction of the theory is the extreme porosity of matter. As supposed by Fatio and Le Sage in 1690/1758 (and before them, Huygens), matter must consist mostly of empty space so that the very small particles can penetrate the bodies nearly undisturbed and therefore every single part of matter can take part in the gravitational interaction. This prediction has been (in some respects) confirmed over the course of time. Indeed, matter consists mostly of empty space and certain particles like neutrinos can pass through matter nearly unhindered. However, the image of elementary particles as classical entities that interact directly, determined by their shapes and sizes (in the sense of the net structure proposed by Fatio/Le Sage and the equisized spheres of Isenkrahe/Darwin), is not consistent with current understanding of elementary particles. The Lorentz/Thomson proposal of electrically charged particles as the basic constituents of matter is inconsistent with current physics as well.
==== Cosmic radiation ====
Every Le Sage-type model assumes the existence of a space-filling isotropic flux or radiation of enormous intensity and penetrating capability. This has some similarity to the cosmic microwave background radiation (CMBR) discovered in the 20th century. CMBR is indeed a space-filling and fairly isotropic flux, but its intensity is extremely small, as is its penetrating capability. The flux of neutrinos, emanating from (for example) the sun, possesses the penetrating properties envisaged by Le Sage for his ultramundane corpuscles, but this flux is not isotropic (since individual stars are the main sources of neutrinos) and the intensity is even less than that of the CMBR. Of course, neither the CMBR nor neutrinos propagate at superluminal speeds, which is another necessary attribute of Le Sage's particles. From a more modern point of view, discarding the simple "push" concept of Le Sage, the suggestion that the neutrino (or some other particle similar to the neutrino) might be the mediating particle in a quantum field theory of gravitation was considered and disproved by Feynman.
=== Gravitational shielding ===
Although matter is postulated to be very sparse in the Fatio–Le Sage theory, it cannot be perfectly transparent, because in that case no gravitational force would exist. However, the lack of perfect transparency leads to problems: with sufficient mass the amount of shading produced by two pieces of matter becomes less than the sum of the shading that each of them would produce separately, due to the overlap of their shadows (P10, above). This hypothetical effect, called gravitational shielding, implies that addition of matter does not result in a direct proportional increase in the gravitational mass. Therefore, in order to be viable, Fatio and Le Sage postulated that the shielding effect is so small as to be undetectable, which requires that the interaction cross-section of matter must be extremely small (P10, below). This places an extremely high lower-bound on the intensity of the flux required to produce the observed force of gravity. Any form of gravitational shielding would represent a violation of the equivalence principle, and would be inconsistent with the extremely precise null result observed in the Eötvös experiment and its successors – all of which have instead confirmed the precise equivalence of active and passive gravitational mass with inertial mass that was predicted by general relativity. For more historical information on the connection between gravitational shielding and Le Sage gravity, see Martins, and Borzeszkowski et al.
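The overlap argument can be stated compactly under the simplifying assumption of exponential attenuation of the flux, with optical depths τ₁ and τ₂ for the two bodies (a sketch, not part of the original theory): the combined shadow always falls short of the sum of the separate shadows, and the deficit is second order in the optical depths, which is why the theory needs extremely small interaction cross-sections.
{\displaystyle 1-e^{-(\tau _{1}+\tau _{2})}=(1-e^{-\tau _{1}})+(1-e^{-\tau _{2}})-(1-e^{-\tau _{1}})(1-e^{-\tau _{2}})\leq (1-e^{-\tau _{1}})+(1-e^{-\tau _{2}}).}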
Since Isenkrahe's proposal on the connection between density, temperature and weight was based purely on the anticipated effects of changes in material density, and since temperature at a given density can be increased or decreased, Isenkrahe's comments do not imply any fundamental relation between temperature and gravitation. (There actually is a relation between temperature and gravitation, as well as between binding energy and gravitation, but these actual effects have nothing to do with Isenkrahe's proposal. See the section below on "Coupling to energy".) Regarding the prediction of a relation between gravitation and density, all experimental evidence indicates that there is no such relation.
=== Speed of gravity ===
==== Drag ====
According to Le Sage's theory, an isolated body is subjected to drag if it is in motion relative to the unique isotropic frame of the ultramundane flux (i.e., the frame in which the speed of the ultramundane corpuscles is the same in all directions). This is due to the fact that, if a body is in motion, the particles striking the body from the front have a higher speed (relative to the body) than those striking the body from behind – this effect will act to decrease the distance between the sun and the earth. The magnitude of this drag is proportional to vu, where v is the speed of the particles and u is the speed of the body, whereas the characteristic force of gravity is proportional to v², so the ratio of drag to gravitational force is proportional to u/v. Thus for a given characteristic strength of gravity, the amount of drag for a given speed u can be made arbitrarily small by increasing the speed v of the ultramundane corpuscles. However, in order to reduce the drag to an acceptable level (i.e., consistent with observation) in terms of classical mechanics, the speed v must be many orders of magnitude greater than the speed of light. This makes Le Sage's theory fundamentally incompatible with the modern science of mechanics based on special relativity, according to which no particle (or wave) can exceed the speed of light. In addition, even if superluminal particles were possible, the effective temperature of such a flux would be sufficient to incinerate all ordinary matter in a fraction of a second.
==== Aberration ====
As shown by Laplace, another possible Le Sage effect is orbital aberration due to finite speed of gravity. Unless the Le Sage particles are moving at speeds much greater than the speed of light, as Le Sage and Kelvin supposed, there is a time delay in the interactions between bodies (the transit time). In the case of orbital motion this results in each body reacting to a retarded position of the other, which creates a leading force component. Contrary to the drag effect, this component will act to accelerate both objects away from each other. In order to maintain stable orbits, the effect of gravity must either propagate much faster than the speed of light or must not be a purely central force. This has been suggested by many as a conclusive disproof of any Le Sage type of theory. In contrast, general relativity is consistent with the lack of appreciable aberration identified by Laplace, because even though gravity propagates at the speed of light in general relativity, the expected aberration is almost exactly cancelled by velocity-dependent terms in the interaction.
=== Range of gravity ===
In many particle models, such as Kelvin's, the range of gravity is limited due to the nature of particle interactions amongst themselves. The range is effectively determined by the rate that the proposed internal modes of the particles can eliminate the momentum defects (shadows) that are created by passing through matter. Such predictions as to the effective range of gravity will vary and are dependent upon the specific aspects and assumptions as to the modes of interactions that are available during particle interactions. However, for this class of models the observed large-scale structure of the cosmos constrains such dispersion to those that will allow for the aggregation of such immense gravitational structures.
=== Energy ===
==== Absorption ====
As noted in the historical section, a major problem for every Le Sage model is the energy and heat issue. As Maxwell and Poincaré showed, inelastic collisions lead to a vaporization of matter within fractions of a second and the suggested solutions were not convincing. For example, Aronson gave a simple proof of Maxwell's assertion:
Suppose that, contrary to Maxwell's hypothesis, the molecules of gross matter actually possess more energy than the particles. In that case the particles would, on the average, gain energy in the collision and the particles intercepted by body B would be replaced by more energetic ones rebounding from body B. Thus the effect of gravity would be reversed: there would be a mutual repulsion between all bodies of mundane matter, contrary to observation. If, on the other hand, the average kinetic energies of the particles and of the molecules are the same, then no net transfer of energy would take place, and the collisions would be equivalent to elastic ones, which, as has been demonstrated, do not yield a gravitational force.
Likewise Isenkrahe's violation of the energy conservation law is unacceptable, and Kelvin's application of Clausius' theorem leads (as noted by Kelvin himself) to some sort of perpetual motion mechanism. The suggestion of a secondary re-radiation mechanism for wave models attracted the interest of JJ Thomson, but was not taken very seriously by either Maxwell or Poincaré, because it entails a gross violation of the second law of thermodynamics (huge amounts of energy spontaneously being converted from a colder to a hotter form), which is one of the most solidly established of all physical laws.
The energy problem has also been considered in relation to the idea of mass accretion in connection with the Expanding Earth theory. Among the early theorists to link mass increase in some sort of push gravity model to Earth expansion were Yarkovsky and Hilgenberg. The idea of mass accretion and the expanding earth theory are not currently considered to be viable by mainstream scientists. This is because, among other reasons, according to the principle of mass–energy equivalence, if the Earth was absorbing the energy of the ultramundane flux at the rate necessary to produce the observed force of gravity (i.e. by using the values calculated by Poincaré), its mass would be doubling in each fraction of a second.
==== Coupling to energy ====
Based on observational evidence, it is now known that gravity interacts with all forms of energy, and not just with mass. The electrostatic binding energy of the nucleus, the energy of weak interactions in the nucleus, and the kinetic energy of electrons in atoms, all contribute to the gravitational mass of an atom, as has been confirmed to high precision in Eötvös type experiments.
This means, for example, that when the atoms of a quantity of gas are moving more rapidly, the gravitation of that gas increases. Moreover, Lunar Laser Ranging experiments have shown that even gravitational binding energy itself also gravitates, with a strength consistent with the equivalence principle to high precision – which furthermore demonstrates that any successful theory of gravitation must be nonlinear and self-coupling.
Le Sage's theory does not predict any of these aforementioned effects, nor do any of the known variants of Le Sage's theory.
== Non-gravitational applications and analogies ==
=== Mock gravity ===
Lyman Spitzer in 1941 calculated that the absorption of radiation between two dust particles leads to a net attractive force which varies in proportion to 1/r² (evidently he was unaware of Le Sage's shadow mechanism and especially Lorentz's considerations on radiation pressure and gravity). George Gamow, who called this effect "mock gravity", proposed in 1949 that after the Big Bang the temperature of electrons dropped faster than the temperature of background radiation. Absorption of radiation led to a Le Sage mechanism between electrons, which might have had an important role in the process of galaxy formation shortly after the Big Bang. However, this proposal was disproved by Field in 1971, who showed that this effect was much too small, because electrons and background radiation were nearly in thermal equilibrium. Hogan and White proposed in 1986 that mock gravity might have influenced the formation of galaxies by absorption of pregalactic starlight. But it was shown by Wang and Field that any form of mock gravity is incapable of producing enough force to influence galaxy formation.
=== Plasma ===
The Le Sage mechanism also has been identified as a significant factor in the behavior of dusty plasma. A.M. Ignatov has shown that an attractive force arises between two dust grains suspended in an isotropic collisionless plasma due to inelastic collisions between ions of the plasma and the grains of dust. This attractive force is inversely proportional to the square of the distance between dust grains, and can counterbalance the Coulomb repulsion between dust grains.
=== Vacuum energy ===
In quantum field theory the existence of virtual particles is proposed, which lead to the so-called Casimir effect. Casimir calculated that between two plates only particles with specific wavelengths should be counted when calculating the vacuum energy. Therefore, the energy density between the plates is less if the plates are close together, leading to a net attractive force between the plates. However, the conceptual framework of this effect is very different from the theory of Fatio and Le Sage.
== Recent activity ==
The re-examination of Le Sage's theory in the 19th century identified several closely interconnected problems with the theory. These relate to excessive heating, frictional drag, shielding, and gravitational aberration. The recognition of these problems, in conjunction with a general shift away from mechanics-based theories, resulted in a progressive loss of interest in Le Sage's theory. Ultimately in the 20th century Le Sage's theory was eclipsed by Einstein's theory of general relativity.
In 1965 Richard Feynman examined the Fatio/Lesage mechanism, primarily as an example of an attempt to explain a "complicated" physical law (in this case, Newton's inverse-square law of gravity) in terms of simpler primitive operations without the use of complex mathematics, and also as an example of a failed theory. He notes that the mechanism of "bouncing particles" reproduces the inverse-square force law and that "the strangeness of the mathematical relation will be very much reduced", but then remarks that the scheme "does not work", because of the drag it predicts would be experienced by moving bodies.
There are occasional attempts to rehabilitate the theory outside the mainstream, including those of Radzievskii and Kagalnikova (1960), Shneiderov (1961), Buonomano and Engels (1976), Adamut (1982), Popescu (1982), Jaakkola (1996), Tom Van Flandern (1999), and Edwards (2014).
A variety of Le Sage models and related topics are discussed in Edwards, et al.
== Primary sources ==
== Secondary sources ==
== External links ==
Media related to Le Sage gravity at Wikimedia Commons | Wikipedia/Le_Sage's_theory_of_gravitation |
The theory of causal fermion systems is an approach to describe fundamental physics. It provides a unification of the weak, the strong and the electromagnetic forces with gravity at the level of classical field theory. Moreover, it gives quantum mechanics as a limiting case and has revealed close connections to quantum field theory. Therefore, it is a candidate for a unified physical theory.
Instead of introducing physical objects on a preexisting spacetime manifold, the general concept is to derive spacetime as well as all the objects therein as secondary objects from the structures of an underlying causal fermion system. This concept also makes it possible to generalize notions of differential geometry to the non-smooth setting. In particular, one can describe situations when spacetime no longer has a manifold structure on the microscopic scale (like a spacetime lattice or other discrete or continuous structures on the Planck scale). As a result, the theory of causal fermion systems is a proposal for quantum geometry and an approach to quantum gravity.
Causal fermion systems were introduced by Felix Finster and collaborators.
== Motivation and physical concept ==
The physical starting point is the fact that the Dirac equation in Minkowski space has solutions of negative energy which are usually associated to the Dirac sea. Taking the concept seriously that the states of the Dirac sea form an integral part of the physical system, one finds that many structures (like the causal and metric structures as well as the bosonic fields) can be recovered from the wave functions of the sea states. This leads to the idea that the wave functions of all occupied states (including the sea states) should be regarded as the basic physical objects, and that all structures in spacetime arise as a result of the collective interaction of the sea states with each other and with the additional particles and "holes" in the sea. Implementing this picture mathematically leads to the framework of causal fermion systems.
More precisely, the correspondence between the above physical situation and the mathematical framework is obtained as follows. All occupied states span a Hilbert space of wave functions in Minkowski space $\hat{M}$. The observable information on the distribution of the wave functions in spacetime is encoded in the local correlation operators $F(x)$, $x \in \hat{M}$, which in an orthonormal basis $(\psi_i)$ have the matrix representation
$\big(F(x)\big)^i_j = -\overline{\psi_i(x)}\,\psi_j(x)$
(where $\overline{\psi}$ is the adjoint spinor). In order to make the wave functions into the basic physical objects, one considers the set $\{F(x) \,|\, x \in \hat{M}\}$ as a set of linear operators on an abstract Hilbert space. The structures of Minkowski space are all disregarded, except for the volume measure $d^4x$, which is transformed to a corresponding measure on the linear operators (the "universal measure"). The resulting structures, namely a Hilbert space together with a measure on the linear operators thereon, are the basic ingredients of a causal fermion system.
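The construction of the local correlation operators can be illustrated numerically. The following Python sketch is a minimal toy example under stated assumptions: the Dirac representation of $\gamma^0$, the randomly generated sample spinors, and the number of occupied states are all hypothetical choices made for illustration, not part of the theory itself.

```python
import numpy as np

# Toy illustration (assumed conventions): gamma^0 in the Dirac representation,
# f occupied 4-component wave functions sampled at one spacetime point x.
gamma0 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)

rng = np.random.default_rng(0)
f = 6                                            # hypothetical number of occupied states
psi_at_x = rng.standard_normal((f, 4)) + 1j * rng.standard_normal((f, 4))

# Local correlation operator: (F(x))^i_j = -conj(psi_i(x)) psi_j(x),
# where the adjoint spinor is conj(psi) = psi^dagger gamma^0.
F_x = np.array([[-psi_at_x[i].conj() @ gamma0 @ psi_at_x[j] for j in range(f)]
                for i in range(f)])

eigenvalues = np.linalg.eigvalsh(F_x)            # F(x) is self-adjoint
print(np.allclose(F_x, F_x.conj().T))            # True
print(np.sum(np.abs(eigenvalues) > 1e-10))       # rank at most 4 (spin dimension 2)
```

In this toy setting the matrix $F(x)$ comes out self-adjoint with at most two positive and two negative eigenvalues, matching the spin dimension $n = 2$ of Dirac spinors.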
The above construction can also be carried out in more general spacetimes. Moreover, taking the abstract definition as the starting point, causal fermion systems allow for the description of generalized "quantum spacetimes." The physical picture is that one causal fermion system describes a spacetime together with all structures and objects therein (like the causal and the metric structures, wave functions and quantum fields). In order to single out the physically admissible causal fermion systems, one must formulate physical equations. In analogy to the Lagrangian formulation of classical field theory, the physical equations for causal fermion systems are formulated via a variational principle, the so-called causal action principle. Since one works with different basic objects, the causal action principle has a novel mathematical structure where one minimizes a positive action under variations of the universal measure. The connection to conventional physical equations is obtained in a certain limiting case (the continuum limit) in which the interaction can be described effectively by gauge fields coupled to particles and antiparticles, whereas the Dirac sea is no longer apparent.
== General mathematical setting ==
In this section the mathematical framework of causal fermion systems is introduced.
=== Definition ===
A causal fermion system of spin dimension $n \in \mathbb{N}$ is a triple $(\mathcal{H}, \mathcal{F}, \rho)$ where
$(\mathcal{H}, \langle .|. \rangle_{\mathcal{H}})$ is a complex Hilbert space.
$\mathcal{F}$ is the set of all self-adjoint linear operators of finite rank on $\mathcal{H}$ which (counting multiplicities) have at most $n$ positive and at most $n$ negative eigenvalues.
$\rho$ is a measure on $\mathcal{F}$.
The measure $\rho$ is referred to as the universal measure.
As will be outlined below, this definition is rich enough to encode analogs of the mathematical structures needed to formulate physical theories. In particular, a causal fermion system gives rise to a spacetime together with additional structures that generalize objects like spinors, the metric and curvature. Moreover, it comprises quantum objects like wave functions and a fermionic Fock state.
=== The causal action principle ===
Inspired by the Lagrangian formulation of classical field theory, the dynamics of a causal fermion system is described by a variational principle defined as follows.
Given a Hilbert space $(\mathcal{H}, \langle .|. \rangle_{\mathcal{H}})$ and the spin dimension $n$, the set $\mathcal{F}$ is defined as above. Then for any $x, y \in \mathcal{F}$, the product $xy$ is an operator of rank at most $2n$. It is not necessarily self-adjoint because in general $(xy)^* = yx \neq xy$. We denote the non-trivial eigenvalues of the operator $xy$ (counting algebraic multiplicities) by
$\lambda_1^{xy}, \ldots, \lambda_{2n}^{xy} \in \mathbb{C}.$
Moreover, the spectral weight $|\,.\,|$ is defined by
$|xy| = \sum_{i=1}^{2n} |\lambda_i^{xy}| \qquad \text{and} \qquad \big|(xy)^2\big| = \sum_{i=1}^{2n} |\lambda_i^{xy}|^2.$
The Lagrangian is introduced by
$\mathcal{L}(x,y) = \big|(xy)^2\big| - \frac{1}{2n}\,|xy|^2 = \frac{1}{4n} \sum_{i,j=1}^{2n} \big(|\lambda_i^{xy}| - |\lambda_j^{xy}|\big)^2 \geq 0.$
The causal action is defined by
$\mathcal{S} = \iint_{\mathcal{F} \times \mathcal{F}} \mathcal{L}(x,y)\, d\rho(x)\, d\rho(y).$
The causal action principle is to minimize $\mathcal{S}$ under variations of $\rho$ within the class of (positive) Borel measures under the following constraints:
Boundedness constraint: $\iint_{\mathcal{F} \times \mathcal{F}} |xy|^2\, d\rho(x)\, d\rho(y) \leq C$ for some positive constant $C$.
Trace constraint: $\int_{\mathcal{F}} \text{tr}(x)\, d\rho(x)$ is kept fixed.
The total volume $\rho(\mathcal{F})$ is preserved.
Here on $\mathcal{F} \subset \mathrm{L}(\mathcal{H})$ one considers the topology induced by the $\sup$-norm on the bounded linear operators on $\mathcal{H}$.
The constraints prevent trivial minimizers and ensure existence, provided that $\mathcal{H}$ is finite-dimensional.
This variational principle also makes sense in the case that the total volume $\rho(\mathcal{F})$ is infinite, if one considers variations $\delta\rho$ of bounded variation with $(\delta\rho)(\mathcal{F}) = 0$.
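For a measure supported on finitely many operators, the causal action reduces to a weighted double sum, which can be evaluated directly. The sketch below is a rough illustration under assumptions: the operators, weights, spin dimension and Hilbert-space dimension are toy choices, and no attempt is made to actually minimize the action or to enforce the constraints.

```python
import numpy as np

def lagrangian(x, y, n):
    """Causal Lagrangian L(x,y) built from the eigenvalues of the operator product xy."""
    lam = np.linalg.eigvals(x @ y)
    abs_lam = np.sort(np.abs(lam))[::-1][:2 * n]   # moduli of the 2n non-trivial eigenvalues
    return np.sum(abs_lam ** 2) - np.sum(abs_lam) ** 2 / (2 * n)

def causal_action(points, weights, n):
    """Double sum S = sum_{k,l} w_k w_l L(x_k, x_l) for a measure supported on finitely many operators."""
    return sum(w1 * w2 * lagrangian(x, y, n)
               for x, w1 in zip(points, weights)
               for y, w2 in zip(points, weights))

# Toy data (hypothetical): spin dimension n = 1, operators on C^3 with at most
# one positive and one negative eigenvalue, and a normalized total volume.
rng = np.random.default_rng(1)

def random_point(dim=3):
    v_plus = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    v_minus = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return np.outer(v_plus, v_plus.conj()) - np.outer(v_minus, v_minus.conj())

points = [random_point() for _ in range(4)]
weights = [0.25] * 4
print(causal_action(points, weights, n=1))       # non-negative, as L(x,y) >= 0
```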
== Inherent structures ==
In contemporary physical theories, the word spacetime refers to a Lorentzian manifold $(M, g)$. This means that spacetime is a set of points enriched by topological and geometric structures. In the context of causal fermion systems, spacetime does not need to have a manifold structure. Instead, spacetime $M$ is a set of operators on a Hilbert space (a subset of $\mathcal{F}$). This implies additional inherent structures that correspond to and generalize usual objects on a spacetime manifold.
For a causal fermion system $(\mathcal{H}, \mathcal{F}, \rho)$, we define spacetime $M$ as the support of the universal measure,
$M := \text{supp}\,\rho \subset \mathcal{F}.$
With the topology induced by $\mathcal{F}$, spacetime $M$ is a topological space.
=== Causal structure ===
For $x, y \in M$, we denote the non-trivial eigenvalues of the operator $xy$ (counting algebraic multiplicities) by $\lambda_1^{xy}, \ldots, \lambda_{2n}^{xy} \in \mathbb{C}$. The points $x$ and $y$ are defined to be spacelike separated if all the $\lambda_j^{xy}$ have the same absolute value. They are timelike separated if the $\lambda_j^{xy}$ do not all have the same absolute value and are all real. In all other cases, the points $x$ and $y$ are lightlike separated.
This notion of causality fits together with the "causality" of the above causal action in the sense that if two spacetime points $x, y \in M$ are spacelike separated, then the Lagrangian $\mathcal{L}(x,y)$ vanishes. This corresponds to the physical notion of causality that spatially separated spacetime points do not interact. This causal structure is the reason for the notion "causal" in causal fermion system and causal action.
Let $\pi_x$ denote the orthogonal projection on the subspace $S_x := x(\mathcal{H}) \subset \mathcal{H}$. Then the sign of the functional
$i\,\text{Tr}\big(x\, y\, \pi_x\, \pi_y - y\, x\, \pi_y\, \pi_x\big)$
distinguishes the future from the past. In contrast to the structure of a partially ordered set, the relation "lies in the future of" is in general not transitive. But it is transitive on the macroscopic scale in typical examples.
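The causal structure can be read off numerically from the eigenvalues of the operator product $xy$. The following sketch is an illustrative classification only; the toy operators and the numerical tolerance used to decide whether moduli agree or imaginary parts vanish are ad hoc assumptions needed for floating-point arithmetic.

```python
import numpy as np

def separation(x, y, n, tol=1e-9):
    """Classify the causal relation of two spacetime points x, y (operators in F)."""
    lam = np.linalg.eigvals(x @ y)
    lam = lam[np.argsort(-np.abs(lam))][:2 * n]   # the 2n non-trivial eigenvalues
    moduli = np.abs(lam)
    scale = max(moduli.max(), 1.0)
    same_modulus = np.ptp(moduli) < tol * scale
    all_real = np.all(np.abs(lam.imag) < tol * scale)
    if same_modulus:
        return "spacelike"
    if all_real:
        return "timelike"
    return "lightlike"

# Hypothetical toy operators with one positive and one negative eigenvalue (n = 1).
rng = np.random.default_rng(2)
a = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
b = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
x = a @ np.diag([1.0, -1.0]) @ a.conj().T
y = b @ np.diag([1.0, -1.0]) @ b.conj().T
print(separation(x, y, n=1))                      # "spacelike", "timelike" or "lightlike"
```

Spacelike separated pairs have a vanishing Lagrangian $\mathcal{L}(x,y)$ and therefore do not contribute to the causal action, in line with the discussion above.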
=== Spinors and wave functions ===
For every $x \in M$ the spin space is defined by $S_x = x(\mathcal{H})$; it is a subspace of $\mathcal{H}$ of dimension at most $2n$. The spin scalar product $\prec \cdot | \cdot \succ_x$ defined by
$\prec u | v \succ_x = -\langle u | x v \rangle_{\mathcal{H}} \qquad \text{for all } u, v \in S_x$
is an indefinite inner product on $S_x$ of signature $(p, q)$ with $p, q \leq n$.
A wave function $\psi$ is a mapping
$\psi : M \rightarrow \mathcal{H} \qquad \text{with} \qquad \psi(x) \in S_x \quad \text{for all } x \in M.$
On wave functions for which the norm $|\!|\!| \cdot |\!|\!|$ defined by
$|\!|\!| \psi |\!|\!|^2 = \int_M \big\langle \psi(x) \,\big|\, |x|\, \psi(x) \big\rangle_{\mathcal{H}}\, d\rho(x)$
is finite (where $|x| = \sqrt{x^2}$ is the absolute value of the symmetric operator $x$), one can define the inner product
$<\psi | \phi> = \int_M \prec \psi(x) | \phi(x) \succ_x\, d\rho(x).$
Together with the topology induced by the norm $|\!|\!| \cdot |\!|\!|$, one obtains a Krein space $(\mathcal{K}, <\cdot|\cdot>)$.
To any vector $u \in \mathcal{H}$ we can associate the wave function
$\psi^u(x) := \pi_x u$
(where $\pi_x : \mathcal{H} \rightarrow S_x$ is again the orthogonal projection to the spin space). This gives rise to a distinguished family of wave functions, referred to as the wave functions of the occupied states.
=== The fermionic projector ===
The kernel of the fermionic projector $P(x,y)$ is defined by
$P(x,y) = \pi_x\, y|_{S_y} : S_y \rightarrow S_x$
(where $\pi_x : \mathcal{H} \rightarrow S_x$ is again the orthogonal projection on the spin space, and $|_{S_y}$ denotes the restriction to $S_y$). The fermionic projector $P$ is the operator
$P : \mathcal{K} \rightarrow \mathcal{K}, \qquad (P\psi)(x) = \int_M P(x,y)\, \psi(y)\, d\rho(y),$
which has the dense domain of definition given by all vectors $\psi \in \mathcal{K}$ satisfying the conditions
$\phi := \int_M x\, \psi(x)\, d\rho(x) \in \mathcal{H} \quad \text{and} \quad |\!|\!| \phi |\!|\!| < \infty.$
As a consequence of the causal action principle, the kernel of the fermionic projector has additional normalization properties which justify the name projector.
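As a rough numerical illustration, the kernel $P(x,y)$ can be represented as a matrix once bases of the spin spaces $S_x$ and $S_y$ are chosen. In the sketch below, the bases are taken as eigenvectors of $x$ and $y$ with non-vanishing eigenvalues; this basis choice, the toy operators, and the dimensions are assumptions made purely for illustration.

```python
import numpy as np

def spin_space_basis(x, tol=1e-10):
    """Orthonormal basis (as columns) of the spin space S_x = x(H), the image of x."""
    vals, vecs = np.linalg.eigh(x)
    return vecs[:, np.abs(vals) > tol]

def fermionic_projector_kernel(x, y):
    """Matrix of P(x,y) = pi_x y|_{S_y} : S_y -> S_x in the chosen bases,
    where pi_x is the orthogonal projection onto S_x."""
    basis_x = spin_space_basis(x)
    basis_y = spin_space_basis(y)
    return basis_x.conj().T @ (y @ basis_y)

# Hypothetical toy operators on C^4, each with one positive and one negative eigenvalue.
rng = np.random.default_rng(3)
def toy_operator(dim=4):
    a = rng.standard_normal((dim, 2)) + 1j * rng.standard_normal((dim, 2))
    return a @ np.diag([1.0, -1.0]) @ a.conj().T

x, y = toy_operator(), toy_operator()
print(fermionic_projector_kernel(x, y).shape)     # (2, 2): maps S_y into S_x
```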
=== Connection and curvature ===
Being an operator from one spin space to another, the kernel of the fermionic projector gives relations between different spacetime points. This fact can be used to introduce a spin connection
$D_{x,y} : S_y \rightarrow S_x \quad \text{unitary}.$
The basic idea is to take a polar decomposition of $P(x,y)$. The construction becomes more involved by the fact that the spin connection should induce a corresponding metric connection
$\nabla_{x,y} : T_y \rightarrow T_x \quad \text{isometric},$
where the tangent space $T_x$ is a specific subspace of the linear operators on $S_x$ endowed with a Lorentzian metric.
The spin curvature is defined as the holonomy of the spin connection,
$\mathfrak{R}(x,y,z) = D_{x,y}\, D_{y,z}\, D_{z,x} : S_x \rightarrow S_x.$
Similarly, the metric connection gives rise to metric curvature. These geometric structures give rise to a proposal for a quantum geometry.
=== The Euler–Lagrange equations and the linearized field equations ===
A minimizer $\rho$ of the causal action satisfies corresponding Euler–Lagrange equations. They state that the function $\ell_\kappa$ defined by
$\ell_\kappa(x) := \int_M \big(\mathcal{L}_\kappa(x,y) + \kappa\, |xy|^2\big)\, d\rho(y) - \mathfrak{s}$
(with two Lagrange parameters $\kappa$ and $\mathfrak{s}$) vanishes and is minimal on the support of $\rho$,
$\ell_\kappa|_M \equiv \inf_{x \in \mathcal{F}} \ell_\kappa(x) = 0.$
For the analysis, it is convenient to introduce jets $\mathfrak{u} := (a, u)$ consisting of a real-valued function $a$ on $M$ and a vector field $u$ on $T\mathcal{F}$ along $M$, and to denote the combination of multiplication and directional derivative by
$\nabla_{\mathfrak{u}} g(x) := a(x)\, g(x) + \big(D_u g\big)(x).$
Then the Euler–Lagrange equations imply that the weak Euler–Lagrange equations
$\nabla_{\mathfrak{u}} \ell|_M = 0$
hold for any test jet $\mathfrak{u}$.
Families of solutions of the Euler–Lagrange equations are generated infinitesimally by a jet $\mathfrak{v}$ which satisfies the linearized field equations
$\langle \mathfrak{u}, \Delta \mathfrak{v} \rangle|_M = 0,$
to be satisfied for all test jets $\mathfrak{u}$, where the Laplacian $\Delta$ is defined by
$\langle \mathfrak{u}, \Delta \mathfrak{v} \rangle(x) := \nabla_{\mathfrak{u}} \bigg( \int_M \big( \nabla_{1,\mathfrak{v}} + \nabla_{2,\mathfrak{v}} \big) \mathcal{L}(x,y)\, d\rho(y) - \nabla_{\mathfrak{v}} \mathfrak{s} \bigg).$
The Euler–Lagrange equations describe the dynamics of the causal fermion system, whereas small perturbations of the system are described by the linearized field equations.
=== Conserved surface layer integrals ===
In the setting of causal fermion systems, spatial integrals are expressed by so-called surface layer integrals. In general terms, a surface layer integral is a double integral of the form
$\int_\Omega \bigg( \int_{M \setminus \Omega} \cdots\, \mathcal{L}(x,y)\, d\rho(y) \bigg)\, d\rho(x),$
where one variable is integrated over a subset $\Omega \subset M$, and the other variable is integrated over the complement of $\Omega$. It is possible to express the usual conservation laws for charge, energy, ... in terms of surface layer integrals. The corresponding conservation laws are a consequence of the Euler–Lagrange equations of the causal action principle and the linearized field equations. For the applications, the most important surface layer integrals are the current integral $\gamma_\rho^\Omega(\mathfrak{v})$, the symplectic form $\sigma_\rho^\Omega(\mathfrak{u}, \mathfrak{v})$, the surface layer inner product $\langle \mathfrak{u}, \mathfrak{v} \rangle_\rho^\Omega$ and the nonlinear surface layer integral $\gamma^\Omega(\tilde{\rho}, \rho)$.
=== Bosonic Fock space dynamics ===
Based on the conservation laws for the above surface layer integrals, the dynamics of a causal fermion system as described by the Euler–Lagrange equations corresponding to the causal action principle can be rewritten as a linear, norm-preserving dynamics on the bosonic Fock space built up of solutions of the linearized field equations. In the so-called holomorphic approximation, the time evolution respects the complex structure, giving rise to a unitary time evolution on the bosonic Fock space.
=== A fermionic Fock state ===
If $\mathcal{H}$ has finite dimension $f$, choosing an orthonormal basis $u_1, \ldots, u_f$ of $\mathcal{H}$ and taking the wedge product of the corresponding wave functions
$\big(\psi^{u_1} \wedge \cdots \wedge \psi^{u_f}\big)(x_1, \ldots, x_f)$
gives a state of an $f$-particle fermionic Fock space. Due to the total anti-symmetrization, this state depends on the choice of the basis of $\mathcal{H}$ only by a phase factor. This correspondence explains why the vectors in the particle space are to be interpreted as fermions. It also motivates the name causal fermion system.
== Underlying physical principles ==
Causal fermion systems incorporate several physical principles in a specific way:
A local gauge principle: In order to represent the wave functions in components, one chooses bases of the spin spaces. Denoting the signature of the spin scalar product at $x$ by $(\mathfrak{p}_x, \mathfrak{q}_x)$, a pseudo-orthonormal basis $(\mathfrak{e}_\alpha(x))_{\alpha = 1, \ldots, \mathfrak{p}_x + \mathfrak{q}_x}$ of $S_x$ is given by
$\prec \mathfrak{e}_\alpha | \mathfrak{e}_\beta \succ = s_\alpha\, \delta_{\alpha\beta} \quad \text{with} \quad s_1, \ldots, s_{\mathfrak{p}_x} = 1, \;\; s_{\mathfrak{p}_x + 1}, \ldots, s_{\mathfrak{p}_x + \mathfrak{q}_x} = -1.$
Then a wave function $\psi$ can be represented with component functions,
$\psi(x) = \sum_{\alpha = 1}^{\mathfrak{p}_x + \mathfrak{q}_x} \psi^\alpha(x)\, \mathfrak{e}_\alpha(x).$
The freedom of choosing the bases $(\mathfrak{e}_\alpha(x))$ independently at every spacetime point corresponds to local unitary transformations of the wave functions,
$\psi^\alpha(x) \rightarrow \sum_{\beta = 1}^{\mathfrak{p}_x + \mathfrak{q}_x} U(x)^\alpha_\beta\, \psi^\beta(x) \quad \text{with} \quad U(x) \in \text{U}(\mathfrak{p}_x, \mathfrak{q}_x).$
These transformations have the interpretation as local gauge transformations. The gauge group is determined to be the isometry group of the spin scalar product. The causal action is gauge invariant in the sense that it does not depend on the choice of spinor bases.
The equivalence principle: For an explicit description of spacetime one must work with local coordinates. The freedom in choosing such coordinates generalizes the freedom in choosing general reference frames in a spacetime manifold. Therefore, the equivalence principle of general relativity is respected. The causal action is generally covariant in the sense that it does not depend on the choice of coordinates.
The Pauli exclusion principle: The fermionic Fock state associated to the causal fermion system makes it possible to describe the many-particle state by a totally antisymmetric wave function. This gives agreement with the Pauli exclusion principle.
The principle of causality is incorporated by the form of the causal action in the sense that spacetime points with spacelike separation do not interact.
== Limiting cases ==
Causal fermion systems have mathematically sound limiting cases that give a connection to conventional physical structures.
=== Lorentzian spin geometry of globally hyperbolic spacetimes ===
Starting on any globally hyperbolic Lorentzian spin manifold $(\hat{M}, g)$ with spinor bundle $S\hat{M}$, one gets into the framework of causal fermion systems by choosing $(\mathcal{H}, \langle .|. \rangle_{\mathcal{H}})$ as a subspace of the solution space of the Dirac equation. Defining the so-called local correlation operator $F(p)$ for $p \in \hat{M}$ by
$\langle \psi | F(p) \phi \rangle_{\mathcal{H}} = -\prec \psi | \phi \succ_p$
(where $\prec \psi | \phi \succ_p$ is the inner product on the fibre $S_p\hat{M}$) and introducing the universal measure as the push-forward of the volume measure on $\hat{M}$,
$\rho = F_* d\mu,$
one obtains a causal fermion system. For the local correlation operators to be well-defined, $\mathcal{H}$ must consist of continuous sections, typically making it necessary to introduce a regularization on the microscopic scale $\varepsilon$. In the limit $\varepsilon \searrow 0$, all the intrinsic structures on the causal fermion system (like the causal structure, connection and curvature) go over to the corresponding structures on the Lorentzian spin manifold. Thus the geometry of spacetime is encoded completely in the corresponding causal fermion systems.
=== Quantum mechanics and classical field equations ===
The Euler–Lagrange equations corresponding to the causal action principle have a well-defined limit if the spacetimes $M := \text{supp}\,\rho$ of the causal fermion systems go over to Minkowski space. More specifically, one considers a sequence of causal fermion systems (for example with $\mathcal{H}$ finite-dimensional in order to ensure the existence of the fermionic Fock state as well as of minimizers of the causal action), such that the corresponding wave functions go over to a configuration of interacting Dirac seas involving additional particle states or "holes" in the seas. This procedure, referred to as the continuum limit, gives effective equations having the structure of the Dirac equation coupled to classical field equations. For example, for a simplified model involving three elementary fermionic particles in spin dimension two, one obtains an interaction via a classical axial gauge field $A$ described by the coupled Dirac and Yang–Mills equations
$(i\partial\!\!\!/ + \gamma^5 A\!\!\!/ - m)\psi = 0$
$C_0 \big(\partial^k_j A^j - \Box A^k\big) - C_2 A^k = 12\pi^2\, \bar{\psi}\gamma^5\gamma^k\psi.$
Taking the non-relativistic limit of the Dirac equation, one obtains the Pauli equation or the Schrödinger equation, giving the correspondence to quantum mechanics. Here $C_0$ and $C_2$ depend on the regularization and determine the coupling constant as well as the rest mass.
Likewise, for a system involving neutrinos in spin dimension 4, one gets effectively a massive $SU(2)$ gauge field coupled to the left-handed component of the Dirac spinors. The fermion configuration of the standard model can be described in spin dimension 16.
=== The Einstein field equations ===
For the just-mentioned system involving neutrinos, the continuum limit also yields the Einstein field equations coupled to the Dirac spinors,
$R_{jk} - \frac{1}{2}\, R\, g_{jk} + \Lambda\, g_{jk} = \kappa\, T_{jk}[\Psi, A],$
up to corrections of higher order in the curvature tensor. Here the cosmological constant $\Lambda$ is undetermined, and $T_{jk}$ denotes the energy-momentum tensor of the spinors and the $SU(2)$ gauge field. The gravitational constant $\kappa$ depends on the regularization length.
=== Quantum field theory in Minkowski space ===
Starting from the coupled system of equations obtained in the continuum limit and expanding in powers of the coupling constant, one obtains integrals which correspond to Feynman diagrams on the tree level. Fermionic loop diagrams arise due to the interaction with the sea states, whereas bosonic loop diagrams appear when taking averages over the microscopic (in general non-smooth) spacetime structure of a causal fermion system (so-called microscopic mixing). The detailed analysis and comparison with standard quantum field theory is work in progress.
== References ==
== Further reading ==
Web platform on causal fermion systems | Wikipedia/Causal_fermion_systems |
Milieu control is a term popularized by psychiatrist Robert Jay Lifton to describe tactics that control environment and human communication through the use of social pressure and group language. This includes tactics such as dogma, protocols, innuendo, slang, and pronunciation, which enables group members to identify other members, or to promote cognitive changes in individuals. Lifton originally used "milieu control" to describe brainwashing and mind control, but the term has since been applied to other contexts.
== Background ==
Milieu control involves the control of communication within a group environment, which may (or may not) also result in a significant degree of isolation from surrounding society. When non-group members, or outsiders, are labeled as less valuable on no basis other than group-supported and group-reinforced prejudice, group members may come to regard themselves as intellectually superior. This shuts out alternative points of view and becomes a self-fulfilling prophecy in which members automatically devalue outsiders and their ideas, without logical rationale for doing so. Additionally, milieu control "includes other techniques to restrict members' contact with the outside world and to be able to make critical, rational, judgments about information."
== See also ==
Politico-media complex
== References ==
== External links ==
"Robert Jay Lifton's eight criteria of thought reform as applied to the Executive Success Programs", (2013-10-03)
Attacks on Peripheral versus Central Elements of Self and the Impact of Thought Reforming Technique at the Wayback Machine (archived 2002-11-04)
Cognitive Impairment in Thought Reform Environments at the Wayback Machine (archived 2011-05-14)
International Cultic Studies Association | Wikipedia/Milieu_control |
Industrialisation (UK) or industrialization (US) is the period of social and economic change that transforms a human group from an agrarian society into an industrial society. This involves an extensive reorganisation of an economy for the purpose of manufacturing. Industrialisation is associated with increase of polluting industries heavily dependent on fossil fuels. With the increasing focus on sustainable development and green industrial policy practices, industrialisation increasingly includes technological leapfrogging, with direct investment in more advanced, cleaner technologies.
The reorganisation of the economy has many unintended consequences both economically and socially. As industrial workers' incomes rise, markets for consumer goods and services of all kinds tend to expand and provide a further stimulus to industrial investment and economic growth. Moreover, family structures tend to shift as extended families tend to no longer live together in one household, location or place.
== Background ==
The first transformation from an agricultural to an industrial economy is known as the Industrial Revolution and took place from the mid-18th to early 19th century. It began in Great Britain, spreading to Belgium, Switzerland, Germany, and France and eventually to other areas in Europe and North America. Characteristics of this early industrialisation were technological progress, a shift from rural work to industrial labour, and financial investments in new industrial structures. Later commentators have called this the First Industrial Revolution.
The "Second Industrial Revolution" labels the later changes that came about in the mid-19th century after the refinement of the steam engine, the invention of the internal combustion engine, the harnessing of electricity and the construction of canals, railways, and electric-power lines. The invention of the assembly line gave this phase a boost. Coal mines, steelworks, and textile factories replaced homes as the place of work.
By the end of the 20th century, East Asia had become one of the most recently industrialised regions of the world.
There is considerable literature on the factors facilitating industrial modernisation and enterprise development.
== Social consequences ==
The Industrial Revolution was accompanied by significant changes in the social structure, the main change being a transition from farm work to factory-related activities. This resulted in the concept of social class, i.e., hierarchical social status defined by an individual's economic power. It changed the family system, as most people moved into cities and extended families living apart became more common. The movement from less dense agricultural areas into denser urban areas consequently increased the transmission of diseases. The place of women in society shifted from that of primary caregivers to that of breadwinners, thus reducing the number of children per household. Furthermore, industrialisation contributed to increased cases of child labour and, thereafter, to the development of education systems.
=== Urbanisation ===
As the Industrial Revolution was a shift from the agrarian society, people migrated from villages in search of jobs to places where factories were established. This shifting of rural people led to urbanisation and an increase in the population of towns. The concentration of labour in factories has increased urbanisation and the size of settlements, to serve and house the factory workers.
=== Exploitation ===
=== Changes in family structure ===
Family structure changes with industrialisation. Sociologist Talcott Parsons noted that in pre-industrial societies there is an extended family structure spanning many generations, which probably remained in the same location for generations. In industrialised societies the nuclear family, consisting of only parents and their growing children, predominates. Families and children reaching adulthood are more mobile and tend to relocate to where jobs exist. Extended family bonds become more tenuous. One of the most important criticisms of industrialisation is that it kept children away from home for many hours and used them as cheap workers in factories.
== Industrialisation in East Asia ==
Between the early 1960s and 1990s, the Four Asian Tigers underwent rapid industrialisation and maintained exceptionally high growth rates.
== Current situation ==
As of 2018, the international development community (the World Bank, the Organisation for Economic Co-operation and Development (OECD), many United Nations departments, the FAO, WHO, ILO and UNESCO) endorses development policies like water purification or primary education and co-operation amongst third world communities. Some members of the economic communities do not consider contemporary industrialisation policies as being adequate to the global south (Third World countries) or beneficial in the longer term, with the perception that they may only create inefficient local industries unable to compete in the free-trade dominated political order which industrialisation has fostered. Environmentalism and Green politics may represent more visceral reactions to industrial growth. Nevertheless, repeated examples in history of apparently successful industrialisation (Britain, Soviet Union, South Korea, China, etc.) may make conventional industrialisation seem like an attractive or even natural path forward, especially as populations grow, consumerist expectations rise and agricultural opportunities diminish.
The relationships among economic growth, employment, and poverty reduction are complex, and higher productivity can sometimes lead to static or even lower employment (see jobless recovery).
There are differences across sectors, whereby manufacturing is less able than the tertiary sector to accommodate both increased productivity and employment opportunities; more than 40% of the world's employees are "working poor", whose incomes fail to keep themselves and their families above the $2-a-day poverty line. There is also a phenomenon of deindustrialisation, as in the former USSR countries' transition to market economies, and the agriculture sector is often the key sector in absorbing the resultant unemployment.
== See also ==
== References ==
== Further reading ==
Ahmady, Kameel (2021). Traces of Exploitation in the World of Childhood (A Comprehensive Research on Forms, Causes and Consequences of Child Labour in Iran). Denmark: Avaye Buf. ISBN 9788793926646.
Chandler Jr., Alfred D. (1993). The Visible Hand: The Management Revolution in American Business. Belknap Press of Harvard University Press. ISBN 978-0674940529.
Hewitt, T., Johnson, H. and Wield, D. (Eds) (1992) industrialisation and Development, Oxford University Press: Oxford.
Hobsbawm, Eric (1962): The Age of Revolution. Abacus.
Kemp, Tom (1993) Historical Patterns of Industrialisation, Longman: London. ISBN 0-582-09547-6
Kiely, R (1998) industrialisation and Development: A comparative analysis, UCL Press:London.
Landes, David. S. (1969). The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. Cambridge, New York: Press Syndicate of the University of Cambridge. ISBN 0-521-09418-6.
Pomeranz, Ken (2001)The Great Divergence: China, Europe and the Making of the Modern World Economy (Princeton Economic History of the Western World) by (Princeton University Press; New Ed edition, 2001)
Tilly, Richard H.: Industrialization as an Historical Process, European History Online, Main: Institute of European History, 2010, retrieved: 29 February 2011.
== External links == | Wikipedia/Industrialisation |
Forced conversion is the adoption of a religion or irreligion under duress. Someone who has been forced to convert to a different religion or irreligion may continue, covertly, to adhere to the beliefs and practices which were originally held, while outwardly behaving as a convert. Crypto-Jews, Crypto-Christians, Crypto-Muslims, Crypto-Hindus and Crypto-Pagans are historical examples of the latter.
== Religion and proselytization ==
The religions of the world are divided into two groups: those that actively seek new followers (missionary religions) and those that do not (non-missionary religions). This classification dates back to a lecture given by Max Müller in 1873, and is based on whether or not a religion seeks to gain new converts. The three main religions classified as missionary religions are Christianity, Islam, and Buddhism, while the non-missionary religions include Judaism, Hinduism, and Zoroastrianism. Other religions, such as Primal Religions, Confucianism, and Taoism, may also be considered non-missionary religions.
== Religion and power ==
In general, anthropologists have shown that the relationship between religion and politics is complex, especially when it is viewed over the expanse of human history.
While religious leaders and the state generally have different aims, both are concerned about power and order; both use reason and emotion to motivate behavior. Throughout history, leaders of religious and political institutions have cooperated, opposed one another, and/or attempted to co-opt each other, for purposes which are both noble and base, and they have implemented programs with a wide range of driving values, from compassion, which is aimed at alleviating current suffering, to brutal change, which is aimed at achieving long-term goals, for the benefit of groups which have ranged from small cliques to all of humanity. The relationship is far from simple. But religion has frequently been used in a coercive manner, and it has also used coercion.
== Buddhism ==
People may express their faith through the act of taking refuge, and conversions usually require people to recite their acceptance of the Triple Gem of Buddhism. However, they may always practice Buddhism without fully abandoning their own religion. According to the Chin Human Rights Organisation (CHRO), Christians from the Chin ethnic minority group in Myanmar are facing coercion to convert to Buddhism by state actors and state programmes.
== Christianity ==
Christianity was a minority religion during much of the middle Roman Classical Period, and the early Christians were persecuted during that time. When Constantine I converted to Christianity, it had already grown to be the dominant religion of the Roman Empire. Already under the reign of Constantine I, Christian heretics were being persecuted; beginning in the late 4th century, the ancient pagan religions were also actively suppressed. In the view of many historians, the Constantinian shift turned Christianity from a persecuted religion into a religion which was capable of persecuting and sometimes eager to persecute.
=== Late Antiquity ===
On 27 February 380, together with Gratian and Valentinian II, Theodosius I issued the decree Cunctos populos, the so-called Edict of Thessalonica, recorded in the Codex Theodosianus xvi.1.2. This declared Trinitarian Nicene Christianity to be the only legitimate imperial religion and the only one entitled to call itself Catholic. Other Christians he described as "foolish madmen". He also ended official state support for the traditional polytheist religions and customs.
The Codex Theodosianus (Eng. Theodosian Code) was a compilation of the laws of the Roman Empire under the Christian emperors since 312. A commission was established by Theodosius II and his co-emperor Valentinian III on 26 March 429 and the compilation was published by a constitution of 15 February 438. It went into force in the eastern and western parts of the empire on 1 January 439.
It is Our will that all the peoples who are ruled by the administration of Our Clemency shall practice that religion which the divine Peter the Apostle transmitted to the Romans.... The rest, whom We adjudge demented and insane, shall sustain the infamy of heretical dogmas, their meeting places shall not receive the name of churches, and they shall be smitten first by divine vengeance and secondly by the retribution of Our own initiative (Codex Theodosianus XVI 1.2.).
Forced conversions of Jews were carried out with the support of rulers during Late Antiquity and the early Middle Ages in Gaul, the Iberian Peninsula and in the Byzantine Empire.
In Gregory of Tours' writing, he claimed that the Vandals attempted to force all Spanish Catholics to become Arian Christians during their rule in Spain. Gregory also recounted episodes of forced conversion of Jews by Chilperic I and Avitus of Clermont.
=== Medieval western Europe ===
During the Saxon Wars, Charlemagne, King of the Franks, forcibly converted the Saxons from their native Germanic paganism by way of warfare, and law upon conquest. Examples are the Massacre of Verden in 782, when Charlemagne reportedly had 4,500 captive Saxons massacred for rebelling, and the Capitulatio de partibus Saxoniae, a law imposed on conquered Saxons in 785, after another rebellion and destruction of churches and killing of missionary priests and monks, that prescribed death to those who refused to convert to Christianity.
Forced conversion that occurred after the seventh century generally took place during riots and massacres carried out by mobs and clergy without support of the rulers. In contrast, royal persecutions of Jews from the late eleventh century onward generally took the form of expulsions, with some exceptions, such as conversions of Jews in southern Italy of the 13th century, which were carried out by Dominican Inquisitors but instigated by King Charles II of Naples.
Jews were forced to convert to Christianity by the Crusaders in Lorraine, on the Lower Rhine, in Bavaria and Bohemia, in Mainz and in Worms (see Rhineland massacres, Worms massacre (1096)).
Though he strongly condemned and prohibited forced conversion and baptism by decree, Pope Innocent III suggested in a private letter to a bishop in 1201 that those who agreed to be baptized to avoid torture and intimidation might be compelled to outwardly observe Christianity:
[T]hose who are immersed even though reluctant, do belong to ecclesiastical jurisdiction at least by reason of the sacrament, and might therefore be reasonably compelled to observe the rules of the Christian Faith. It is, to be sure, contrary to the Christian Faith that anyone who is unwilling and wholly opposed to it should be compelled to adopt and observe Christianity. For this reason a valid distinction is made by some between kinds of unwilling ones and kinds of compelled ones. Thus one who is drawn to Christianity by violence, through fear and through torture, and receives the sacrament of Baptism in order to avoid loss, he (like one who comes to Baptism in dissimulation) does receive the impress of Christianity, and may be forced to observe the Christian Faith as one who expressed a conditional willingness though, absolutely speaking, he was unwilling ...
During the 12th–13th century Northern Crusades against the pagan Finnic, Baltic, and West Slavic peoples around the Baltic Sea forced conversions were a widely used tactic, which received papal sanction. These tactics were first adopted during the Wendish Crusade and became more widespread during the Livonian Crusade and Prussian Crusade, in which tactics included killing hostages, massacre, and devastation of the lands of tribes that had not yet submitted. Most of the populations of these regions were converted only after the repeated rebellion of native populations that did not want to accept Christianity even after initial forced conversion; in Old Prussia, the tactics employed in the initial conquest and subsequent conversion of the territory resulted in the death of most of the native population, whose language consequently became extinct.
=== Early modern Iberian peninsula ===
After the end of Islamic control of Spain, Jews were expelled from Spain in 1492. In Portugal, following an order for their expulsion in 1496, only a handful of them were allowed to leave and the rest of them were forced to convert. Muslims were expelled from Portugal in 1497, and they were gradually forced to convert in the constituent kingdoms of Spain. The forced conversion of Muslims was implemented in the Crown of Castile from 1500 to 1502 and it was implemented in the Crown of Aragon in the 1520s. After the conversions, the so-called "New Christians" were those inhabitants (Sephardic Jews or Mudéjar Muslims) who were baptized under coercion as well as in the face of execution, becoming forced converts from Islam (Moriscos, Conversos and "secret Moors") or converts from Judaism (Conversos, Crypto-Jews and Marranos).
After the forced conversions, when all former Muslims and Jews had ostensibly become Catholic, the Spanish and Portuguese Inquisitions primarily targeted forced converts from Judaism and Islam, who came under suspicion, because they were either accused of continuing to adhere to their old religion, or they were accused of falling back into it. Jewish conversos who still resided in Spain and frequently practiced Judaism in secret were suspected of being Crypto-Jews by the "Old Christians". The Spanish Inquisition generated much wealth and income for the church and individual inquisitors by confiscating the property of the persecuted. The end of Al-Andalus and the expulsion of the Sephardic Jews from the Iberian Peninsula went hand in hand with the increasing amount of Spanish and Portuguese influence in the world, influence which was exemplified by the Christian conquest of the aboriginal Indian populations of the Americas. The Ottoman Empire and Morocco absorbed most of the Jewish and Muslim refugees, but a large majority of them remained in Spain and Portugal by choosing to be Conversos.
=== European wars of religion ===
The Peace of Augsburg (1555), signed by Charles V, Holy Roman Emperor, stated that German princes could choose the religion (Lutheranism or Catholicism) of their realms according to their conscience (the principle of cuius regio, eius religio). Subjects, citizens, or residents were generally forced to convert to their prince's religion, through a principle called ius reformandi. Those who did not wish to conform to the prince's choice were given a grace period in which they were free to emigrate to different regions in which their desired religion had been accepted. However, serfs were essentially excluded from this right to emigrate.
After the defeat of the rebellious Protestant Estates of the Kingdom of Bohemia by the Habsburg monarchy at the Battle of White Mountain in 1620, the Habsburgs introduced a Counter-Reformation and forcibly converted all Bohemians, even the Utraquist Hussites, back to the Catholic Church. In 1624, Emperor Ferdinand II issued a patent that allowed only the Catholic religion in Bohemia. In the 1620s, Protestant nobility, burghers, and clergy of Bohemia and Austria were expelled from the Habsburg lands or converted to Catholicism, while peasants were forced to adopt the religion of their new Catholic masters.
The Dragonnades was a policy implemented by Louis XIV in 1681 to force French Protestants known as Huguenots to convert to Catholicism. The dragonnades caused Protestants to flee France, even before the Edict of Fontainebleau of 1685 revoked the religious rights granted them by the Edict of Nantes.
=== Colonial Americas ===
During the European colonization of the Americas, forced conversion of the continents' indigenous, non-Christian population was common, especially in South America and Mesoamerica, where the conquest of large indigenous polities like the Inca and Aztec Empires placed colonizers in control of large non-Christian populations. According to some South American leaders and indigenous groups, there were cases among native populations of conversion under the threat of violence, often because they were compelled to after being conquered, and that the Catholic Church cooperated with civil authority to achieve this end.
=== Russia ===
Upon converting to Christianity in the 10th century, Vladimir the Great, the ruler of Kievan Rus', ordered Kiev's citizens to undergo a mass baptism in the Dnieper river.
In the 13th century the pagan populations of the Baltics faced campaigns of forcible conversion by crusading knight corps such as the Livonian Brothers of the Sword and the Teutonic Order, which often meant simply dispossessing these populations of their lands and property.
After Ivan the Terrible's conquest of the Khanate of Kazan, the Muslim population faced slaughter, expulsion, forced resettlement and conversion to Christianity.
In the 18th century, Elizabeth of Russia launched a campaign of forced conversion of Russia's non-Orthodox subjects, including Muslims and Jews.
=== Goa Inquisition ===
The Portuguese carried out the Christianisation of Goa in India in the 16th and 17th centuries. The majority of the natives of Goa had converted to Christianity by the end of the 16th century. The Portuguese rulers had implemented state policies encouraging and even rewarding conversions among Hindu subjects. The rapid rise of converts in Goa was mostly the result of Portuguese economic and political control over the Hindus, who were vassals of the Portuguese crown.
In 1567, the conversion of the majority of the native villagers to Christianity allowed the Portuguese to destroy temples in Bardez, with 300 Hindu temples destroyed. Prohibitions were then declared from December 4, 1567, on public performances of Hindu marriages, sacred thread wearing and cremation. All persons above 15 years of age were compelled to listen to Christian preaching, failing which they were punished. In 1583, Hindu temples at Assolna and Cuncolim were also destroyed by the Portuguese army after the majority of the native villagers there had also converted to Christianity.
"The fathers of the Church forbade the Hindus under terrible penalties the use of their own sacred books, and prevented them from all exercise of their religion. They destroyed their temples, and so harassed and interfered with the people that they abandoned the city in large numbers, refusing to remain any longer in a place where they had no liberty, and were liable to imprisonment, torture and death if they worshiped after their own fashion the gods of their fathers", wrote Filippo Sassetti, who was in India from 1578 to 1588.
=== Papal States ===
In 1858, Edgardo Mortara was taken from his Jewish parents and raised as a Catholic, because he had been baptized by a maid without his parents' consent or knowledge. This incident was called the Mortara case.
=== Serbs during World War II in Yugoslavia ===
During World War II in Yugoslavia, Orthodox Serbs were forcibly converted to Catholicism by the Ustashe.
== Hinduism ==
Indian Christians have alleged that Hindu groups in southern Chhattisgarh have forced Christian converts from Hinduism to revert to Hinduism. In the aftermath of the violence, American Christian evangelical groups have claimed that Hindu groups are forcibly reverting Christian converts from Hinduism back to Hinduism. It has also been alleged that these same Hindu groups have used allurements to convert poor Muslims and Christians to Hinduism against their will.
Apart from the incidents in Chhattisgarh, there are other reports of forced conversions of Christians and Muslims in India to Hinduism. Some of them were converted under duress or against their will, specifically through the ghar wapsi scheme by Hindu extremists, such as Shiv Sena, the VHP & also by the political party of the BJP. The Shiv Sena has said that India or Hindustan is not the homeland of Muslims and Christians. Hindu extremist groups like the Hindu Mahasabha, have gone so far as to call for massacres and forced sterilisations, of religious minorities, particularly the Muslims, that have not done ghar wapsi ("returned home") to Hinduism.
== Islam ==
=== Against Christians ===
After the Arab conquests, a number of Christian Arab tribes suffered enslavement and forced conversion.
The Teaching of Jacob, written soon after the death of Muhammad, is one of the earliest records on Islam and "implies that Muslims tried, on threat of death to make Christians abjure Christianity and accept Islam.”
=== Jizya and conversion ===
Non-Muslims were required to pay the jizya while pagans were either required to accept Islam, pay the jizya, be exiled, or be killed, depending on which of the four main schools of Islamic law their conqueror followed. Some historians believe that forced conversion was rare in early Islamic history, and most conversions to Islam were voluntary. Muslim rulers were often more interested in conquest than conversion. Ira Lapidus points towards "interwoven terms of political and economic benefits and of a sophisticated culture and religion" as appealing to the masses. He writes that:
The question of why people convert to Islam has always generated the intense feeling. Earlier generations of European scholars believed that conversions to Islam were made at the point of the sword, and that conquered peoples were given the choice of conversion or death. It is now apparent that conversion by force, while not unknown in Muslim countries, was, in fact, rare. Muslim conquerors ordinarily wished to dominate rather than convert, and most conversions to Islam were voluntary. (...) In most cases, worldly and spiritual motives for conversion blended together. Moreover, conversion to Islam did not necessarily imply a complete turning from an old to a totally new life. While it entailed the acceptance of new religious beliefs and membership in a new religious community, most converts retained a deep attachment to the cultures and communities from which they came.
Muslim scholars like Abu Hanifa and Abu Yusuf stated that the jizya tax should be paid by non-Muslims (kuffar) regardless of their religion, but some earlier and later Muslim jurists did not permit non-Muslims who were not People of the Book, or Ahle-Kitab (Jews, Christians, Sabians), to pay the jizya. Instead, they only allowed them (non-Ahle-Kitab) to avoid death by choosing to convert to Islam. Of the four schools of Islamic jurisprudence, the Hanafi and Maliki schools allow polytheists to be granted dhimmi status, except Arab polytheists. However, the Shafi'i, Hanbali and Zahiri schools only consider Christians, Jews, and Sabians to be eligible to belong to the dhimmi category.
Wael Hallaq states that in theory, Islamic religious tolerance only applied to those religious groups that Islamic jurisprudence considered to be monotheistic "People of the Book", i.e. Christians, Jews, and Sabians, if they paid the jizya tax, while those excluded from the "People of the Book" were offered only two choices: convert to Islam or fight to the death. In practice, the "People of the Book" designation and dhimmi status were even extended to the non-monotheistic religions of the conquered peoples, such as Hindus, Jains, Buddhists, and other non-monotheists.
=== Druze ===
The Druze have frequently experienced persecution by different Muslim regimes such as the Shia Ismaili Fatimid State, the Mamluks, the Sunni Ottoman Empire, and the Egypt Eyalet. The persecution of the Druze included massacres, the demolition of Druze prayer houses and holy places, and forced conversion to Islam. According to the Druze narrative, these were no ordinary killings and massacres; they were meant to eradicate the whole community.
=== Early period ===
The wars of the Ridda (lit. apostasy) undertaken by Abu Bakr, the first caliph of the Rashidun Caliphate, against Arab tribes who had accepted Islam but refused to pay Zakat and Jizya Tax, have been described by some historians as an instance of forced conversion or "reconversion". The rebellion of these Arab tribes was less a relapse to the pre-Islamic Arabian religion than termination of a political contract they had made with Muhammad. Some of these tribal leaders claimed prophethood, bringing themselves in direct conflict with the Muslim Caliphate.
Two out of the four schools of Islamic law, i.e. Hanafi and Maliki schools, accepted non-Arab polytheists to be eligible for the dhimmi status. Under this doctrine, Arab polytheists were forced to choose between conversion and death. However, according to perception of most Muslim jurists, all Arabs had embraced Islam during the lifetime of Muhammad. Their exclusion therefore had little practical significance after his death in 632.
Arab historian Al-Baladhuri says that Caliph Umar deported Christians who refused to apostatize and convert to Islam, and that he obeyed the order of the prophet who advised: “there shall not remain two religions in the land of Arabia.”
In the 9th century, the Samaritan population of Palestine faced persecution and attempts at forced conversion at the hands of the rebel leader ibn Firāsa, against whom they were defended by Abbasid caliphal troops. Historians recognize that during the Early Middle Ages, the Christian populations living in the lands invaded by the Arab Muslim armies between the 7th and 10th centuries suffered religious discrimination, religious persecution, religious violence, and martyrdom multiple times at the hands of Arab Muslim officials and rulers. As People of the Book, Christians under Muslim rule were subjected to dhimmi status (along with Jews, Samaritans, Gnostics, Mandeans, and Zoroastrians), which was inferior to the status of Muslims. Christians and other religious minorities thus faced religious discrimination and religious persecution in that they were banned from proselytising (for Christians, it was forbidden to evangelize or spread Christianity) in the lands invaded by the Arab Muslims on pain of death; they were also banned from bearing arms and from undertaking certain professions, and were obligated to dress differently in order to distinguish themselves from Arabs. Under sharia, non-Muslims were obligated to pay jizya and kharaj taxes, together with periodic heavy ransoms levied upon Christian communities by Muslim rulers in order to fund military campaigns, all of which contributed a significant proportion of income to the Islamic states while conversely reducing many Christians to poverty, and these financial and social hardships forced many Christians to convert to Islam. Christians unable to pay these taxes were forced to surrender their children to the Muslim rulers as payment; the rulers would sell them as slaves to Muslim households, where they were forced to convert to Islam. Many Christian martyrs were executed under the Islamic death penalty for defending their Christian faith through dramatic acts of resistance such as refusing to convert to Islam, repudiation of the Islamic religion and subsequent reconversion to Christianity, and blasphemy towards Muslim beliefs.
=== Umayyad Caliphate ===
After the Arab conquests a number of Christian Arab tribes suffered enslavement and forced conversion.
During the rise of the Islamic Caliphates, it was increasingly expected that all Arabs would be Muslims, and pressure was put on many to convert. The Umayyad Caliph Al-Walid I said to Shamala, the Christian Arab leader of the Banu Taghlib: "As you are a chief of the Arabs you shame them all by worshipping the cross; obey my wish and turn Muslim." He replied, "How so? I am chief of Taghlib, and I fear lest I become a cause of destruction to them all if I and they cease to believe in Christ." Enraged, Al-Walid had him dragged away on his face and tortured; afterward he commanded him again to convert to Islam or else prepare to "eat his own flesh". The Christian Arab again refused, and the order was carried out: Walid's servants "cut off a slice from Shamala's thigh and roasted it in the fire, and they thrust it into his mouth", and he was blinded during this as well. This event is confirmed by the Muslim historian Abu al-Faraj al-Isfahani.
In the early eighth century under the Umayyads, 63 out of a group of 70 Christian pilgrims from Iconium were captured, tortured, and executed under the orders of the Arab governor of Caesarea for refusing to convert to Islam (seven were forcibly converted to Islam under torture). Soon afterwards, sixty more Christian pilgrims from Amorium were crucified in Jerusalem.
=== Almohad Caliphate ===
There were forced conversions in the 12th century under the Almohad dynasty of North Africa and al-Andalus, who suppressed the dhimmi status of Jews and Christians and gave them the choice between conversion, exile, and being executed. The treatment and persecution of Jews under Almohad rule was a drastic change. Prior to Almohad rule during the Caliphate of Córdoba, Jewish culture experienced a Golden Age. María Rosa Menocal, a specialist in Iberian literature at Yale University, has argued that "tolerance was an inherent aspect of Andalusian society", and that the Jewish dhimmis living under the Caliphate, while allowed fewer rights than Muslims, were still better off than in Christian Europe. Many Jews migrated to al-Andalus, where they were not just tolerated but allowed to practice their faith openly. Christians had also practiced their religion openly in Córdoba, and both Jews and Christians lived openly in Morocco as well.
The first Almohad ruler, Abd al-Mumin, allowed an initial seven-month grace period. Then he forced most of the urban dhimmi population in Morocco, both Jewish and Christian, to convert to Islam. In 1198, the Almohad emir Abu Yusuf Yaqub al-Mansur decreed that Jews must wear a dark blue garb, with very large sleeves and a grotesquely oversized hat; his son altered the colour to yellow, a change that may have influenced Catholic ordinances some time later. Those who converted had to wear clothing that identified them as Jews since they were not regarded as sincere Muslims. Cases of mass martyrdom of Jews who refused to convert to Islam are recorded.
Many of the conversions were superficial. Maimonides urged Jews to choose the superficial conversion over martyrdom and argued, "Muslims know very well that we do not mean what we say, and that what we say is only to escape the ruler's punishment and to satisfy him with this simple confession." Abraham Ibn Ezra (1089–1164), who himself fled the persecutions of the Almohads, composed an elegy mourning the destruction of many Jewish communities throughout Spain and the Maghreb under the Almohads. Many Jews fled from territories ruled by the Almohads to Christian lands, and others, like the family of Maimonides, fled east to more tolerant Muslim lands. However, a few Jewish traders still working in North Africa are recorded.
The treatment and persecution of Christians under Almohad rule was a drastic change as well. Many Christians were killed, forced to convert, or forced to flee. Some Christians fled to the Christian kingdoms in the north and west and helped fuel the Reconquista.
Christians under the Almohad rule generally chose to relocate to the Christian principalities (most notably the Kingdom of Asturias) in the north of the Iberian Peninsula, whereas Jews decided to stay in order to keep their properties, and many of them feigned conversion to Islam, while continuing to believe and practice Judaism in secrecy.
During the Almohad persecution, the medieval Jewish philosopher and rabbi Moses Maimonides (1135–1204), one of the leading exponents of the Golden Age of Jewish culture in the Iberian Peninsula, wrote his Epistle on Apostasy, in which he permitted Jews to feign apostasy under duress, though strongly recommending leaving the country instead. There is dispute amongst scholars as to whether Maimonides himself converted to Islam in order to freely escape from Almohad territory, and then reconverted back to Judaism in either the Levant or in Egypt. He was later denounced as an apostate and tried in an Islamic court.
=== Seljuk Empire ===
In order to increase their numbers in Anatolia, the newly arrived Seljuk Turks took Christian children and forcibly converted them to Islam and turkified them, acts specifically mentioned in Antioch, around Samosata, and in western Asia Minor.
=== Danishmend's campaigns ===
During his campaigns, Sultan Malik Danishmend swore to forcibly convert the population of the city of Sisiya Comana to Islam and he did so upon capturing it. The governor of Comana forced its population to pray 5 times a day and those who refused to go to the mosque were brought to it by threat of physical violence. Those who continued to drink wine or do other things that Islam forbids were publicly whipped. The fate of the city of Euchaita was similar, with Malik giving the people the option of converting to Islam or death.
=== Yemen ===
In the late 1160s, the Yemenite ruler 'Abd-al-Nabī ibn Mahdi left Jews with the choice between conversion to Islam and martyrdom. Ibn Mahdi also imposed his beliefs upon the Muslims as well as the Jews. This led to a revival of Jewish messianism, but also to mass conversion. The persecution ended in 1173 with the defeat of Ibn Mahdi and the conquest of Yemen by the brother of Saladin, and the converts were allowed to return to their Jewish faith.
According to two Cairo Genizah documents, the Ayyubid ruler of Yemen, al-Malik al-Mu'izz al-Ismail (reigned from 1197 to 1202), had attempted to force the Jews of Aden to convert. The second document details the relief of the Jewish community after his murder, and those who had been forced to convert reverted to Judaism. While he did not impose Islam upon the foreign merchants, they were forced to pay triple the normal rate of poll tax.
A measure listed in the legal works of Al-Shawkānī is the forced conversion of Jewish orphans. Modern studies give no date for this decree, nor do they identify who issued it. The forced conversion of Jewish orphans was reintroduced under Imam Yahya in 1922. The Orphans' Decree was implemented aggressively for the first ten years. It was re-promulgated in 1928.
=== Ottoman Empire ===
A form of forced conversion became institutionalized during the Ottoman Empire in the practice of devşirme, a human levy in which Christian boys were seized and collected from their families (usually in the Balkans), enslaved, forcefully converted to Islam, and then trained as an elite military unit within the Ottoman army or for high-ranking service to the sultan. From the mid-to-late 14th century through the early 18th century, the devşirme–janissary system enslaved an estimated 500,000 to one million non-Muslim adolescent males. These boys would attain a good education and high social standing after their training and conversion.
In the 17th century, Sabbatai Zevi, a Sephardic Jew whose ancestors were welcomed in the Ottoman Empire during the Spanish Inquisition, proclaimed himself as the Jewish Messiah and called for the abolition of major Jewish laws and customs. After he attracted a large following, he was arrested by the Ottoman authorities and given a choice between execution or conversion to Islam. Zevi opted for a feigned conversion solely to escape the death penalty, and continued to believe and practice Judaism along with his followers in secrecy. The Byzantine historian Doukas recounts two other cases of forced or attempted forced conversion: one of a Christian official who had offended Sultan Murad II, and the other of an archbishop.
Speros Vryonis cites a pastoral letter from 1338 addressed to the residents of Nicaea indicating widespread, forcible conversion by the Turks after it was conquered: "And they [Turks] having captured and enslaved many of our own and violently forced them and dragging them along alas! So that they took up their evil and godlessness."
After the Siege of Nicaea (1328–1331), the Turks began to force the Christian inhabitants who had escaped the massacres to convert to Islam. The patriarch of Constantinople, John XIX, wrote a message to the people of Nicaea shortly after the city was seized. His letter says that "The invaders endeavored to impose their impure religion on the populace, at all costs, intending to make the inhabitants followers of Muhammad". The patriarch advised the Christians to "be steadfast in your religion" and not to forget that the "Turks are masters of your bodies only, but not of your souls".
Apostolos Vakalopoulos comments on the first Ottoman invasions of Europe and Dimitar Angelov gives assessment on the Campaigns on Murad II and Mehmed II and their impact on the conquered native Balkan Christians:
From the very beginning of the Turkish onslaught [in Thrace] under Suleiman [son of Sultan Orhan], the Turks tried to consolidate their position by the forcible imposition of Islam. If [the Ottoman historian] Şükrullah is to be believed, those who refused to accept the Moslem faith were slaughtered and their families enslaved. "Where there were bells," writes the same author [Şükrullah], "Suleiman broke them up and cast them into fires. Where there were churches he destroyed them or converted them into mosques. Thus, in place of bells there were now muezzins. Wherever Christian infidels were still found, vassalage was imposed on their rulers. At least in public they could no longer say 'kyrie eleison' but rather 'There is no God but Allah'; and where once their prayers had been addressed to Christ, they were now to "Muhammad, the prophet of Allah."
According to historian Demetrios Constantelos, "Mass forced conversions were recorded during the caliphates of Selim I (1512–1520),...Selim II (1566–1574), and Murat III (1574–1595). On the occasion of some anniversary, such as the capture of a city, or a national holiday, many rayahs were forced to apostacize. On the day of the circumcision of Mohammed III, great numbers of Christians (Albanians, Greeks, Slavs) were forced to convert to Islam." After reviewing the martyrology of Christians killed by the Ottomans from the fall of Constantinople all the way to the final phases of the Greek War of Independence, Constantelos reports:
The Ottoman Turks condemned to death eleven Ecumenical Patriarchs of Constantinople, nearly one hundred bishops, and several thousand priests, deacons, and monks. It is impossible to say with certainty how many men of the cloth were forced to apostasize.
For strategic reasons, the Ottomans forcibly converted Christians living in the frontier regions of Macedonia and northern Bulgaria, particularly in the 16th and 17th centuries. Those who refused were either executed or burned alive.
The communal budgets of Jews were heavily burdened by the repurchasing of Jewish slaves abducted by Arab, Berber, or Turkish pirates, or by military raids. The mental trauma of captivity and slavery caused unransomed prisoners who had lost family, money, and friends to convert to Islam.
During his travels through the Salt lake region of central Anatolia, Jean-Baptiste Tavernier observed in the town of Mucur, "there are numbers of Greeks who are forced everyday to become Turks".
During the genocide and persecution of Greeks in the 20th century, there were cases of forced conversion to Islam (see also Armenian genocide, Assyrian genocide, and Hamidian massacres).
=== Iran ===
Ismail I, the founder of the Safavid dynasty, decreed Twelver Shiism to be the official religion of state and ordered executions of a number of Sunni intellectuals who refused to accept Shiism. Non-Muslims faced frequent persecutions and at times forced conversions under the rule of his dynastic successors. Thus, after the capture of the Hormuz Island, Abbas I required local Christians to convert to Twelver Shia Islam, Abbas II granted his ministers authority to force Jews to become Shia Muslims, and Sultan Husayn decreed forcible conversion of Zoroastrians. In 1839, during the Qajar era the Jewish community in the city of Mashhad was attacked by a mob and subsequently forced to convert to Shia Islam.
In Persia, instances of forced conversion of Jews took place in 1291 and 1318, and those in Baghdad in 1333 and 1344. In 1617 and 1622, a wave of forced conversions and persecution, provoked by the slander of Jewish apostates, swept over the Jews of Persia, sparing neither Nestorian Christians nor Armenians. From 1653 to 1666, during the reign of Shah Abbas II, all the Jews in Persia were Islamized by force. However, religious freedom was eventually restored. A law in 1656 gave Jewish or Christian converts to Islam exclusive rights of inheritance. This law was alleviated for the Christians as a concession to Pope Alexander VII but remained in force for Jews until the end of the nineteenth century. David Cazés mentions the existence in Tunisia of similar inheritance laws favoring converts to Islam.
=== Indian Subcontinent ===
In an invasion of the Kashmir valley (1015), Mahmud of Ghazni plundered the valley, took many prisoners and carried out conversions to Islam. In his later campaigns, in Mathura, Baran and Kanauj, again, many conversions took place. Those soldiers who surrendered to him were converted to Islam. In Baran (Bulandshahr) alone, 10,000 persons were converted to Islam, including the king. Tarikh-i-Yamini, Rausat-us-Safa and Tarikh-i-Ferishtah speak of the construction of mosques and schools and the appointment of preachers and teachers by Mahmud and his successor Masud. Wherever Mahmud went, he insisted that the people convert to Islam. The raids by Muhammad Ghori and his generals brought in thousands of slaves in the late 12th century, most of whom were compelled to convert as one of the preconditions of their freedom. Sikandar Butshikan (1394–1417) demolished Hindu temples and forcefully converted Hindus.
Aurangzeb employed a number of means to encourage conversions to Islam. The ninth guru of the Sikhs, Guru Tegh Bahadur, was beheaded in Delhi on the orders of Aurangzeb for refusing to convert to Islam. In a Mughal-Sikh war in 1715, 700 followers of Banda Singh Bahadur were beheaded. Sikhs were executed for not apostatizing from Sikhism. Banda Singh Bahadur was offered a pardon if he converted to Islam. Upon his refusal, he was tortured and killed along with his five-year-old son. Following the execution of Banda, the emperor ordered that Sikhs be apprehended wherever they were found.
The 18th-century ruler Tipu Sultan persecuted Hindus, Christians and Mappila Muslims. During his Mysorean invasion of Kerala, hundreds of temples and churches were demolished and tens of thousands of Christians and Hindus were killed or converted to Islam by force.
=== Contemporary period ===
==== South Asia ====
===== Bangladesh =====
In Bangladesh, the International Crimes Tribunal tried and convicted several leaders of the Islamic Razakar militias, as well as Bangladesh Muslim Awami league (Forid Uddin Mausood), of war crimes committed against Hindus during the 1971 Bangladesh genocide. The charges included forced conversion of Bengali Hindus to Islam.
===== India =====
In the 1998 Prankote massacre, 26 Kashmiri Hindus were beheaded by Islamist militants after their refusal to convert to Islam. The militants struck when the villagers refused demands from the gunmen to convert to Islam and prove their conversion by eating beef.
During the Noakhali riots in 1946, several thousand Hindus were forcibly converted to Islam by Muslim mobs.
===== Pakistan =====
Members of minority religions in Pakistan face discrimination every day. This leads to socio-political and economic exclusion and severe marginalization in all aspects of life. In a country that is 96 percent Muslim, targeting of its religious minorities (3 percent), especially Shias, Ahmadis, Hindus and Christians, is widespread.
The rise of Taliban insurgency in Pakistan has been an influential and increasing factor in the persecution of and discrimination against religious minorities, such as Hindus, Christians, Sikhs, and other minorities.
The Human Rights Council of Pakistan has reported that cases of forced conversion are increasing. A 2014 report by the Movement for Solidarity and Peace (MSP) says about 1,000 women in Pakistan are forcibly converted to Islam every year (700 Christian and 300 Hindu).
In 2003, a six-year-old Sikh girl was kidnapped by a member of the Afridi tribe in Northwest Frontier Province; the alleged kidnapper claimed the girl was actually 12 years old, had converted to Islam, and therefore could not be returned to her non-Muslim family. In Pakistan's Sindh province, a distressing pattern of crimes has emerged, including the abduction, coerced conversion to Islam, and subsequent marriage to older Muslim men who are often abductors. These crimes primarily target underage girls from impoverished Hindu families.
Rinkle Kumari, a 19-year-old Pakistani student, Lata Kumari, and Asha Kumari, a Hindu working in a beauty parlor, were allegedly forced to convert from Hinduism to Islam. They told the judge that they wanted to go with their parents. Their cases were appealed all the way to the Supreme Court of Pakistan. The appeal was admitted but was never heard. Rinkle was abducted by a gang and "forced" to convert to Islam before having her head shaved.
In December 2017, Sikhs in Hangu district stated that they were being pressured to convert to Islam by Yaqoob Khan, the assistant commissioner of Tall Tehsil. However, the Deputy Commissioner of Hangu, Shahid Mehmood, denied that this had occurred and claimed that the Sikhs had been unintentionally offended during a conversation with Yaqoob.
Many Hindu girls living in Pakistan are kidnapped, forcibly converted and married to Muslims. According to another report from the Movement for Solidarity and Peace, about 1,000 non-Muslim girls are converted to Islam each year in Pakistan. According to Amarnath Motumal, the vice chairperson of the Human Rights Commission of Pakistan, every month an estimated 20 or more Hindu girls are abducted and converted, although exact figures are impossible to gather. In 2014 alone, 265 legal cases of forced conversion were reported, mostly involving Hindu girls.
A total of 57 Hindus converted in Pasrur during May 14–19. On May 14, according to their relatives, 35 Hindus of the same family were forced to convert by their employer because his sales had dropped after Muslims began boycotting food items prepared by Hindus, and because of harassment by the Muslim employees of neighbouring shops. Since the impoverished Hindus had no other way to earn a living and needed to keep their jobs to survive, they converted. Fourteen members of another family converted on May 17 because no one would employ them; later, another Hindu man and his family of eight converted under pressure from Muslims to avoid having their land grabbed.
In 2017, the Sikh community in Hangu district of Pakistan's Khyber-Pakhtunkhwa province alleged that they were "being forced to convert to Islam" by a government official. Farid Chand Singh, who filed the complaint, has claimed that Assistant Commissioner Tehsil Tall Yaqoob Khan was allegedly forcing Sikhs to convert to Islam and the residents of Doaba area are being tortured religiously. According to reports, about 60 Sikhs of Doaba had demanded security from the administration.
Many Hindus voluntarily convert to Islam in order to acquire Watan Cards and National Identification Cards. These converts are also given land and money. For example, 428 poor Hindus in Matli were converted between 2009 and 2011 by the Madrassa Baitul Islam, a Deobandi seminary in Matli, which pays off the debts of Hindus converting to Islam. Another example is the conversion of 250 Hindus to Islam in the Chohar Jamali area of Thatta. Conversions are also carried out by the mission of Baba Deen Mohammad Shaikh, a former Hindu, which has converted 108,000 people to Islam since 1989.
Within Pakistan, the southern province of Sindh had over 1,000 forced conversions of Christian and Hindu girls according to the annual report of the Human Rights Commission of Pakistan in 2018. According to victims' families and activists, Mian Abdul Haq, who is a local political and religious leader in Sindh, has been accused of being responsible for forced conversions of girls within the province.
More than 100 Hindus in Sindh converted to Islam in June 2020 to escape discrimination and economic pressures. Islamic charities and clerics offer incentives of jobs or land to impoverished minorities on the condition that they convert. The New York Times summarised the view of Hindu groups that these seemingly voluntary conversions "take place under such economic duress that they are tantamount to a forced conversion anyway."
In October 2020, the Pakistani High Court upheld the validity of a forced marriage between 44-year-old Ali Azhar and 13-year-old Christian Arzoo Raja. Raja had been abducted by Azhar, forcibly wed to him, and then forcibly converted to Islam. Pakistan has been found in breach of its international commitments to safeguard non-Muslim girls from exploitation by influential factions and criminal elements, as forced conversions have become commonplace within the nation. This concerning trend is on the rise, notably observed in the districts of Tharparkar, Umerkot, and Mirpur Khas in Sindh.
==== Indonesia ====
In 2012, over 1,000 Catholic children from East Timor, removed from their families, were reported to be held in Indonesia without the consent of their parents, forcibly converted to Islam, educated in Islamic schools and naturalized. Other reports claim forced conversion of minority Ahmadiyya sect Muslims to Sunni Islam, with the use of violence.
In 2001 the Indonesian army evacuated hundreds of Christian refugees from the remote Kesui and Teor islands in Maluku after the refugees stated that they had been forced to convert to Islam. According to reports, some of the men had been circumcised against their will, and a paramilitary group involved in the incident confirmed that circumcisions had taken place while denying any element of coercion.
In 2017, many members of the Orang Rimba tribe, especially children, were being forced to renounce their folk religion and convert to Islam.
==== West Asia ====
There have been a number of reports of attempts to forcibly convert religious minorities in Iraq. The Yazidi people of northern Iraq, who follow an ethnoreligious syncretic faith, have been threatened with forced conversion by the Islamic State of Iraq and the Levant, who consider their practices to be Satanism. UN investigators have reported mass killings of Yazidi men and boys who refused to convert to Islam. In Baghdad, hundreds of Assyrian Christians fled their homes in 2007 when a local extremist group announced that they had to convert to Islam, pay the jizya or die. In March 2007, the BBC reported that people in the Mandaean ethnic and religious minority in Iraq alleged that they were being targeted by Islamist insurgents, who offered them the choice of conversion or death.
In 2006, two journalists of the Fox News Network were kidnapped at gunpoint in the Gaza Strip by a previously unknown militant group. After being forced to read statements on videotape proclaiming that they had converted to Islam, they were released by their captors.
Allegations of Coptic Christian girls being forced to marry Arab Muslim men and convert to Islam in Egypt have been reported by a number of news and advocacy organizations and have sparked public protests. According to a 2009 report by the US State Department, observers have found it extremely difficult to determine whether compulsion was used, and in recent years no such cases have been independently verified.
Coptic women and girls are abducted, forced to convert to Islam and married to Muslim men. In 2009, the Washington, D.C.–based group Christian Solidarity International published a study of the abductions and forced marriages and the anguish felt by the young women because returning to Christianity is against the law. Further allegations of organised abduction of Copts, trafficking and police collusion continued in 2017.
==== United Kingdom ====
According to the UK prison officers' union, some Muslim prisoners in the UK have been forcibly converting fellow inmates to Islam in prisons. An independent government report published in 2023 found that there have been multiple cases of Muslim gangs threatening non-Muslim prisoners to "convert or get hurt".
In 2007, a Sikh girl's family claimed that she had been forcibly converted to Islam, and they received a police guard after being attacked by an armed gang, although the "Police said no one was injured in the incident".
In response to these news stories, an open letter to Sir Ian Blair, signed by ten Hindu academics, argued that claims that Hindu and Sikh girls were being forcefully converted were "part of an arsenal of myths propagated by right-wing Hindu supremacist organisations in India". The Muslim Council of Britain issued a press release pointing out there is a "lack of evidence" of any forced conversions and suggested it is an underhand attempt to smear the British Muslim population.
An academic paper by Katy Sian published in the journal South Asian Popular Culture in 2011 explored the question of how "'forced' conversion narratives" arose around the Sikh diaspora in the United Kingdom. Sian, who reports that claims of conversion through courtship on campuses are widespread in the UK, indicates that rather than relying on actual evidence they primarily rest on the word of "a friend of a friend" or on personal anecdote. According to Sian, the narrative is similar to accusations of "white slavery" lodged against the Jewish community and foreigners to the UK and the US, with the former having ties to antisemitism that mirror the Islamophobia betrayed by the modern narrative. Sian expanded on these views in 2013's Mistaken Identities, Forced Conversions, and Postcolonial Formations.
In 2018, a report by a Sikh activist organisation, Sikh Youth UK, entitled "The Religiously Aggravated Sexual Exploitation of Young Sikh Women Across the UK" made allegations of similarities between the case of Sikh Women and the Rotherham child sexual exploitation scandal. However, in 2019, this report was criticised by researchers and an official UK government report led by two Sikh academics for false and misleading information. It noted: "The RASE report lacks solid data, methodological transparency and rigour. It is filled instead with sweeping generalisations and poorly substantiated claims around the nature and scale of abuse of Sikh girls and causal factors driving it. It appealed heavily to historical tensions between Sikhs and Muslims and narratives of honour in a way that seemed designed to whip up fear and hate".
== Judaism ==
Under the Hasmonean Kingdom, the Idumeans were forced to convert to Judaism, by threat of exile or death, depending on the source.
In Eusebius, Christianity, and Judaism, Harold W. Attridge claims that Josephus' account was accurate and that Alexander Jannaeus (around 80 BCE) demolished the city of Pella in Moab because the inhabitants refused to adopt Jewish national customs. Maurice Sartre writes of the "policy of forced Judaization adopted by Hyrcanos, Aristobulus I and Jannaeus", who offered "the conquered peoples a choice between expulsion or conversion." William Horbury postulates that an existing small Jewish population in Lower Galilee was massively expanded by forced conversion around 104 BCE. Yigal Levin, conversely, argues that many non-Jewish communities, such as Idumeans, voluntarily assimilated in Hasmonean Judea, based on archaeological evidence and cultural affinities between the groups.
In 2009, the BBC claimed that in 524 CE the Himyarite Kingdom, which had adopted Judaism as its de facto state religion two centuries earlier and was led by King Yusuf Dhu Nuwas, had offered residents of a village in what is now Saudi Arabia the choice between conversion to Judaism or death, and that 20,000 Christians had then been massacred. During the reign of Dhu Nuwas, a transfer of political power began, during which the Himyarite kingdom became a tributary of the Kingdom of Aksum, which had adopted Christianity as its de facto state religion two centuries earlier. This process was completed by the time of the reign of Ma'dīkarib Yafur (519–522), a Christian who was appointed by the Aksumites. A coup d'état ensued, with Dhu Nuwas assuming authority after the killing of the Aksumite garrison in Zafar. A general was sent against Najrān, a predominantly Christian oasis with a good number of Jews, whose inhabitants refused to recognize his authority. The general blocked the caravan route which connected Najrān with Eastern Arabia, and he also persecuted the Christian population of Najrān. Dhu Nuwas' campaign eventually killed between 11,500 and 14,000 people and took a similar number of prisoners.
Ethiopian Jews (also known as Beta Israel) were forcibly converted to mainstream Rabbinical Judaism following their covert evacuation to Israel during Operation Moses and Operation Solomon. Their native form of Judaism, commonly called Haymanot, is looked down upon by the Israeli government and the Chief Rabbinate of Israel.
No other instances of forced conversion to Judaism are known. Forced conversion has been forbidden in Judaism for over 1,000 years, and forced conversions are considered invalid.
== Atheism ==
=== Eastern Bloc ===
Under the doctrine of state atheism in the Soviet Union, there was a "government-sponsored program of forced conversion to atheism" conducted by communists. This program included the overarching objective of establishing not only a fundamentally materialistic conception of the universe, but also of fostering "direct and open criticism of the religious outlook" by means of establishing an "anti-religious trend" across the entire school. The Russian Orthodox Church, for centuries the strongest of all Orthodox Churches, was violently suppressed. Revolutionary leader Vladimir Lenin wrote that every religious idea and every idea of God "is unutterable vileness... of the most dangerous kind, 'contagion' of the most abominable kind". Many priests were killed and imprisoned. Thousands of churches were closed, some turned into hospitals. In 1925, the government founded the League of Militant Atheists to intensify the persecution.
Christopher Marsh, a professor at Baylor University, writes that "Tracing the social nature of religion from Schleiermacher and Feuerbach to Marx, Engels, and Lenin... the idea of religion as a social product evolved to the point of policies aimed at the forced conversion of believers to atheism."
Jonathan Blake of the Department of Political Science at Columbia University elucidates the history of this practice in the USSR, stating that:
God, however, did not simply vanish after the Bolshevik revolution. Soviet authorities relied heavily on coercion to spread their idea of scientific atheism. This included confiscating church goods and property, forcibly closing religious institutions and executing religious leaders and believers or sending them to the gulag... Later, the United States passed the Jackson–Vanik amendment which harmed US–Soviet trade relations until the USSR permitted the emigration of religious minorities, primarily Jews. Despite the threat from coreligionists abroad, however, the Soviet Union engaged in forced atheism from its earliest days.
Across Eastern Europe following World War II, the parts of the Nazi empire conquered by the Soviet Red Army, along with Yugoslavia, became one-party communist states, and the project of coercive conversion continued. The Soviet Union ended its wartime truce with the Russian Orthodox Church and extended its persecutions to the newly communist Eastern Bloc: "In Poland, Hungary, Lithuania and other Eastern European countries, Catholic leaders who were unwilling to be silent were denounced, publicly humiliated or imprisoned by the communists. Leaders of the national Orthodox Churches in Romania and Bulgaria had to be cautious and submissive", wrote Blainey. While the churches were generally not as severely treated as they had been in the USSR, nearly all their schools and many of their churches were closed, and they lost their formerly prominent roles in public life. Children were taught atheism, and clergy were imprisoned by the thousands.
In the Eastern Bloc, Christian churches, Jewish synagogues and Islamic mosques were forcibly "converted into museums of atheism." Historical essayist Andrei Brezianu expounds upon this situation, specifically in the Socialist Republic of Romania, writing that scientific atheism was "aggressively applied to Moldova, immediately after the 1940 annexation, when churches were profaned, clergy assaulted, and signs and public symbols of religion were prohibited"; he provides an example of this phenomenon, further writing that "St. Theodora Church in downtown Chişinău was converted into the city's Museum of Scientific Atheism". Marxist-Leninist regimes treated religious believers as subversives or abnormal, sometimes relegating them to psychiatric hospitals and reeducation. Nevertheless, historian Emily Baran writes that "some accounts suggest the conversion to militant atheism did not always end individuals' existential questions".
=== French Revolution ===
During the French Revolution, a campaign of dechristianization happened which included removal and destruction of religious objects from places of worship; English librarian Thomas Hartwell Horne and biblical scholar Samuel Davidson write that "churches were converted into 'temples of reason,' in which atheistical and licentious homilies were substituted for the proscribed service".
Unlike later establishments of state atheism by communist regimes, the French Revolutionary experiment was short (seven months), incomplete and inconsistent. Even though it was brief, the French experiment was particularly notable because it influenced atheists such as Ludwig Feuerbach, Sigmund Freud and Karl Marx.
=== East Asia ===
The emergence of communist states across East Asia after World War Two saw religion purged by atheist regimes across China, North Korea and much of Indo-China. In 1949, China became a communist state under the leadership of Mao Zedong's Chinese Communist Party. Prior to this takeover, China had been a cradle of religious thought since ancient times: it was the birthplace of Confucianism and Daoism, and Buddhism arrived in the first century CE. Under Mao, China became an officially atheist state, and even though some religious practices were permitted to continue under state supervision, religious groups considered a threat to law and order have been suppressed, such as Tibetan Buddhism from 1959 and Falun Gong in recent years. Religious schools and social institutions were closed, foreign missionaries were expelled, and local religious practices were discouraged. During the Cultural Revolution, Mao instigated "struggles" against the Four Olds: "old ideas, customs, culture, and habits of mind". In 1999, the Communist Party launched a three-year drive to promote atheism in Tibet, saying that intensifying atheist propaganda is "especially important for Tibet because atheism plays an extremely important role in promoting economic construction, social advancement and socialist spiritual civilization in the region".
As of November 2018, in present-day China, the government has detained many people in internment camps, "where Uighur Muslims are remade into atheist Chinese subjects". For children who were forcibly taken away from their parents, the Chinese government has established "orphanages" with the aim of "converting future generations of Uighur Muslim children into loyal subjects who embrace atheism".
=== Revolutionary Mexico ===
Articles 3, 5, 24, 27, and 130 of the Mexican Constitution of 1917 as originally enacted were anticlerical and enormously restricted religious freedoms. At first the anticlerical provisions were only sporadically enforced, but when President Plutarco Elías Calles took office, he enforced the provisions strictly. Calles' Mexico has been characterized as an atheist state and his program as being one to eradicate religion in Mexico.
All religions had their properties expropriated, and these became part of government wealth. There was a forced expulsion of foreign clergy and the seizure of Church properties. Article 27 prohibited any future acquisition of such property by the churches, and prohibited religious corporations and ministers from establishing or directing primary schools. This second prohibition was sometimes interpreted to mean that the Church could not give religious instruction to children within the churches on Sundays, seen as destroying the ability of Catholics to be educated in their own religion.
The Constitution of 1917 also closed and forbade the existence of monastic orders (article 5), forbade any religious activity outside of church buildings (now owned by the government), and mandated that such religious activity would be overseen by the government (article 24).
On June 14, 1926, President Calles enacted anticlerical legislation known formally as The Law Reforming the Penal Code and unofficially as the Calles Law. His anti-Catholic actions included outlawing religious orders, depriving the Church of property rights and depriving the clergy of civil liberties, including their right to a trial by jury (in cases involving anti-clerical laws) and the right to vote. Catholic antipathy towards Calles was enhanced because of his vocal atheism.
Due to the strict enforcement of anti-clerical laws, people in strongly Catholic areas, especially the states of Jalisco, Zacatecas, Guanajuato, Colima and Michoacán, began to oppose him, and this opposition led to the Cristero War from 1926 to 1929, which was characterized by brutal atrocities on both sides. Some Cristeros applied terrorist tactics, while the Mexican government persecuted the clergy, killing suspected Cristeros and supporters and often retaliating against innocent individuals. In Tabasco state, the so-called "Red Shirts" began to act.
A truce was negotiated with the assistance of U.S. Ambassador Dwight Whitney Morrow. Calles, however, did not abide by the terms of the truce – in violation of its terms, he had approximately 500 Cristero leaders and 5,000 other Cristeros shot, frequently in their homes in front of their spouses and children. Particularly offensive to Catholics after the supposed truce was Calles' insistence on a complete state monopoly on education, suppressing all Catholic education and introducing "socialist" education in its place: "We must enter and take possession of the mind of childhood, the mind of youth". The persecution continued as Calles maintained control under his Maximato and did not relent until 1940, when President Manuel Ávila Camacho, a believing Catholic, took office. This attempt to indoctrinate the youth in atheism was begun in 1934 by amending Article 3 to the Mexican Constitution to eradicate religion by mandating "socialist education", which "in addition to removing all religious doctrine" would "combat fanaticism and prejudices", "build[ing] in the youth a rational and exact concept of the universe and of social life". In 1946 this "socialist education" was removed from the constitution and the document returned to the less egregious generalized secular education.
The effects of the war on the Church were profound. Between 1926 and 1934 at least 40 priests were killed. Where there were 4,500 priests operating within the country before the rebellion, in 1934 there were only 334 priests licensed by the government to serve fifteen million people, the rest having been eliminated by emigration, expulsion, and assassination. By 1935, 17 states had no priest at all.
== See also ==
== References == | Wikipedia/Forced_conversion |
A theory of everything is a hypothetical physical theory that would explain all known physical phenomena.
Theory of everything may also refer to:
== Philosophy ==
Theory of everything (philosophy), a hypothetical all-encompassing philosophical explanation of nature or reality
A Theory of Everything, a book by Ken Wilber dealing with his "integral theory"
== Film and television ==
"The Theory of Everything" (CSI), an episode of CSI: Crime Scene Investigation
The Theory of Everything (2006 film), a TV film
The Theory of Everything (2014 film), a biographical film about Stephen and Jane Hawking
The Theory of Everything (2023 film), a mystery thriller film
== Music ==
Theory of Everything (album), 2010 album by Children Collide
The Theory of Everything (Ayreon album), 2013
The Theory of Everything (Life on Planet 9 album), 2014
== See also ==
Theory of Everything (podcast), a radio show and then podcast by Benjamen Walker
Toe (disambiguation) | Wikipedia/Theory_of_everything_(disambiguation) |
Degenerate Higher-Order Scalar-Tensor theories (or DHOST theories) are theories of modified gravity. They have a Lagrangian containing second-order derivatives of a scalar field but do not generate ghosts (kinetic excitations with negative kinetic energy), because they only contain one propagating scalar mode (as well as the two usual tensor modes).
== History ==
DHOST theories were introduced in 2015 by David Langlois and Karim Noui. They are a generalisation of Beyond Horndeski (or GLPV) theories, which are themselves a generalisation of Horndeski theories. The equations of motion of Horndeski theories contain only two derivatives of the metric and the scalar field, and it was believed that only equations of motion of this form would not contain an extra scalar degree of freedom (which would lead to unwanted ghosts). However, it was first shown that a class of theories now named Beyond Horndeski also avoided the extra degree of freedom. Originally, only theories quadratic in the second derivative of the scalar field were considered, but DHOST theories up to cubic order have now been studied. A well-known specific example of a DHOST theory is mimetic gravity, introduced in 2013 by Chamseddine and Mukhanov.
== Action ==
All DHOST theories depend on a scalar field {\displaystyle \phi }. The general action of DHOST theories is given by
{\displaystyle S[g,\phi ]=\int d^{4}x{\sqrt {-g}}\left[f_{0}(X,\phi )+f_{1}(X,\phi )\square \phi +f_{2}(X,\phi )R+C_{(2)}^{\mu \nu \rho \sigma }\phi _{\mu \nu }\phi _{\rho \sigma }+f_{3}(X,\phi )G_{\mu \nu }\phi ^{\mu \nu }+C_{(3)}^{\mu \nu \rho \sigma \alpha \beta }\phi _{\mu \nu }\phi _{\rho \sigma }\phi _{\alpha \beta }\right],}
where {\displaystyle X} is the kinetic energy of the scalar field, {\displaystyle \phi _{\mu \nu }=\nabla _{\mu }\nabla _{\nu }\phi }, and the quadratic terms in {\displaystyle \phi _{\mu \nu }} are given by
{\displaystyle C_{(2)}^{\mu \nu \rho \sigma }\phi _{\mu \nu }\phi _{\rho \sigma }=\sum _{A=1}^{5}a_{A}(X,\phi )L_{A}^{(2)},}
where
{\displaystyle L_{1}^{(2)}=\phi _{\mu \nu }\phi ^{\mu \nu },\quad L_{2}^{(2)}=(\square \phi )^{2},\quad L_{3}^{(2)}=(\square \phi )\phi ^{\mu }\phi _{\mu \nu }\phi ^{\nu },\quad L_{4}^{(2)}=\phi ^{\mu }\phi _{\mu \rho }\phi ^{\rho \nu }\phi _{\nu },\quad L_{5}^{(2)}=\left(\phi ^{\mu }\phi _{\mu \nu }\phi ^{\nu }\right)^{2},}
and the cubic terms are given by
{\displaystyle C_{(3)}^{\mu \nu \rho \sigma \alpha \beta }\phi _{\mu \nu }\phi _{\rho \sigma }\phi _{\alpha \beta }=\sum _{A=1}^{10}b_{A}(X,\phi )L_{A}^{(3)},}
where
{\displaystyle L_{1}^{(3)}=(\square \phi )^{3},\quad L_{2}^{(3)}=(\square \phi )\phi _{\mu \nu }\phi ^{\mu \nu },\quad L_{3}^{(3)}=\phi _{\mu \nu }\phi ^{\nu \rho }\phi _{\rho }^{\mu },\quad L_{4}^{(3)}=(\square \phi )^{2}\phi _{\mu }\phi ^{\mu \nu }\phi _{\nu },\quad L_{5}^{(3)}=\square \phi \,\phi _{\mu }\phi ^{\mu \nu }\phi _{\nu \rho }\phi ^{\rho },\quad L_{6}^{(3)}=\phi _{\mu \nu }\phi ^{\mu \nu }\phi _{\rho }\phi ^{\rho \sigma }\phi _{\sigma },\quad L_{7}^{(3)}=\phi _{\mu }\phi ^{\mu \nu }\phi _{\nu \rho }\phi ^{\rho \sigma }\phi _{\sigma },\quad L_{8}^{(3)}=\phi _{\mu }\phi ^{\mu \nu }\phi _{\nu \rho }\phi ^{\rho }\phi _{\sigma }\phi ^{\sigma \lambda }\phi _{\lambda },\quad L_{9}^{(3)}=\square \phi \left(\phi _{\mu }\phi ^{\mu \nu }\phi _{\nu }\right)^{2},\quad L_{10}^{(3)}=\left(\phi _{\mu }\phi ^{\mu \nu }\phi _{\nu }\right)^{3}.}
The {\displaystyle a_{A}} and {\displaystyle b_{A}} are arbitrary functions of {\displaystyle \phi } and {\displaystyle X}.
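The index contractions in the quadratic terms can be checked mechanically. The following is a minimal sketch, not part of the standard presentation, that evaluates the five quadratic terms at a single spacetime point using SymPy, assuming a flat Minkowski metric and treating the first and second derivatives of the scalar field as generic symbols; all variable names are illustrative, and sign or normalisation conventions for the kinetic term X vary in the literature.

# Minimal illustrative sketch: quadratic DHOST terms L_1^(2)..L_5^(2) at a point,
# assuming a flat Minkowski metric diag(-1, 1, 1, 1). Names are illustrative.
import sympy as sp

dim = 4
eta = sp.diag(-1, 1, 1, 1)                                       # Minkowski metric (its own inverse)
phi_d = sp.Matrix(dim, 1, lambda i, j: sp.Symbol(f'dphi_{i}'))   # phi_mu (lower index)
Phi = sp.Matrix(dim, dim, lambda i, j: sp.Symbol(f'P_{min(i, j)}{max(i, j)}'))  # symmetric phi_{mu nu}

phi_u = eta * phi_d                      # phi^mu = eta^{mu nu} phi_nu
box_phi = (eta * Phi).trace()            # box phi = eta^{mu nu} phi_{mu nu}
s = (phi_u.T * Phi * phi_u)[0, 0]        # phi^mu phi_{mu nu} phi^nu
X = (phi_u.T * phi_d)[0, 0]              # kinetic term phi^mu phi_mu (conventions vary)

L2_terms = {
    'L1': (eta * Phi * eta * Phi).trace(),           # phi_{mu nu} phi^{mu nu}
    'L2': box_phi**2,                                 # (box phi)^2
    'L3': box_phi * s,                                # (box phi) phi^mu phi_{mu nu} phi^nu
    'L4': (phi_u.T * Phi * eta * Phi * phi_u)[0, 0],  # phi^mu phi_{mu rho} phi^{rho nu} phi_nu
    'L5': s**2,                                       # (phi^mu phi_{mu nu} phi^nu)^2
}

for name, expr in L2_terms.items():
    print(name, sp.expand(expr))

The cubic terms can be built in the same way by inserting additional factors of the matrix Phi and the vector phi_u into the contractions above.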
== References == | Wikipedia/Degenerate_Higher-Order_Scalar-Tensor_theories |
The propaganda model is a conceptual model in political economy advanced by Edward S. Herman and Noam Chomsky to explain how propaganda and systemic biases function in corporate mass media. The model seeks to explain how populations are manipulated and how consent for economic, social, and political policies, both foreign and domestic, is "manufactured" in the public mind due to this propaganda. The theory posits that the way in which corporate media is structured (e.g. through advertising, concentration of media ownership or government sourcing) creates an inherent conflict of interest and therefore acts as propaganda for anti-democratic elements.
First presented in their 1988 book Manufacturing Consent: The Political Economy of the Mass Media, the propaganda model views corporate media as businesses interested in the sale of a product—readers and audiences—to other businesses (advertisers) rather than the pursuit of quality journalism in service of the public. Describing the media's "societal purpose", Chomsky writes, "... the study of institutions and how they function must be scrupulously ignored, apart from fringe elements or a relatively obscure scholarly literature". The theory postulates five general classes of "filters" that determine the type of news that is presented in news media. These five classes are: ownership of the medium, the medium's funding sources, sourcing, flak, and anti-communism or "fear ideology".
The first three are generally regarded by the authors as being the most important. In versions published after the 9/11 attacks on the United States in 2001, Chomsky and Herman updated the fifth prong to instead refer to the "War on Terror" and "counter-terrorism", which they state operates in much the same manner.
Although the model was based mainly on the media of the United States, Chomsky and Herman believe the theory is equally applicable to any country that shares the basic economic structure and organizing principles that the model postulates as the cause of media biases. Their assessment has been supported by a number of scholars and the propaganda role of the media has since been empirically assessed in Western Europe and Latin America.
== Filters ==
=== Ownership ===
The size and profit-seeking imperative of dominant media corporations create a bias. The authors point to how in the early nineteenth century, a radical British press had emerged that addressed the concerns of workers, but excessive stamp duties, designed to restrict newspaper ownership to the 'respectable' wealthy, began to change the face of the press. Nevertheless, there remained a degree of diversity. In post World War II Britain, radical or worker-friendly newspapers such as the Daily Herald, News Chronicle, Sunday Citizen (all since failed or absorbed into other publications), and the Daily Mirror (at least until the late 1970s) regularly published articles questioning the capitalist system. The authors posit that these earlier radical papers were not constrained by corporate ownership and therefore, were free to criticize the capitalist system.
Herman and Chomsky argue that since mainstream media outlets are currently either large corporations or part of conglomerates (e.g. Westinghouse or General Electric), the information presented to the public will be biased with respect to these interests. Such conglomerates frequently extend beyond traditional media fields and thus have extensive financial interests that may be endangered when certain information is publicized. According to this reasoning, news items that most endanger the corporate financial interests of those who own the media will face the greatest bias and censorship.
It then follows that if maximizing profit means sacrificing news objectivity, then the news sources that ultimately survive must be fundamentally biased with regard to news in which they have a conflict of interest.
=== Advertising ===
The second filter of the propaganda model is funding generated through advertising. Most newspapers have to attract advertising in order to cover the costs of production; without it, they would have to increase the price of their newspaper. There is fierce competition throughout the media to attract advertisers; a newspaper which gets less advertising than its competitors is at a serious disadvantage. Lack of success in raising advertising revenue was another factor in the demise of the 'people's newspapers' of the nineteenth and twentieth centuries.
The product is composed of the affluent readers who buy the newspaper—who also comprise the educated decision-making sector of the population—while the actual clientele served by the newspaper includes the businesses that pay to advertise their goods. According to this filter, the news is "filler" whose function is to get privileged readers to see the advertisements that make up the real content, and it will thus take whatever form is most conducive to attracting educated decision-makers. Stories that conflict with their "buying mood", it is argued, will tend to be marginalized or excluded, along with information that presents a picture of the world that collides with advertisers' interests. The theory argues that the people buying the newspaper are the product which is sold to the businesses that buy advertising space; the news has only a marginal role as the product.
=== Sourcing ===
The third of Herman and Chomsky's five filters relates to the sourcing of mass media news: "The mass media are drawn into a symbiotic relationship with powerful sources of information by economic necessity and reciprocity of interest." Even large media corporations such as the BBC cannot afford to place reporters everywhere. They concentrate their resources where news stories are likely to happen: the White House, the Pentagon, 10 Downing Street and other central news "terminals". Although British newspapers may occasionally complain about the "spin-doctoring" of New Labour, for example, they are dependent upon the pronouncements of "the Prime Minister's personal spokesperson" for government news. Business corporations and trade organizations are also trusted sources of stories considered newsworthy. Editors and journalists who offend these powerful news sources, perhaps by questioning the veracity or bias of the furnished material, can be threatened with the denial of access to their media life-blood - fresh news. Thus, the media has become reluctant to run articles that will harm corporate interests that provide them with the resources that they depend upon.
This relationship also gives rise to a "moral division of labor" where "officials have and give the facts" and "reporters merely get them". Journalists are then supposed to adopt an uncritical attitude that makes it possible for them to accept corporate values without experiencing cognitive dissonance.
=== Flak ===
The fourth filter is 'flak' (not to be confused with flack which means promoters or publicity agents), described by Herman and Chomsky as 'negative responses to a media statement or [TV or radio] program. It may take the form of letters, telegrams, phone calls, petitions, lawsuits, speeches and Bills before Congress and other modes of complaint, threat and punitive action'. Business organizations regularly come together to form flak machines. An example is the US-based Global Climate Coalition (GCC), comprising fossil fuel and automobile companies such as Exxon, Texaco and Ford. The GCC was conceived by Burson-Marsteller, one of the world's largest public relations companies, to attack the credibility of climate scientists and 'scare stories' about global warming.
For Chomsky and Herman "flak" refers to negative responses to a media statement or program. The term "flak" has been used to describe what Chomsky and Herman see as efforts to discredit organizations or individuals who disagree with or cast doubt on the prevailing assumptions which Chomsky and Herman view as favorable to established power (e.g., "The Establishment"). Unlike the first three "filtering" mechanisms—which are derived from analysis of market mechanisms—flak is characterized by concerted efforts to manage public information.
=== Anti-Communism and fear ===
So I think when we talked about the "fifth filter" we should have brought in all this stuff -- the way artificial fears are created with a dual purpose... partly to get rid of people you don't like but partly to frighten the rest.
Because if people are frightened, they will accept authority.
The fifth and final news filter that Herman and Chomsky identified was 'anti-communism'. Manufacturing Consent was written during the Cold War. Chomsky later updated this filter to "fear", often focused on 'the enemy' or an 'evil dictator' such as Colonel Gaddafi, Paul Biya, Saddam Hussein, Slobodan Milosevic, or Vladimir Putin. This is exemplified in British tabloid headlines of 'Smash Saddam!' and 'Clobba Slobba!'. The same is said to extend to mainstream reporting of environmentalists as 'eco-terrorists'. The Sunday Times ran a series of articles in 1999 accusing activists from the non-violent direct action group Reclaim The Streets of stocking up on CS gas and stun guns.
Anti-ideologies exploit public fear and hatred of groups that pose a potential threat, either real, exaggerated or imagined. Communism once posed the primary threat according to the model. Communism and socialism were portrayed by their detractors as endangering freedoms of speech, movement, the press and so forth. They argue that such a portrayal was often used as a means to silence voices critical of elite interests. Chomsky argues that since the end of the Cold War (1991), anticommunism was replaced by the "War on Terror", as the major social control mechanism: "Anti-communism has receded as an ideological factor in the Western media, but it is not dead... The 'war on terror' has provided a useful substitute for the Soviet Menace." Following the events of September 11, 2001, some scholars agree that Islamophobia is replacing anti-communism as a new source of public fear. Herman and Chomsky noted, in an interview given in 2009, that the popularity of 'anti-communism' as a news filter is slowly decreasing in favor of other more contemporary ideologies such as 'anti-terrorism'.
== Case examples ==
Following the theoretical exposition of the propaganda model, Manufacturing Consent contains a large section where the authors seek to test their hypotheses. If the propaganda model is right and the filters do influence media content, a particular form of bias would be expected—one that systematically favors corporate interests.
They also looked at what they perceived as naturally occurring "historical control groups" where two events, similar in their properties but differing in the expected media attitude towards them, are contrasted using objective measures such as coverage of key events (measured in column inches) or editorials favoring a particular issue (measured in number).
=== Coverage of "enemy" countries ===
Examples of bias given by the authors include the failure of the media to question the legality of the Vietnam War while greatly emphasizing the Soviet–Afghan War as an act of aggression.
Other biases include a propensity to emphasize violent acts such as genocide more in enemy or unfriendly countries such as Kosovo while ignoring greater genocide in allied countries such as the Indonesian occupation of East Timor. This bias is also said to exist in foreign elections, giving favorable media coverage to fraudulent elections in allied countries such as El Salvador and Guatemala, while unfavorable coverage is given to legitimate elections in enemy countries such as Nicaragua.
Chomsky also asserts that the media accurately covered events such as the Battle of Fallujah but, because of an ideological bias, acted as pro-government propaganda. In describing coverage of the raid on Fallujah General Hospital, he stated that The New York Times "accurately recorded the battle of Fallujah but it was celebrated... it was a celebration of ongoing war crimes". The article in question was "Early Target of Offensive Is a Hospital".
=== Scandals of leaks ===
The authors point to biases that are based on only reporting scandals which benefit a section of power, while ignoring scandals that hurt the powerless. The biggest example of this was how the US media greatly covered the Watergate Scandal but ignored the COINTELPRO exposures. While the Watergate break-in was a political threat to powerful people (Democrats), COINTELPRO harmed average citizens and went as far as political assassination. Other examples include coverage of the Iran–Contra affair by only focusing on people in power such as Oliver North but omitting coverage of the civilians killed in Nicaragua as the result of aid to the Contras.
In a 2010 interview, Chomsky compared media coverage of the Afghan War Diaries and lack of media coverage of a study of severe health problems in Fallujah. While there was ample coverage of the Afghan War Diaries there was no American coverage of the Fallujah study, in which the health situation in Fallujah was described by the British media as "worse than Hiroshima".
== Applications ==
Since the publication of Manufacturing Consent, Herman and Chomsky have adopted the theory and have given it a prominent role in their writings, lectures and theoretical frameworks. Chomsky has made extensive use of its explanative power to lend support to his interpretations of mainstream media attitudes towards a wide array of events, including the following:
Gulf War (1990), the media's failure to report on Saddam's peace offers.
Iraq invasion (2003), the media's failure to report on the legality of the war despite overwhelming public opinion in favor of only invading Iraq with UN authorization. According to the liberal watchdog group Fairness and Accuracy In Reporting, there was a disproportionate focus on pro-war sources while total anti-war sources only made up 10% of the media (with only 3% of US sources being anti-war).
Global warming, a 2004 study found that the media gives near equal balance to people who deny climate change despite only "about one percent" of climate scientists taking this view. Chomsky commented that there are "three sides" on climate change (deniers, those who follow the scientific consensus, and people who think that the consensus underestimates the threat from global warming), but in framing the debate the media usually ignore people who say that the scientific consensus is unduly optimistic.
== Reception ==
On the rare occasions the propaganda model is discussed in the mainstream media there is usually a large reaction. In 1988, when Chomsky was interviewed by Bill Moyers, there were 1,000 letters in response, one of the biggest written reactions in the show's history. When he was interviewed by TV Ontario, the show generated 31,321 call-ins, which was a new record for the station.
In 1996, when Chomsky was interviewed by Andrew Marr the producer commented that the response was "astonishing". He commented that "[t]he audience reaction was astonishing... I have never worked on a programme which elicited so many letters and calls".
In May 2007, Chomsky and Herman spoke at the University of Windsor in Canada summarizing developments and responding to criticisms related to the model. Both authors stated they felt the propaganda model is still applicable (Herman said even more so than when it was introduced), although they did suggest a few areas where they believe it falls short and needs to be extended in light of recent developments.
Chomsky has insisted that while the propaganda role of the media "is intensified by ownership and advertising" the problem mostly lies with "ideological-doctrinal commitments that are part of intellectual life" or intellectual culture of the people in power. He compares the media to scholarly literature which he says has the same problems even without the constraints of the propaganda model.
At the Windsor talk, Chomsky pointed out that Edward S. Herman was primarily responsible for creating the theory although Chomsky supported it. According to Chomsky, he insisted Herman's name appear first on the cover of Manufacturing Consent because of his primary role researching and developing the theory.
=== Harvard media torture study ===
In April 2010, a study conducted by the Harvard Kennedy School showed that media outlets such as The New York Times and Los Angeles Times stopped using the term "torture" for waterboarding when the US government committed it, from 2002 to 2008. It also noted that the press was "much more likely to call waterboarding torture if a country other than the United States is the perpetrator." The study was similar to media studies done in Manufacturing Consent for topics such as comparing how the term "genocide" is used in the media when referring to allied and enemy countries. Glenn Greenwald said that "We don't need a state-run media because our media outlets volunteer for the task..." and commented that the media often act as propaganda for the government without coercion.
=== Studies of media outside the United States ===
Chomsky has commented in the "ChomskyChat Forum" on the applicability of the Propaganda Model to the media environment of other countries:
That's only rarely been done in any systematic way. There is work on the British media, by a good U[niversity] of Glasgow media group. And interesting work on British Central America coverage by Mark Curtis in his book Ambiguities of Power. There is work on France, done in Belgium mostly, also a recent book by Serge Halimi (editor of Le Monde diplomatique). There is one very careful study by a Dutch graduate student, applying the methods Ed Herman used in studying US media reaction to elections (El Salvador, Nicaragua) to 14 major European newspapers. ... Interesting results. Discussed a bit (along with some others) in a footnote in chapter 5 of my book Deterring Democracy, if you happen to have that around.
For more than a decade, a British-based website Media Lens has examined their domestic broadcasters and liberal press. Its criticisms are featured in the books Guardians of Power (2006) and Newspeak in the 21st Century (2009).
Studies have also expanded the propaganda model to examine news media in the People's Republic of China and for film production in Hollywood.
==== News of the World ====
In July 2011, the journalist Paul Mason, then working for the BBC, pointed out that the News International phone hacking scandal threw light on close links between the press and politicians. However, he argued that the closure of the mass-circulation newspaper News of the World, which took place after the scandal broke, conformed only partly to the propaganda model. He drew attention to the role of social media, saying that "large corporations pulled their advertising" because of the "scale of the social media response" (a response which was mainly to do with the newspaper's behaviour towards Milly Dowler, although Mason did not go into this level of detail).
Mason praised The Guardian for having told the truth about the phone-hacking, but expressed doubt about the financial viability of the newspaper.
One part of the Chomsky doctrine has been proven by exception. He stated that newspapers that told the truth could not make money. The Guardian...is indeed burning money and may run out of it in three years' time.
== Criticism ==
The propaganda model has received criticism, including accusations of being a conspiracy theory, being a solely structural model that does not "analyze the practical, mundane or organizational aspects of newsroom work", being analogous to the "gatekeeper model" of mass media, failing to "theorize audience effects", assuming "the existence of a unified ruling class", and being "highly deterministic".
=== The Anti-Chomsky Reader ===
Eli Lehrer of the American Enterprise Institute criticized the theory in The Anti-Chomsky Reader. According to Lehrer, the fact that papers like The New York Times and The Wall Street Journal have disagreements is evidence that the media is not a monolithic entity. Lehrer also believes that the media cannot have a corporate bias because it reports on and exposes corporate corruption. Lehrer asserts that the model amounts to a Marxist conception of right-wing false consciousness.
Herman and Chomsky have asserted that the media "is not a solid monolith" but that it represents a debate between powerful interests while ignoring perspectives that challenge the "fundamental premises" of all these interests. For instance, during the Vietnam War there was disagreement among the media over tactics, but the broader issue of the legality and legitimacy of the war was ignored (see Coverage of "enemy" countries). Chomsky has said that while the media are against corruption, they are not against society legally empowering corporate interests which is a reflection of the powerful interests that the model would predict. The authors have also said that the model does not seek to address "the effects of the media on the public" which might be ineffective at shaping public opinion. Edward Herman has said "critics failed to comprehend that the propaganda model is about how the media work, not how effective they are".
=== Inroads: A Journal of Opinion ===
Gareth Morley argues in an article in Inroads: A Journal of Opinion that widespread coverage of Israeli mistreatment of protesters as compared with little coverage of similar (or much worse) events in sub-Saharan Africa is poorly explained. This was in response to Chomsky's assertion that in testing the Model, examples should be carefully paired to control reasons for discrepancies not related to political bias. Chomsky himself cites the examples of government mis-treatment of protesters and points out that general coverage of the two areas compared should be similar, raising the point that they are not: news from Israel (in any form) is far more common than news from sub-Saharan Africa. Morley considers this approach dubiously empirical.
=== The New York Times review ===
Writing for The New York Times, the historian Walter LaFeber criticized the book Manufacturing Consent for overstating its case, in particular with regards to reporting on Nicaragua and not adequately explaining how a powerful propaganda system would let military aid to the Contra rebels be blocked. Herman responded in a letter by stating that the system was not "all powerful" and that LaFeber did not address their main point regarding Nicaragua. LaFeber replied that:
Mr. Herman wants to have it both ways: to claim that leading American journals "mobilize bias" but object when I cite crucial examples that weaken the book's thesis. If the news media are so unqualifiedly bad, the book should at least explain why so many publications (including my own) can cite their stories to attack President Reagan's Central American policy.
Chomsky responds to LaFeber's reply in Necessary Illusions:
What is more, a propaganda model is not weakened by the discovery that with careful and critical reading, material could be unearthed in the media that could be used by those that objected to "President Reagan's Central American policy" on grounds of principle, opposing not its failures but its successes: the near destruction of Nicaragua and the blunting of the popular forces that threatened to bring democracy and social reform to El Salvador, among other achievements.
== See also ==
Concentration of media ownership
Corporate censorship
Crowd manipulation
Edward Bernays
Fairness and Accuracy in Reporting
Gatekeeping (communication)
Mass psychology
Politico-media complex
Propaganda
Propaganda in the United States
Propaganda techniques
Spin (public relations)
== References ==
=== Notes ===
=== Bibliography ===
== External links ==
The Propaganda Model Revisited by Edward S. Herman, 1996
The Propaganda Model: An Overview by David Cromwell, 2002
Klaehn, Jeffery (2002). "A Critical Review and Assessment of Herman and Chomsky's Propaganda Model". European Journal of Communication. 17 (2): 147–182. doi:10.1177/0267323102017002691. S2CID 51778637. As PDF
The Propaganda Model: A Retrospective by Edward S. Herman, 2003
Media, Power and the Origins of the Propaganda Model: An Interview with Edward S. Herman by Jeffery Klaehn, 2008
"The Herman-Chomsky Propaganda Model Twenty Years On". Westminster Papers in Communication and Culture. 6 (2). 2009.
Robertson, John W. (2011). "The Propaganda Model in 2011: Stronger Yet Still Neglected in UK Higher Education?" (PDF). Synaesthesia: Communication Across Cultures. 1 (1). ISSN 1883-5953.
Pedro, Joan (2011). "The Propaganda Model in the Early 21st Century: Part I". International Journal of Communication. 5: 1865–1905. ISSN 1932-8036. Archived from the original on 2011-12-29. Retrieved 2011-12-24. Part II
Propaganda Model Resource List at Source Watch
Herman, Edward S.; Chomsky, Noam (2010). Manufacturing Consent: The Political Economy of the Mass Media. Random House. ISBN 978-1-4070-5405-6.
=== Online videos ===
Manufacturing Consent, The Propaganda Model, 1992
"Noam Chomsky - The Political Economy of the Mass Media - Part 1"
"The Myth of the Liberal Media: The Propaganda Model of News"
Chomsky "Media" interview by Andrew Marr, The Big Idea, 1996
Noam Chomsky in conversation with Jonathan Freedland. British Library exhibition: Propaganda, Power and Persuasion. March 19, 2003.
Noam Chomsky: The 5 Filters of the Mass Media Machine | Wikipedia/Propaganda_model |
Religious conversion is the adoption of a set of beliefs identified with one particular religious denomination to the exclusion of others. Thus "religious conversion" would describe the abandoning of adherence to one denomination and affiliating with another. This might be from one to another denomination within the same religion, for example, from Protestant Christianity to Roman Catholicism or from Shi'a Islam to Sunni Islam. In some cases, religious conversion "marks a transformation of religious identity and is symbolized by special rituals".
People convert to a different religion for various reasons, including active conversion by free choice due to a change in beliefs, secondary conversion, deathbed conversion, conversion for convenience, marital conversion, and forced conversion. Religious conversion can also be driven by practical considerations. Historically, people have converted to evade taxes, to escape military service or to gain political representation.
Proselytism is the act of attempting to convert by persuasion another individual from a different religion or belief system. Apostate is a term used by members of a religion or denomination to refer to someone who has left that religion or denomination.
== Religion and proselytization ==
The religions of the world are divided into two groups: those that actively seek new followers (missionary religions) and those that do not (non-missionary religions). This classification dates back to a lecture given by Max Müller in 1873, and is based on whether or not a religion seeks to gain new converts. The three main religions classified as missionary religions are Buddhism, Christianity, and Islam, while the non-missionary religions include Judaism, Zoroastrianism, and Hinduism. Other religions, such as Primal Religions, Confucianism, and Taoism, may also be considered non-missionary religions.
== Abrahamic religions ==
=== Baháʼí Faith ===
In sharing their faith with others, Baháʼís are cautioned to "obtain a hearing" – meaning to make sure the person they are proposing to teach is open to hearing what they have to say. "Baháʼí pioneers", rather than attempting to supplant the cultural underpinnings of the people in their adopted communities, are encouraged to integrate into the society and apply Baháʼí principles in living and working with their neighbors.
Baháʼís recognize the divine origins of all revealed religion, and believe that these religions occurred sequentially as part of a divine plan (see Progressive revelation), with each new revelation superseding and fulfilling that of its predecessors. Baháʼís regard their own faith as the most recent (but not the last), and believe its teachings – which are centered around the principle of the oneness of humanity – are most suited to meeting the needs of a global community.
In most countries conversion is a simple matter of filling out a card stating a declaration of belief. This includes acknowledgement of Bahá'u'llah – the Founder of the Faith – as the Messenger of God for this age, awareness and acceptance of his teachings, and intention to be obedient to the institutions and laws he established.
Conversion to the Baháʼí Faith carries with it an explicit belief in the common foundation of all revealed religion, a commitment to the unity of mankind, and active service to the community at large, especially in areas that will foster unity and concord. Since the Baháʼí Faith has no clergy, converts are encouraged to be active in all aspects of community life. Even a recent convert may be elected to serve on a local Spiritual Assembly – the guiding Baháʼí institution at the community level.
=== Christianity ===
Within Christianity conversion refers variously to three different phenomena: a person becoming Christian who was previously not Christian; a Christian moving from one Christian denomination to another; a particular spiritual development, sometimes called the "second conversion", or "the conversion of the baptised".
Conversion to Christianity is the religious conversion of a previously non-Christian person to some form of Christianity. Some Christian sects require full conversion for new members regardless of any history in other Christian sects, or from certain other sects. The exact requirements vary between different churches and denominations. Baptism is traditionally seen as a sacrament of admission to Christianity. Christian baptism has some parallels with Jewish immersion by mikvah.
In the New Testament, Jesus commanded his disciples in the Great Commission to "go and make disciples of all nations". Evangelization – sharing the Gospel message or "Good News" in deed and word, is an expectation of Christians.
Conversions to Christianity have been widespread. Even Christian communities not known for proselytization, such as the Armenian Apostolic Church, are known to have accepted converts among Muslims, Yazidis, and Jews in the nineteenth century.
==== Comparison between Protestant denominations ====
While Calvinism is monergistic, like Lutheranism, its monergism operates through the inner calling of the Holy Spirit, which is irresistible according to that tradition. Lutheranism, on the other hand, is monergistic through the means of grace, and holds the Word to be resistible. The Arminian view on salvation, unlike the other two, is synergistic, and considers salvation resistible due to the common grace of free will.
==== Latter Day Saint movement ====
Much of the theology of Latter Day Saint baptism was established during the early Latter Day Saint movement founded by Joseph Smith, Jr. According to this theology, baptism must be by immersion, for the remission of sins (meaning that through baptism, past sins are forgiven), and occurs after one has shown faith and repentance. Mormon baptism does not purport to remit any sins other than personal ones, as adherents do not believe in original sin. Latter Day Saints baptisms also occur only after an "age of accountability" which is defined as the age of eight years. The theology thus rejects infant baptism.
In addition, Latter Day Saint theology requires that baptism may only be performed with one who has been called and ordained by God with priesthood authority. Because the churches of the Latter Day Saint movement operate under a lay priesthood, children raised in a Mormon family are usually baptized by a father or close male friend or family member who has achieved the office of priest, which is conferred upon worthy male members at least 16 years old in the LDS Church.
Baptism is seen as symbolic both of Jesus' death, burial and resurrection and is also symbolic of the baptized individual putting off of the natural or sinful man and becoming spiritually reborn as a disciple of Jesus.
Membership into a Latter Day Saint church is granted only by baptism whether or not a person has been raised in the church. Latter Day Saint churches do not recognize baptisms of other faiths as valid because they believe baptisms must be performed under the church's unique authority. Thus, all who come into one of the Latter Day Saint faiths as converts are baptized, even if they have previously received baptism in another faith.
When performing a Baptism, Latter Day Saints say the following prayer before performing the ordinance:
Having been commissioned of Jesus Christ, I baptize you in the name of the Father, and of the Son, and of the Holy Ghost. Amen.
Baptisms inside and outside the temples are usually done in a baptistry, although they can be performed in any body of water in which the person may be completely immersed. The person administering the baptism must recite the prayer exactly, and immerse every part, limb, hair and clothing of the person being baptized. If there are any mistakes, or if any part of the person being baptized is not fully immersed, the baptism must be redone. In addition to the baptizer, two members of the church witness the baptism to ensure that it is performed properly.
Following baptism, Latter Day Saints receive the Gift of the Holy Ghost by the laying on of hands of a Melchizedek Priesthood holder.
Latter Day Saints hold that one may be baptized after death through the vicarious act of a living individual. Members of The Church of Jesus Christ of Latter-day Saints with a valid temple recommend (beginning in the year they turn twelve, and after being ordained to the Aaronic Priesthood for men and boys) have the opportunity to practice baptism for the dead as a missionary ritual. However, individuals for whom such baptisms are performed are not counted in figures regarding church membership statistics, such as total membership in the church, or the number of convert baptisms in a given year. Other churches of the Latter Day Saint movement also perform baptisms for the dead. This doctrine, in combination with others regarding the time between an individual's death and resurrection, also explains what happens to the righteous non-believer and the unevangelized by providing a post-mortem means of repentance and salvation.
=== Islam ===
Converting to Islam requires one to declare the shahādah, the Muslim profession of faith ("there is no god but God; Muhammad is the messenger of God"). According to Clinton Bennett, British–American scholar of Religious studies, one's declaration of the Muslim profession of faith does not imply faith in God alone, since the conversion to Islam includes other distinct Islamic beliefs as well as part of the Muslim creed (ʿaqīdah):
Technically, the Shahadah (first pillar) is the only obligatory statement of faith in Islam; however, over time a list of six items evolved, the essentials of faith (Iman Mufassal), namely: belief in God, in God's angels, scriptures, messengers, day of judgment, and God's power.
In the Islamic religion, it is believed that everyone is Muslim at birth. Due to this, those who convert are typically referred to as reverts. In Islam, the practice of religious circumcision is considered a sunnah custom, not a requirement for conversion, and furthermore it is never mentioned in the Quran. The majority of clerical opinions holds that circumcision is not required upon entering the Muslim faith. In the Sunnī branch of Islam, the Shāfiʿī and Ḥanbalī schools regard both male and female circumcision as legally obligatory for Muslims, while the Mālikī and Ḥanafī schools regard it as non-binding and only recommended for both sexes.
=== Judaism ===
Conversion to Judaism is the religious conversion of non-Jews to become members of the Jewish religion and Jewish ethnoreligious community. The procedure and requirements for conversion depend on the sponsoring denomination. A conversion in accordance with the process of a denomination is not a guarantee of recognition by another denomination. A formal conversion is also sometimes undertaken by individuals whose Jewish ancestry is questioned, even if they were raised Jewish, but may not actually be considered Jews according to traditional Jewish law.
As late as the 6th and 7th centuries, the Eastern Roman Empire and Caliph Umar ibn Khattab were issuing decrees against conversion to Judaism, implying that such conversions were still occurring.
=== Spiritism ===
There are no rituals or dogmas, nor any sort of procedures in conversion to Spiritism. The doctrine is first considered as science, then philosophy and lastly as a religion. Allan Kardec's codification of Spiritism occurred between the years 1857 and 1868. Currently there are 25 to 60 million people studying Spiritism in various countries, mainly in Brazil, through its essential books, which include The Spirits Book, The Book on Mediums, The Gospel According to Spiritism, Heaven and Hell and The Genesis According to Spiritism.
Chico Xavier wrote over 490 additional books, which expand on the spiritualist doctrine.
As explained in the first of the 1,019 questions and answers in The Spirits Book:
1. What is God? Answer: "God is the Supreme Intelligence-First Cause of all things."
The consensus in Spiritism is that God, the Great Creator, is above everything, including all human things such as rituals, dogmas, denominations or any other thing.
== Dharmic religions ==
=== Buddhism ===
Persons newly adhering to Buddhism traditionally take the Three Refuges (expressing faith in the Three Jewels – Buddha, Dhamma, and Sangha) before a monk, nun, or similar representative, often with the sangha, the community of practitioners, also in ritual attendance.
Throughout the timeline of Buddhism, conversions of entire countries and regions to Buddhism were frequent, as Buddhism spread throughout Asia. For example, in the 11th century in Burma, king Anoratha converted his entire country to Theravada Buddhism. At the end of the 12th century, Jayavarman VII set the stage for conversion of the Khmer people to Theravada Buddhism. Mass conversions of areas and communities to Buddhism occur up to the present day, for example, in the Dalit Buddhist movement in India there have been organized mass conversions.
Exceptions to encouraging conversion may occur in some Buddhist movements. In Tibetan Buddhism, for example, the current Dalai Lama discourages active attempts to win converts.
=== Hinduism ===
Hinduism is a diverse system of thought with beliefs spanning monotheism, polytheism, panentheism, pantheism, pandeism, monism, and atheism among others. Hinduism has no traditional ecclesiastical order, no centralized religious authorities, no universally accepted governing body, no binding holy book nor any mandatory prayer attendance requirements. In its diffuse and open structure, numerous schools and sects of Hinduism have developed and spun off in India with help from its ascetic scholars, since the Vedic age. The six Astika and two Nastika schools of Hindu philosophy, in its history, did not develop a missionary or proselytization methodology, and they co-existed with each other. Most Hindu sub-schools and sects do not actively seek converts. Individuals have had a choice to enter, leave or change their god(s), spiritual convictions, accept or discard any rituals and practices, and pursue spiritual knowledge and liberation (moksha) in different ways. However, various schools of Hinduism do have some core common beliefs, such as the belief that all living beings have Atman (soul), a belief in karma theory, spirituality, ahimsa (non-violence) as the greatest dharma or virtue, and others.
Religious conversion to Hinduism has a long history outside India. Merchants and traders of India, particularly from Indian peninsula, carried their religious ideas, which led to religious conversions to Hinduism in Indonesia, Champa, Cambodia and Burma. Some sects of Hindus, particularly of the Bhakti schools began seeking or accepting converts in early to mid 20th century. For example, groups like the International Society for Krishna Consciousness accept those who have a desire to follow their sects of Hinduism and have their own religious conversion procedure.
Since 1800 CE, religious conversion from and to Hinduism has been a controversial subject within Hinduism. Some have suggested that the concept of missionary conversion, either way, is contrary to the precepts of Hinduism. Religious leaders of some of Hinduism sects such as Brahmo Samaj have seen Hinduism as a non-missionary religion yet welcomed new members, while other leaders of Hinduism's diverse schools have stated that with the arrival of missionary Islam and Christianity in India, the view that "there is no such thing as proselytism in Hinduism" must be re-examined.
In recent decades, mainstream Hinduism schools have attempted to systematize ways to accept religious converts, with an increase in inter-religious mixed marriages. The steps involved in becoming a Hindu have variously included a period where the interested person gets an informal ardha-Hindu name and studies ancient literature on spiritual path and practices (English translations of Upanishads, Agama, Itihasa, ethics in Sutra, Hindu festivals, yoga). If after a period of study, the individual still wants to convert, a Namakarana Samskara ceremony is held, where the individual adopts a traditional Hindu name. The initiation ceremony may also include Yajna (i.e., fire ritual with Sanskrit hymns) under guidance of a local Hindu priest. Some of these places are mathas and asramas (hermitage, monastery), where one or more gurus (spiritual guide) conduct the conversion and offer spiritual discussions. Some schools encourage the new convert to learn and participate in community activities such as festivals (Diwali etc.), read and discuss ancient literature, learn and engage in rites of passages (ceremonies of birth, first feeding, first learning day, age of majority, wedding, cremation and others).
=== Jainism ===
Jainism accepts anyone who wants to embrace the religion. There is no specific ritual for becoming a Jain. One does not need to ask any authorities for admission. One becomes a Jain on one's own by observing the five vows (vratas). The five main vows, as mentioned in ancient Jain texts such as the Tattvarthasutra, are:
Ahimsa - Not to injure any living being by actions and thoughts.
Satya - Not to lie or speak words that hurt others.
Asteya - Not to take anything if not given.
Brahmacharya - Chastity for householders / Celibacy in action, words and thoughts for monks and nuns.
Aparigraha (Non-possession) - non-attachment to possessions.
Following the five vows is the main requirement in Jainism. All other aspects such as visiting temples are secondary. Jain monks and nuns are required to observe these five vows strictly.
=== Sikhism ===
Sikhism is not known to openly proselytize for converts; however, it is open and accepting of anyone wanting to take on the Sikh faith.
== Other religions and sects ==
In the second half of the 20th century, the rapid growth of new religious movements (NRMs) led some psychologists and other scholars to propose that these groups were using "brainwashing" or "mind control" techniques to gain converts. This theory was publicized by the popular news media but disputed by other scholars, including some sociologists of religion.
In the 1960s sociologist John Lofland lived with Unification Church missionary Young Oon Kim and a small group of American church members in California and studied their activities in trying to promote their beliefs and win converts to their church. Lofland noted that most of their efforts were ineffective and that most of the people who joined did so because of personal relationships – often family relationships – with existing members. Lofland summarised his findings in 1964 in a doctoral thesis entitled "The World Savers: A Field Study of Cult Processes", and in 1966 in book form (published by Prentice-Hall) as Doomsday Cult: A Study of Conversion, Proselytization, and Maintenance of Faith. It is considered to be one of the most important and widely cited studies of the process of religious conversion, and one of the first modern sociological studies of a new religious movement.
The Church of Scientology attempts to gain converts by offering "free stress tests". It has also used the celebrity status of some of its members (most notably that of the American actor Tom Cruise) to attract converts. The Church of Scientology requires that all converts sign a legal waiver which covers their relationship with the Church of Scientology before engaging in Scientology services.
Research in the United States and in the Netherlands has shown a positive correlation between areas lacking mainstream churches and the percentage of people who are members of a new religious movement. This applies also for the presence of New Age centres.
On the other end of the proselytising scale are religions that do not accept any converts. Often these are relatively small, close-knit minority religions that are ethnically based, such as the Yazidis, Druze, and Mandaeans. The Parsis, a Zoroastrian group based in India, classically do not accept converts, but this issue became controversial in the 20th century due to a rapid decline in membership. Chinese traditional religion lacks clear criteria for membership, and hence for conversion. However, Taoism does have its own religious conversion ceremony, which seems to have been adopted and modified from Chinese Buddhist refuge-taking ceremonies. The Shakers and some Indian eunuch brotherhoods do not allow procreation, so that every member is a convert.
== Fostering conversion ==
Different factors and circumstances may operate and interact to persuade individuals of groups to convert and adopt a new set of religious doctrines and habits.
Religious enthusiasm for proselytism can play a role. For example, the New Testament chronicles the personal activities of the Apostles and their followers in inspired preaching, miracle-working and the subsequent gathering of followers. Freshly-converted Irish and Anglo-Saxon priests spread their new-found faith among pagan British and Germanic peoples. Missions of the 19th century spread against a background of North Atlantic revivalism with its emotionalism and mass-meeting crowd psychological behaviours.
Messianism may prepare groups for the coming of a Messiah or of a saviour. Thus the 1st-century Levant, steeped in expectations of overturning the political situation, provided fertile ground for nascent Christianity and other Jewish messianic sects, such as the Zealots.
Some religious traditions, rather than stressing emotion in the conversion process, emphasise the importance of philosophical thought as a pathway to adopting a new religion. Saint Paul in Athens fits here, as do some of the Indic religions (such as Buddhism and Jainism). The historical God-fearers may represent a philosophical bridge between Hellenism and Abrahamic faith.
A religious creed which can capture the ear and support of secular power can become a prestige movement, encouraging the mass of a people to follow its tenets. Christianity grew after becoming the state religion in Armenia, in the Roman Empire, and in Ethiopia. Eastern Orthodoxy expanded when it gained official sanction in Kievan Rus'.
Some people convert under the influence of other social conditions. Early Christianity attracted followers by offering community material support and enhanced status for disadvantaged groups such as women and slaves. Islam allegedly spread in North Africa through just administration, and in the Balkans by integrating new believers with improved tax conditions and social prestige. Colonial missions since the 19th century have attracted people to an implied nexus of material well-being, civilisation, and European-style religion.
Force can – at least apparently – coerce people into adopting different ideas. Religious police in (for example) Iran and Saudi Arabia answer for the correct religious expression of those in their purview. The Inquisition in France and in Iberia worked to convert heretics – with varying success. Frankish armies spread Roman Catholicism eastwards in the Middle Ages. Religious wars and suppression shaped the histories of the Baltic tribes, the Hussites and the Huguenots.
On the other hand, persecution can drive religious faith and practice underground and strengthen the resolve of oppressed adherents – as in the cases of the Waldenses or the Baháʼí Faith.
== International law ==
The United Nations Universal Declaration of Human Rights defines religious conversion as a human right: "Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief" (Article 18). Despite this UN-declared human right, some groups forbid or restrict religious conversion (see below).
Based on the declaration the United Nations Commission on Human Rights (UNCHR) drafted the International Covenant on Civil and Political Rights, a legally binding treaty. It states that "Everyone shall have the right to freedom of thought, conscience and religion. This right shall include freedom to have or to adopt a religion or belief of his choice" (Article 18.1). "No one shall be subject to coercion which would impair his freedom to have or to adopt a religion or belief of his choice" (Article 18.2).
The UNCHR issued a General Comment on this Article in 1993: "The Committee observes that the freedom to 'have or to adopt' a religion or belief necessarily entails the freedom to choose a religion or belief, including the right to replace one's current religion or belief with another or to adopt atheistic views ... Article 18.2 bars coercion that would impair the right to have or adopt a religion or belief, including the use of threat of physical force or penal sanctions to compel believers or non-believers to adhere to their religious beliefs and congregations, to recant their religion or belief or to convert." (CCPR/C/21/Rev.1/Add.4, General Comment No. 22.; emphasis added)
Some countries distinguish voluntary, motivated conversion from organized proselytism, attempting to restrict the latter. The boundary between them is not easily defined: what one person considers legitimate evangelizing, or witness-bearing, another may consider intrusive and improper. Illustrating the problems that can arise from such subjective viewpoints is this extract from an article by C. Davis, published in Cleveland State University's Journal of Law and Health: "According to the Union of American Hebrew Congregations, Jews for Jesus and Hebrew Christians constitute two of the most dangerous cults, and its members are appropriate candidates for deprogramming. Anti-cult evangelicals ... protest that 'aggressiveness and proselytizing ... are basic to authentic Christianity,' and that Jews for Jesus and Campus Crusade for Christ are not to be labeled as cults. Furthermore, certain Hassidic groups who physically attacked a meeting of the Hebrew Christian 'cult' have themselves been labeled a 'cult' and equated with the followers of Reverend Moon, by none other than the President of the Central Conference of American Rabbis."
Since the collapse of the former Soviet Union the Russian Orthodox Church has enjoyed a revival. However, it takes exception to what it considers illegitimate proselytizing by the Roman Catholic Church, the Salvation Army, Jehovah's Witnesses, and other religious movements in what it refers to as its canonical territory.
Greece has a long history of conflict, mostly with Jehovah's Witnesses, but also with some Pentecostals, over its laws on proselytism. This situation stems from a law passed in the 1930s by the dictator Ioannis Metaxas. A Jehovah's Witness, Minos Kokkinakis, won the equivalent of $14,400 in damages from the Greek state after being arrested for trying to preach his faith from door to door. In another case, Larissis v. Greece, a member of the Pentecostal church also won a case in the European Court of Human Rights.
== See also ==
== References ==
== Further reading ==
Barker, Eileen The Making of a Moonie: Choice or Brainwashing? (1984)
Barrett, D. V. The New Believers: A survey of sects, cults and alternative religions (2001) UK, Cassell & Co ISBN 0-304-35592-5
Buckser, A. S. and S. D. Glazier. eds. The Anthropology of Religious Conversion Rowman and Littlefield, 2003
Cooper, Richard S. "The Assessment and Collection of Kharaj Tax in Medieval Egypt" Journal of the American Oriental Society, Vol. 96, No. 3. (Jul–Sep., 1976), pp. 365–382.
Curtin, Phillip D. Cross-Cultural Trade in World History. Cambridge University Press, 1984.
Hoiberg, Dale, and Indu Ramachandran. Students' Britannica India. Popular Prakashan, 2000.
Idris, Gaefar, Sheikh. The Process of Islamization. Plainfield, Ind.: Muslim Students' Association of the U.S. and Canada, 1977. vi, 20 p. Without ISBN
James, William, The varieties of religious experience: a study in human nature. Being the Gifford lectures on natural religion delivered at Edinburgh in 1901–1902; Longmans, Green & Co, New York (1902)
Morris, Harold C., and Lin M. Morris. "Power and purpose: Correlates to conversion." Psychology: A Journal of Human Behavior, Vol 15(4), Nov–Dec 1978, 15–22.
Rambo, Lewis R. Understanding Religious Conversion. Yale University Press, 1993.
Rambo, Lewis R., & Farhadian, Charles. Oxford Handbook of Religious Conversion. Oxford University Press, 2014.
Ramstedt, Martin. Hinduism in Modern Indonesia: A Minority Religion Between Local, National, and Global Interests. Routledge, 2004.
Rawat, Ajay S. Man and Forests: The Khatta and Gujjar Settlements of Sub-Himalayan Tarai. Indus Publishing, 1993.
Vasu, Srisa Chandra (1919), The Catechism Of Hindu Dharma, New York: Kessinger Publishing, LLC
Jain, Vijay K. (2011), Tattvârthsûtra (1st ed.), (Uttarakhand) India: Vikalp Printers, ISBN 978-81-903639-2-1, Non-Copyright
Sangave, Vilas Adinath (2001), Aspects of Jaina religion (3rd ed.), Bharatiya Jnanpith, ISBN 81-263-0626-2
== External links ==
Quotations related to religious conversion at Wikiquote
"Conversion: A Family Affair", Craig Harline, Berfrois, 4 October 2011 | Wikipedia/Religious_conversion |
The following is a list of notable unsolved problems grouped into broad areas of physics.
Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail.
There are still some questions beyond the Standard Model of physics, such as the strong CP problem, neutrino mass, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself—the Standard Model is inconsistent with that of general relativity, to the point that one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and the centres of black holes beyond the event horizon).
== General physics ==
Theory of everything: Is there a singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe?
Dimensionless physical constants: At the present time, the values of various dimensionless physical constants cannot be calculated; they can be determined only by physical measurement. What is the minimum number of dimensionless physical constants from which all other dimensionless physical constants can be derived? Are dimensional physical constants necessary at all?
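As a concrete illustration (an example added here for clarity, not part of the original list), the fine-structure constant is a dimensionless combination of other constants whose numerical value is known only from measurement:
{\displaystyle \alpha ={\frac {e^{2}}{4\pi \varepsilon _{0}\hbar c}}\approx {\frac {1}{137.035999}},}
and no accepted theory currently derives this value from first principles.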
== Quantum gravity ==
Quantum gravity: Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)? Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very small or very large scales or in other extreme circumstances that flow from a quantum gravity mechanism?
Black holes, black hole information paradox, and black hole radiation: Do black holes produce thermal radiation, as expected on theoretical grounds? Does this radiation contain information about their inner structure, as suggested by gauge–gravity duality, or not, as implied by Hawking's original calculation? If not, and black holes can evaporate away, what happens to the information stored in them (since quantum mechanics does not provide for the destruction of information)? Or does the radiation stop at some point, leaving black hole remnants? Is there another way to probe their internal structure somehow, if such a structure even exists?
The cosmic censorship hypothesis and the chronology protection conjecture: Can singularities not hidden behind an event horizon, known as "naked singularities", arise from realistic initial conditions, or is it possible to prove some version of the "cosmic censorship hypothesis" of Roger Penrose which proposes that this is impossible? Similarly, will the closed timelike curves which arise in some solutions to the equations of general relativity (and which imply the possibility of backwards time travel) be ruled out by a theory of quantum gravity which unites general relativity with quantum mechanics, as suggested by the "chronology protection conjecture" of Stephen Hawking?
Holographic principle: Is it true that quantum gravity admits a lower-dimensional description that does not contain gravity? A well-understood example of holography is the AdS/CFT correspondence in string theory. Similarly, can quantum gravity in a de Sitter space be understood using dS/CFT correspondence? Can the AdS/CFT correspondence be vastly generalized to the gauge–gravity duality for arbitrary asymptotic spacetime backgrounds? Are there other theories of quantum gravity other than string theory that admit a holographic description?
Quantum spacetime or the emergence of spacetime: Is the nature of spacetime at the Planck scale very different from the continuous classical dynamical spacetime of general relativity? In loop quantum gravity, spacetime is postulated to be discrete from the beginning. In string theory, although spacetime was originally treated just as in general relativity (with the only difference being supersymmetry), recent research building upon the Ryu–Takayanagi conjecture suggests that spacetime in string theory is emergent, arising from quantum-information-theoretic concepts such as entanglement entropy in the AdS/CFT correspondence. However, exactly how the familiar classical spacetime emerges within string theory or the AdS/CFT correspondence is still not well understood.
Problem of time: In quantum mechanics, time is a classical background parameter, and the flow of time is universal and absolute. In general relativity, time is one component of four-dimensional spacetime, and the flow of time changes depending on the curvature of spacetime and the spacetime trajectory of the observer. How can these two concepts of time be reconciled?
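One standard way to exhibit this tension (a schematic illustration, not a proposed resolution) is the Wheeler–DeWitt equation of canonical quantum gravity, in which the Hamiltonian constraint leaves no external time parameter at all:
{\displaystyle {\hat {H}}\,\Psi [h_{ij},\phi ]=0,}
in contrast to the Schrödinger equation {\displaystyle i\hbar \,\partial _{t}\psi ={\hat {H}}\psi }, where {\displaystyle t} is an external classical parameter. The wave functional {\displaystyle \Psi } appears "frozen", and explaining how ordinary time evolution emerges from it is part of the problem of time.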
== Quantum physics ==
Yang–Mills theory: Given an arbitrary compact gauge group, does a non-trivial quantum Yang–Mills theory with a finite mass gap exist? (This problem is also listed as one of the Millennium Prize Problems in mathematics.)
Quantum field theory (this is a generalization of the previous problem): Is it possible to construct, in a mathematically rigorous way, a quantum field theory in 4-dimensional spacetime that includes interactions and does not resort to perturbative methods?
== Cosmology and general relativity ==
Cosmic inflation: Is the theory of cosmic inflation in the very early universe correct, and, if so, what are the details of this epoch? What is the hypothetical inflaton scalar field that gave rise to this cosmic inflation? If inflation happened at one point, is it self-sustaining through inflation of quantum-mechanical fluctuations, and thus ongoing in some extremely distant place?
Horizon problem: Why is the distant universe so homogeneous when the Big Bang theory seems to predict larger measurable anisotropies of the night sky than those observed? Cosmological inflation is generally accepted as the solution, but are other possible explanations such as a variable speed of light more appropriate?
Origin and future of the universe: How did the conditions for anything to exist arise? Is the universe heading towards a Big Freeze, a Big Rip, a Big Crunch, or a Big Bounce?
Size of universe: The diameter of the observable universe is about 93 billion light-years, but what is the size of the whole universe? Is the universe infinite?
Matter–antimatter asymmetry: Theoretical models suggest that the early universe should have produced equal amounts of matter and antimatter. However, observations indicate no significant primordial antimatter. Understanding the mechanisms that led to this asymmetry is a major unsolved problem in physics.
Cosmological principle: Is the universe homogeneous and isotropic at large enough scales, as claimed by the cosmological principle and assumed by all models that use the Friedmann–Lemaître–Robertson–Walker metric, including the current version of the ΛCDM model, or is the universe inhomogeneous or anisotropic? Is the CMB dipole purely kinematic, or does it signal anisotropy of the universe, resulting in the breakdown of the FLRW metric and the cosmological principle? Is the Hubble tension evidence that the cosmological principle is false? Even if the cosmological principle is correct, is the Friedmann–Lemaître–Robertson–Walker metric the right metric to use for our universe? Are the observations usually interpreted as the accelerating expansion of the universe rightly interpreted, or are they instead evidence that the cosmological principle is false?
Cosmological constant problem: Why does the zero-point energy of the vacuum not cause a large cosmological constant? What cancels it out?
Dark matter: What is the identity of dark matter? Is it a particle? If so, is it a WIMP, axion, the lightest superpartner (LSP), or some other particle? Or, are the phenomena attributed to dark matter the result of an alternate theory of gravity separate from general relativity altogether? Despite extensive research, the exact composition of dark matter remains unknown. It is inferred from gravitational effects on visible matter, radiation, and the universe's large-scale structure. Understanding its properties is crucial for a comprehensive understanding of the universe.
Dark energy: What is the cause of the observed accelerating expansion of the universe (the de Sitter phase)? Are the observations rightly interpreted as the accelerating expansion of the universe, or are they evidence that the cosmological principle is false? Why is the energy density of the dark energy component of the same magnitude as the density of matter at present when the two evolve quite differently over time; could it be simply that we are observing at exactly the right time? Is dark energy a pure cosmological constant or are models of quintessence such as phantom energy applicable?
Dark flow: Is a non-spherically symmetric gravitational pull from outside the observable universe responsible for some of the observed motion of large objects such as galactic clusters in the universe?
Shape of the universe: What is the 3-manifold of comoving space, i.e., of a comoving spatial section of the universe, informally called the "shape" of the universe? Neither the curvature nor the topology is presently known, though the curvature is known to be "close" to zero on observable scales. Is the shape unmeasurable; the Poincaré space; or another 3-manifold?
Extra dimensions: Does nature have more than four spacetime dimensions? If so, what is their size? Are dimensions a fundamental property of the universe or an emergent result of other physical laws? Can we experimentally observe evidence of higher spatial dimensions?
== High-energy/particle physics ==
Hierarchy problem: Why is gravity such a weak force? It becomes strong for particles only at the Planck scale, around 10^19 GeV, much above the electroweak scale (100 GeV, the energy scale dominating physics at low energies); why are these scales so different from each other? What prevents quantities at the electroweak scale, such as the Higgs boson mass, from getting quantum corrections on the order of the Planck scale? Is the solution supersymmetry, extra dimensions, or just anthropic fine-tuning?
Magnetic monopoles: Did particles that carry "magnetic charge" exist in some past, higher-energy epoch? If so, do any remain today? (Paul Dirac showed the existence of some types of magnetic monopoles would explain charge quantization.)
Neutron lifetime puzzle: While the neutron lifetime has been studied for decades, there currently exists a lack of consilience on its exact value, due to different results from two experimental methods ("bottle" versus "beam").
Proton decay and spin crisis: Is the proton fundamentally stable? Or does it decay with a finite lifetime as predicted by some extensions to the standard model? How do the quarks and gluons carry the spin of protons?
Grand Unification: Are the electromagnetic and nuclear forces different aspects of a Grand Unified Theory? If so, what symmetry governs this force and its behaviours?
Supersymmetry: Is spacetime supersymmetry realized at TeV scale? If so, what is the mechanism of supersymmetry breaking? Does supersymmetry stabilize the electroweak scale, preventing high quantum corrections? Does the lightest supersymmetric particle (LSP) comprise dark matter?
Color confinement: The quantum chromodynamics (QCD) color confinement conjecture is that color-charged particles (such as quarks and gluons) cannot be separated from their parent hadron without producing new hadrons. Is it possible to provide an analytic proof of color confinement in any non-abelian gauge theory?
The QCD vacuum: Many of the equations of non-perturbative QCD remain unsolved. The relevant energies are those sufficient for the formation and description of atomic nuclei. How, then, does low-energy, non-perturbative QCD give rise to the formation of complex nuclei and nuclear constituents?
Generations of matter: Why are there three generations of quarks and leptons? Is there a theory that can explain the masses of particular quarks and leptons in particular generations from first principles (a theory of Yukawa couplings)?
Neutrino mass: What are the masses of the neutrinos, and do they follow Dirac or Majorana statistics? Is the mass hierarchy normal or inverted? Is the CP-violating phase equal to 0?
Reactor antineutrino anomaly: There is an anomaly in the existing body of data regarding the antineutrino flux from nuclear reactors around the world. Measured values of this flux appear to be only 94% of the value expected from theory. It is unknown whether this is due to unknown physics (such as sterile neutrinos), experimental error in the measurements, or errors in the theoretical flux calculations.
Strong CP problem and axions: Why is the strong nuclear interaction invariant to parity and charge conjugation? Is Peccei–Quinn theory the solution to this problem? Could axions be the main component of dark matter?
Anomalous magnetic dipole moment: Why is the experimentally measured value of the muon's anomalous magnetic dipole moment ("muon g − 2") significantly different from the theoretically predicted value of that physical constant?
Proton radius puzzle: What is the electric charge radius of the proton? How does it differ from a gluonic charge?
Pentaquarks and other exotic hadrons: What combinations of quarks are possible? Why were pentaquarks so difficult to discover? Are they a tightly bound system of five elementary particles, or a more weakly-bound pairing of a baryon and a meson?
Mu problem: A problem in supersymmetric theories, concerned with understanding the reasons for parameter values of the theory.
Koide formula: An aspect of the problem of particle generations. The sum of the masses of the three charged leptons, divided by the square of the sum of the square roots of these masses, is Q = 2/3 to within one standard deviation of observations. It is unknown how such a simple value comes about, and why it is the exact arithmetic average of the possible extreme values of 1/3 (equal masses) and 1 (one mass dominates).
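The relation is easy to check numerically. The following minimal Python sketch uses approximate charged-lepton masses in MeV (round illustrative figures, not the latest precision values), so the printed digits are indicative only.

import math

# Approximate charged-lepton masses in MeV (illustrative round figures)
m_e, m_mu, m_tau = 0.511, 105.66, 1776.86

# Koide's ratio: sum of the masses over the squared sum of their square roots
q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2

print(f"Q = {q:.6f}, 2/3 = {2/3:.6f}")  # Q comes out very close to 2/3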
Strange matter: Does strange matter exist? Is it stable? Can it form strange stars? Is strange matter stable at zero pressure (i.e., in a vacuum)?
Glueballs: Do they exist in nature?
The gallium anomaly: The measurements of the charged-current capture rate of neutrinos on Ga from strong radioactive sources have yielded results below those expected, based on the known strength of the principal transition supplemented by theory.
== Astronomy and astrophysics ==
Solar cycle: How does the Sun generate its periodically reversing large-scale magnetic field? How do other solar-like stars generate their magnetic fields, and what are the similarities and differences between stellar activity cycles and that of the Sun? What caused the Maunder Minimum and other grand minima, and how does the solar cycle recover from a minimum state?
Coronal heating problem: Why is the Sun's corona (atmosphere layer) so much hotter than the Sun's surface? Why is the magnetic reconnection effect many orders of magnitude faster than predicted by standard models?
Astrophysical jet: Why do only certain accretion discs surrounding certain astronomical objects emit relativistic jets along their polar axes? Why are there quasi-periodic oscillations in many accretion discs? Why does the period of these oscillations scale as the inverse of the mass of the central object? Why are there sometimes overtones, and why do these appear at different frequency ratios in different objects?
Diffuse interstellar bands: What is responsible for the numerous interstellar absorption lines detected in astronomical spectra? Are they molecular in origin, and if so which molecules are responsible for them? How do they form?
Supermassive black holes: What is the origin of the M–sigma relation between supermassive black hole mass and galaxy velocity dispersion? How did the most distant quasars grow their supermassive black holes up to 10^10 solar masses so early in the history of the universe?
Kuiper cliff: Why does the number of objects in the Solar System's Kuiper belt fall off rapidly and unexpectedly beyond a radius of 50 astronomical units?
Flyby anomaly: Why is the observed energy of satellites flying by planetary bodies sometimes different by a minute amount from the value predicted by theory?
Galaxy rotation problem: Is dark matter responsible for differences in observed and theoretical speed of stars revolving around the centre of galaxies, or is it something else?
Supernovae: What is the exact mechanism by which an implosion of a dying star becomes an explosion?
p-nuclei: What astrophysical process is responsible for the nucleogenesis of these rare isotopes?
Ultra-high-energy cosmic ray: Why is it that some cosmic rays appear to possess energies that are impossibly high, given that there are no sufficiently energetic cosmic ray sources near the Earth? Why is it that (apparently) some cosmic rays emitted by distant sources have energies above the Greisen–Zatsepin–Kuzmin limit?
Rotation rate of Saturn: Why does the magnetosphere of Saturn exhibit a (slowly changing) periodicity close to that at which the planet's clouds rotate? What is the true rotation rate of Saturn's deep interior?
Origin of magnetar magnetic field: What is the origin of magnetar magnetic fields?
Large-scale anisotropy: Is the universe at very large scales anisotropic, making the cosmological principle an invalid assumption? The number-count and intensity dipole anisotropy in the radio NRAO VLA Sky Survey (NVSS) catalogue is inconsistent with the local motion as derived from the cosmic microwave background and indicates an intrinsic dipole anisotropy. The same NVSS radio data also show an intrinsic dipole in polarization density and degree of polarization in the same direction as in number count and intensity. There are several other observations revealing large-scale anisotropy. The optical polarization from quasars shows polarization alignment over very large scales of Gpc. The cosmic microwave background data show several features of anisotropy, which are not consistent with the Big Bang model.
Age–metallicity relation in the Galactic disk: Is there a universal age–metallicity relation (AMR) in the Galactic disk (both "thin" and "thick" parts of the disk)? Although in the local (primarily thin) disk of the Milky Way there is no evidence of a strong AMR, a sample of 229 nearby "thick" disk stars has been used to investigate the existence of an age–metallicity relation in the Galactic thick disk, and it indicates that there is an age–metallicity relation present in the thick disk. Stellar ages from asteroseismology confirm the lack of any strong age–metallicity relation in the Galactic disc.
The lithium problem: Why is there a discrepancy between the amount of lithium-7 predicted to be produced in Big Bang nucleosynthesis and the amount observed in very old stars?
Ultraluminous X-ray sources (ULXs): What powers X-ray sources that are not associated with active galactic nuclei but exceed the Eddington limit of a neutron star or stellar black hole? Are they due to intermediate-mass black holes? Some ULXs are periodic, suggesting non-isotropic emission from a neutron star. Does this apply to all ULXs? How could such a system form and remain stable?
Fast radio bursts (FRBs): What causes these transient radio pulses from distant galaxies, lasting only a few milliseconds each? Why do some FRBs repeat at unpredictable intervals, but most do not? Dozens of models have been proposed, but none have been widely accepted.
Origin of Cosmic Magnetic Fields
Observations reveal that magnetic fields are present throughout the universe, from galaxies to galaxy clusters. However, the mechanisms that generated these large-scale cosmic magnetic fields remain unclear. Understanding their origin is a significant unsolved problem in astrophysics.
== Nuclear physics ==
Quantum chromodynamics: What are the phases of strongly interacting matter, and what roles do they play in the evolution of the cosmos? What is the detailed partonic structure of the nucleons? What does QCD predict for the properties of strongly interacting matter? What determines the key features of QCD, and what is their relation to the nature of gravity and spacetime? Does QCD truly lack CP violations?
Quark–gluon plasma: Where is the onset of deconfinement: 1) as a function of temperature and chemical potentials? 2) as a function of relativistic heavy-ion collision energy and system size? What is the mechanism of energy and baryon-number stopping leading to creation of quark-gluon plasma in relativistic heavy-ion collisions? Why is sudden hadronization and the statistical-hadronization model a near-to-perfect description of hadron production from quark–gluon plasma? Is quark flavor conserved in quark–gluon plasma? Are strangeness and charm in chemical equilibrium in quark–gluon plasma? Does strangeness in quark–gluon plasma flow at the same speed as up and down quark flavours? Why does deconfined matter show ideal flow?
Specific models of quark–gluon plasma formation: Do gluons saturate when their occupation number is large? Do gluons form a dense system called colour glass condensate? What are the signatures and evidence for the Balitsky–Fadin–Kuraev–Lipatov, Balitsky–Kovchegov, and Catani–Ciafaloni–Fiorani–Marchesini evolution equations?
Nuclei and nuclear astrophysics: Why is there a lack of convergence in estimates of the mean lifetime of a free neutron based on two separate—and increasingly precise—experimental methods? What is the nature of the nuclear force that binds protons and neutrons into stable nuclei and rare isotopes? What is the explanation for the EMC effect? What is the nature of exotic excitations in nuclei at the frontiers of stability and their role in stellar processes? What is the nature of neutron stars and dense nuclear matter? What is the origin of the elements in the cosmos? What are the nuclear reactions that drive stars and stellar explosions? What is the heaviest possible chemical element?
== Fluid dynamics ==
Under what conditions do smooth solutions exist for the Navier–Stokes equations, which are the equations that describe the flow of a viscous fluid? This problem, for an incompressible fluid in three dimensions, is also one of the Millennium Prize Problems in mathematics.
Turbulent flow: Is it possible to make a theoretical model to describe the statistics of a turbulent flow (in particular, its internal structures)?
Granular convection: Why does a granular material subjected to shaking or vibration exhibit circulation patterns similar to types of fluid convection? Why do the largest particles end up on the surface of a granular material containing a mixture of variously sized objects when subjected to vibration or shaking?
== Condensed matter physics ==
Bose–Einstein condensation: How do we rigorously prove the existence of Bose–Einstein condensates for general interacting systems?
High-temperature superconductivity: What is the mechanism that causes certain materials to exhibit superconductivity at temperatures much higher than around 25 kelvins? Is it possible to make a material that is a superconductor at room temperature and atmospheric pressure?
Amorphous solids: What is the nature of the glass transition between a fluid or regular solid and a glassy phase? What are the physical processes giving rise to the general properties of glasses and the glass transition?
Universality of low-temperature amorphous solids: Why is the small dimensionless ratio of the phonon wavelength to its mean free path nearly the same for a very large family of disordered solids? This small ratio is observed over a very large range of phonon frequencies.
Cryogenic electron emission: Why does the electron emission in the absence of light increase as the temperature of a photomultiplier is decreased?
Sonoluminescence: What causes the emission of short bursts of light from imploding bubbles in a liquid when excited by sound?
Topological order: Is topological order stable at non-zero temperature? Equivalently, is it possible to have three-dimensional self-correcting quantum memory?
Gauge block wringing: What mechanism allows gauge blocks to be wrung together?
Fractional Hall effect: What mechanism explains the existence of the ν = 5/2 state in the fractional quantum Hall effect? Does it describe quasiparticles with non-Abelian fractional statistics?
Liquid crystals: Can the nematic to smectic (A) phase transition in liquid crystal states be characterized as a universal phase transition?
Semiconductor nanocrystals: What is the cause of the nonparabolicity of the energy-size dependence for the lowest optical absorption transition of quantum dots?
Metal whiskering: In electrical devices, some metallic surfaces may spontaneously grow fine metallic whiskers, which can lead to equipment failures. While compressive mechanical stress is known to encourage whisker formation, the growth mechanism has yet to be determined.
Superfluid transition in helium-4: Explain the discrepancy between the experimental and theoretical determinations of the heat capacity critical exponent α.
Scharnhorst effect: Can light signals travel slightly faster than c between two closely spaced conducting plates, exploiting the Casimir effect?
== Quantum computing and quantum information ==
Threshold problem: Can we go beyond the noisy intermediate-scale quantum era? Can quantum computers reach fault tolerance? Is it possible to have enough qubit scalability to implement quantum error correction? What are the most promising candidate platforms to physically implement qubits?
Topological qubits: Topological quantum computers are promising but can they be built? Can we demonstrate Majorana zero modes conclusively?
Temperature: Can quantum computing be performed at non-cryogenic temperatures? Can we build room temperature quantum computers?
Complexity classes problems: What is the relation of BQP and BPP? What is the relation between BQP and NP? Can computation in plausible physical theories (quantum algorithms) go beyond BQP?
Post-quantum cryptography: Can we prove that some cryptographic protocols are safe against quantum computers?
Quantum capacity: The capacity of a quantum channel is in general not known.
== Plasma physics ==
Plasma physics and fusion power: Fusion energy may potentially provide power from an abundant resource (e.g. hydrogen) without the type of radioactive waste that fission energy currently produces. However, can ionized gases (plasma) be confined long enough and at a high enough temperature to create fusion power? What is the physical origin of H-mode?
The injection problem: Fermi acceleration is thought to be the primary mechanism that accelerates astrophysical particles to high energy. However, it is unclear what mechanism causes those particles to initially have energies high enough for Fermi acceleration to work on them.
Alfvénic turbulence: Alfvénic turbulence in the solar wind and the turbulence in solar flares, coronal mass ejections, and magnetospheric substorms are major unsolved problems in space plasma physics.
Ball lightning: What is the exact physical nature of this phenomenon of atmospheric electricity?
== Biophysics ==
Stochasticity and robustness to noise in gene expression: How do genes govern our body, withstanding different external pressures and internal stochasticity? Certain models exist for genetic processes, but we are far from understanding the whole picture, in particular in development where gene expression must be tightly regulated.
Quantitative study of the immune system: What are the quantitative properties of immune responses? What are the basic building blocks of immune system networks?
Homochirality: What is the origin of the preponderance of specific enantiomers in biochemical systems?
Magnetoreception: How do animals (e.g. migratory birds) sense the Earth's magnetic field?
Protein structure prediction: How is the three-dimensional structure of proteins determined by the one-dimensional amino acid sequence? How can proteins fold on microsecond to second timescales when the number of possible conformations is astronomical and conformational transitions occur on the picosecond to microsecond timescale? Can algorithms be written to predict a protein's three-dimensional structure from its sequence? Do the native structures of most naturally occurring proteins coincide with the global minimum of the free energy in conformational space? Or are most native conformations thermodynamically unstable, but kinetically trapped in metastable states? What keeps the high density of proteins present inside cells from precipitating?
Quantum biology: Can coherence be maintained in biological systems at timeframes long enough to be functionally important? Are there non-trivial aspects of biology or biochemistry that can only be explained by the persistence of coherence as a mechanism?
== Foundations of physics ==
Interpretation of quantum mechanics: How does the quantum description of reality, which includes elements such as the superposition of states and wavefunction collapse or quantum decoherence, give rise to the reality we perceive? Another way of stating this question regards the measurement problem: What constitutes a "measurement" which apparently causes the wave function to collapse into a definite state? Unlike classical physical processes, some quantum mechanical processes (such as quantum teleportation arising from quantum entanglement) cannot be simultaneously "local", "causal", and "real", but it is not obvious which of these properties must be sacrificed, or if an attempt to describe quantum mechanical processes in these senses is a category error such that a proper understanding of quantum mechanics would render the question meaningless. Can the many worlds interpretation resolve it?
Arrow of time (e.g. entropy's arrow of time): Why does time have a direction? Why did the universe have such low entropy in the past, such that time correlates with the universal (but not local) increase in entropy from the past to the future, in accordance with the second law of thermodynamics? Why are CP violations observed in certain weak force decays, but not elsewhere? Are CP violations somehow a product of the second law of thermodynamics, or are they a separate arrow of time? Are there exceptions to the principle of causality? Is there a single possible past? Is the present moment physically distinct from the past and future, or is it merely an emergent property of consciousness? What links the quantum arrow of time to the thermodynamic arrow?
Locality: Are there non-local phenomena in quantum physics? If they exist, are non-local phenomena limited to the entanglement revealed in the violations of the Bell inequalities, or can information and conserved quantities also move in a non-local way? Under what circumstances are non-local phenomena observed? What does the existence or absence of non-local phenomena imply about the fundamental structure of spacetime? How does this elucidate the proper interpretation of the fundamental nature of quantum physics?
Quantum mind: Do quantum mechanical phenomena, such as entanglement and superposition, play an important part in the brain's function, and can they explain critical aspects of consciousness?
== Problems solved in the past 30 years ==
=== General physics/quantum physics ===
Perform a loophole-free Bell test experiment (1970–2015): In October 2015, scientists from the Kavli Institute of Nanoscience reported that the failure of the local hidden-variable hypothesis is supported at the 96% confidence level based on a "loophole-free Bell test" study. These results were confirmed by two studies with statistical significance over 5 standard deviations which were published in December 2015.
Create Bose–Einstein condensate (1924–1995): Composite bosons in the form of dilute atomic vapours were cooled to quantum degeneracy using the techniques of laser cooling and evaporative cooling.
=== Cosmology and general relativity ===
Existence of gravitational waves (1916–2016): On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of black holes merging, which was also the first detection of a stellar binary black hole.
Numerical solution for binary black hole (1960s–2005): The numerical solution of the two body problem in general relativity was achieved after four decades of research. Three groups devised the breakthrough techniques in 2005 (annus mirabilis of numerical relativity).
Cosmic age problem (1920s–1990s): The estimated age of the universe was around 3 to 8 billion years younger than estimates of the ages of the oldest stars in the Milky Way. Better estimates for the distances to the stars, and the recognition of the accelerating expansion of the universe, reconciled the age estimates.
=== High-energy physics/particle physics ===
Existence of pentaquarks (1964–2015): In July 2015, the LHCb collaboration at CERN identified pentaquarks in the Λb0 → J/ψ K− p channel, which represents the decay of the bottom lambda baryon (Λb0) into a J/ψ meson, a kaon (K−) and a proton (p). The results showed that sometimes, instead of decaying directly into mesons and baryons, the Λb0 decayed via intermediate pentaquark states. The two states, named Pc+(4380) and Pc+(4450), had individual statistical significances of 9 σ and 12 σ, respectively, and a combined significance of 15 σ, enough to claim a formal discovery. The two pentaquark states were both observed decaying strongly to J/ψ p, hence must have a valence quark content of two up quarks, a down quark, a charm quark, and an anti-charm quark (uudcc̄), making them charmonium-pentaquarks.
Existence of quark–gluon plasma: A new phase of matter was discovered and confirmed in experiments at CERN-SPS (2000), BNL-RHIC (2005) and CERN-LHC (2010).
Higgs boson and electroweak symmetry breaking (1963–2012): The mechanism responsible for breaking the electroweak gauge symmetry, giving mass to the W and Z bosons, was solved with the discovery of the Higgs boson of the Standard Model, with the expected couplings to the weak bosons. No evidence of a strong dynamics solution, as proposed by technicolor, has been observed.
Origin of mass of most elementary particles: Solved with the discovery of the Higgs boson, which implies the existence of the Higgs field giving mass to these particles.
There is a discrepancy in the results of neutron lifetime measurements obtained by the storage method and the beam method. The "neutron lifetime anomaly" was discovered after the refinement of experiments with ultracold neutrons.
=== Astronomy and astrophysics ===
Origin of short gamma-ray bursts (1993–2017): Short gamma-ray bursts arise from binary neutron star mergers, which also produce a kilonova explosion; the short gamma-ray burst GRB 170817A was detected in both electromagnetic waves and the gravitational wave event GW170817.
Missing baryon problem (1998–2017): proclaimed solved in October 2017, with the missing baryons located in hot intergalactic gas.
Long-duration gamma-ray bursts (1993–2003): Long-duration bursts are associated with the deaths of massive stars in a specific kind of supernova-like event commonly referred to as a collapsar. However, there are also long-duration GRBs that show evidence against an associated supernova, such as the Swift event GRB 060614.
Solar neutrino problem (1968–2001): Solved by a new understanding of neutrino physics, requiring a modification of the Standard Model of particle physics—specifically, neutrino oscillation.
Saturn's core spin was determined from its gravitational field.
=== Rapidly solved problems ===
Existence of time crystals (2012–2016): The idea of a quantized time crystal was first theorized in 2012 by Frank Wilczek. In 2016, Khemani et al. and Else et al. independently suggested that periodically driven quantum spin systems could show similar behaviour. Also in 2016, Norman Yao at Berkeley and colleagues proposed a different way to create discrete time crystals in spin systems. This was then used by two teams, a group led by Christopher Monroe at the University of Maryland and a group led by Mikhail Lukin at Harvard University, who were both able to show evidence for time crystals in the laboratory setting, showing that for short times the systems exhibited dynamics similar to those predicted.
Photon underproduction crisis (2014–2015): This problem was resolved by Khaire and Srianand. They showed that a metagalactic photoionization rate 2 to 5 times larger can easily be obtained using updated quasar and galaxy observations. Recent observations of quasars indicate that the quasar contribution to ultraviolet photons is a factor of 2 larger than previous estimates. The revised galaxy contribution is a factor of 3 larger. These together solve the crisis.
Hipparcos anomaly (1997–2012): The High Precision Parallax Collecting Satellite (Hipparcos) measured the parallax of the Pleiades and determined a distance of 385 light years. This was significantly different from other measurements made by means of actual to apparent brightness measurement or absolute magnitude. The anomaly was due to the use of a weighted mean when there is a correlation between distances and distance errors for stars in clusters. It is resolved by using an unweighted mean. There is no systematic bias in the Hipparcos data when it comes to star clusters.
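The statistical mechanism can be illustrated with a toy Monte Carlo sketch in Python; every number below (cluster distance, parallax error, sample size) is an assumption chosen for illustration and is not the actual Hipparcos or Pleiades data. Because the propagated distance error shrinks for stars that happen to scatter to larger parallaxes (smaller distances), inverse-variance weighting pulls the mean distance systematically low, while the unweighted mean is far less affected.

import random

random.seed(0)
TRUE_DIST = 130.0                      # pc, assumed true cluster distance (illustrative)
SIGMA_P = 0.8                          # mas, assumed parallax error (illustrative)
true_parallax = 1000.0 / TRUE_DIST     # parallax in milliarcseconds

dists, weights = [], []
for _ in range(100000):
    p = random.gauss(true_parallax, SIGMA_P)   # measured parallax
    d = 1000.0 / p                             # distance estimate in pc
    sigma_d = 1000.0 * SIGMA_P / p ** 2        # propagated distance error
    dists.append(d)
    weights.append(1.0 / sigma_d ** 2)         # inverse-variance weight

unweighted = sum(dists) / len(dists)
weighted = sum(w * d for w, d in zip(weights, dists)) / sum(weights)
print(f"true {TRUE_DIST:.1f} pc, unweighted mean {unweighted:.1f} pc, weighted mean {weighted:.1f} pc")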
Faster-than-light neutrino anomaly (2011–2012): In 2011, the OPERA experiment mistakenly observed neutrinos appearing to travel faster than light. On 12 July 2012 OPERA updated their paper after discovering an error in their previous flight time measurement. They found agreement of neutrino speed with the speed of light.
Pioneer anomaly (1980–2012): There was a deviation in the predicted accelerations of the Pioneer 10 and 11 spacecraft as they left the Solar System. It is believed that this is a result of a previously unaccounted-for thermal recoil force.
== See also ==
Hilbert's sixth problem
Lists of unsolved problems
Physical paradox
List of unsolved problems in mathematics
List of unsolved problems in neuroscience
== References ==
== External links ==
What problems of physics and astrophysics seem now to be especially important and interesting (thirty years later, already on the verge of XXI century)? V. L. Ginzburg, Physics-Uspekhi 42 (4) 353–373, 1999
What don't we know? Science journal special project for its 125th anniversary: top 25 questions and 100 more.
List of links to unsolved problems in physics, prizes and research.
A list of open problems in quantum information theory maintained by the Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna.
Ideas Based On What We'd Like to Achieve Archived 24 September 2013 at the Wayback Machine
2004 SLAC Summer Institute: Nature's Greatest Puzzles Archived 30 July 2014 at the Wayback Machine
Dual Personality of Glass Explained at Last
What we do and don't know Review on current state of physics by Steven Weinberg, November 2013
The crisis of big science Steven Weinberg, May 2012 | Wikipedia/Unsolved_problems_in_physics |
The Vega Science Trust was a not-for-profit organisation which provided a platform from which scientists could communicate directly with the public on science by using moving image, sound and other related means. The Trust closed in 2012 but the website and streaming video remain active (based at Sheffield University).
== History ==
Founded in 1995 by Nobel Laureate Sir Harry Kroto and BBC Education Producer Patrick Reams, the Vega Science Trust was awarded a COPUS start-up grant from the Royal Society in 1995 and then went on in 1999 to be allocated core funding from the Office of Science and Technology (OST). Starting with recording science programmes for terrestrial television, the Vega Science Trust produced a number of programmes, such as recordings of Royal Institution Discourses, which were broadcast on BBC 2, and a set of Masterclasses. In 2001 Harry Kroto was awarded the Royal Society Michael Faraday Prize - the UK's premier award for science communication - 'for his dedication to the notion of working scientists being communicators of their work and in particular for his establishment of the Vega Science Trust whose films and related activities reflect the excitement of scientific discovery to the public'. The Trust went on to co-produce with the BBC/Open University a set of science discussion programmes covering hot topics such as Stem Cells, Energy, Mobile Phones, GM Food, Disease, Nanotechnology and Ageing. With the BBC/Open University the Trust also produced, with sponsorship from the HEFCE Widening Participation Team, a set of award-winning career programmes featuring young scientists. Both series were broadcast on BBC2.
Very early audio-visual recordings of individual scientists are relatively rare, but in the recent past some recordings were carried out by such organisations as the BBC. In 1997 the Vega Science Trust embarked on a plan to record in-depth interviews with scientists such as Rotblat, Sanger, Perutz, Cornforth, Walter Kohn and Richard Ernst, which could be both viewed and preserved as an historical record for the future. More recently the British Library embarked on a similar project of recording audio-visual interviews under the National Life Stories project, although at present their archive consists of oral recordings of scientists. The Vega Science Trust's in-depth interviews with scientists led to a project recording interviews with Nobel Laureates attending the annual Nobel Laureate Meetings at Lindau in 2004/5/6. In 2006 the Vega Science Trust's website received a special mention at The International Association for Media Science Awards.
In 2007 the Vega Science Trust started ongoing work with Jonathan Hare (BBC Rough Science) on a series of short instructional films intended to show how things work. For instance, a number of the films show how we can generate electricity, another shows how we can generate wind power, and others explain the molecular structure of C60, carbon nanotubes and graphene.
From 2007 to 2010 the Trust concentrated on bringing to the public's attention the process of science research. The Nano2Hybrids EU STReP project, for instance, was an innovative project in which research scientists recorded their own progress on a research project to invent a gas sensor made using carbon nanotubes. In addition, recordings of science-in-society projects such as Women in Nanotechnology and Diversity illuminate work towards promoting women scientists into decision-making positions in science research environments.
The Vega Science Trust closed in March 2012 after 17 years of operation. However, the website will continue to host the existing film archive.
== Governance ==
The Vega Science Trust was governed by a board of five Trustees who were active research scientists and media, copyright, and educational specialists. Trustees stepped down and/or were re-elected each year.
The Vega Science Trust employed one member of staff (and for a period, a second technical member) and operated in a mixed economy of core grant-in-aid support from Florida State University, and from research grants and sponsorship. It was an independent body with its own self-contained offices, initially in the University of Sussex chemistry department, and later at the Innovation Centre, University of Sussex, Brighton.
== Vision ==
The Vega Science Trust aimed to see science more fully integrated into our everyday culture. Vega's vision was to do so by providing a platform from which scientists could broadcast science programmes directly to the public.
=== Activities ===
Recording scientists.
Covering a wide array of research.
Making recordings available to the public by streaming from the Vega Science Trust's website.
Streaming videos in different formats.
Creating an historical archive of some of the world's most eminent scientists.
Interviewing Nobel Laureates and other eminent scientists.
Experimenting with new ways of outreaching/engaging with the public via audio-visual means.
Creating different programme formats.
Providing on-line science resources for schools, universities and the public via informal learning.
Creating informative videos for imparting scientific concepts.
== Vega Science Trust Collection ==
The collection of recordings also acts as an historical record and archive of world scientists and their research discoveries. Recorded to broadcast quality they provide a valuable collection, much of which is open to the public via the Vega Science Trust's website.
=== Video clips ===
Vega Science Trust YouTube channel
== References ==
== External links ==
Vega Science Trust
British Council Public Engagement of Science
Nobel Meeting at Lindau
Creative Science Centre | Wikipedia/Vega_Science_Trust |
Tensor–vector–scalar gravity (TeVeS), developed by Jacob Bekenstein in 2004, is a relativistic generalization of Mordehai Milgrom's Modified Newtonian dynamics (MOND) paradigm.
The main features of TeVeS can be summarized as follows:
As it is derived from the action principle, TeVeS respects conservation laws;
In the weak-field approximation of the spherically symmetric, static solution, TeVeS reproduces the MOND acceleration formula;
TeVeS avoids the problems of earlier attempts to generalize MOND, such as superluminal propagation;
As it is a relativistic theory it can accommodate gravitational lensing.
The theory is based on the following ingredients:
A unit vector field;
A dynamical scalar field;
A nondynamical scalar field;
A matter Lagrangian constructed using an alternate metric;
An arbitrary dimensionless function.
These components are combined into a relativistic Lagrangian density, which forms the basis of TeVeS theory.
== Details ==
MOND is a phenomenological modification of the Newtonian acceleration law. In Newtonian gravity theory, the gravitational acceleration in the spherically symmetric, static field of a point mass $M$ at distance $r$ from the source can be written as
$$a = -\frac{GM}{r^{2}},$$
where $G$ is Newton's constant of gravitation. The corresponding force acting on a test mass $m$ is
$$F = ma.$$
To account for the anomalous rotation curves of spiral galaxies, Milgrom proposed a modification of this force law in the form
$$F = \mu\!\left(\frac{a}{a_{0}}\right) ma,$$
where $\mu(x)$ is an arbitrary function subject to the following conditions:
$$\mu(x) = \begin{cases} 1 & |x| \gg 1 \\ x & |x| \ll 1 \end{cases}$$
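The flattening of rotation curves follows directly from this force law. The Python sketch below solves $\mu(a/a_{0})\,a = GM/r^{2}$ for the acceleration around a point mass; the interpolating function $\mu(x) = x/(1+x)$ and all numerical values are assumptions chosen for illustration, not part of Milgrom's original proposal.

import math

G = 6.674e-11     # m^3 kg^-1 s^-2
A0 = 1.2e-10      # m s^-2, approximate value of Milgrom's constant a0
M = 1.0e41        # kg, roughly 5e10 solar masses (illustrative galaxy mass)

def mond_acceleration(r):
    """Solve mu(a/a0) * a = GM/r^2 with the simple choice mu(x) = x / (1 + x)."""
    g_n = G * M / r ** 2                              # Newtonian acceleration
    # mu(a/a0) * a = g_n reduces to a^2 - g_n*a - g_n*a0 = 0; take the positive root
    return 0.5 * (g_n + math.sqrt(g_n ** 2 + 4.0 * g_n * A0))

for r_kpc in (1, 5, 20, 50, 100):
    r = r_kpc * 3.086e19                              # kpc to metres
    v_newton = math.sqrt(G * M / r) / 1e3             # circular speed in km/s
    v_mond = math.sqrt(mond_acceleration(r) * r) / 1e3
    print(f"r = {r_kpc:3d} kpc   v_Newton = {v_newton:6.1f} km/s   v_MOND = {v_mond:6.1f} km/s")

The Newtonian speed falls as $r^{-1/2}$, while the MOND speed approaches the constant $(GMa_{0})^{1/4}$, which is the flat rotation curve the modification was designed to produce.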
In this form, MOND is not a complete theory: for instance, it violates the law of momentum conservation.
However, such conservation laws are automatically satisfied for physical theories that are derived using an action principle. This led Bekenstein to a first, nonrelativistic generalization of MOND. This theory, called AQUAL (for A QUAdratic Lagrangian) is based on the Lagrangian
$$\mathcal{L} = -\frac{a_{0}^{2}}{8\pi G}\, f\!\left(\frac{|\nabla\Phi|^{2}}{a_{0}^{2}}\right) - \rho\,\Phi,$$
where $\Phi$ is the Newtonian gravitational potential, $\rho$ is the mass density, and $f(y)$ is a dimensionless function.
In the case of a spherically symmetric, static gravitational field, this Lagrangian reproduces the MOND acceleration law after the substitutions $a = -\nabla\Phi$ and $\mu(\sqrt{y}) = df(y)/dy$ are made.
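A brief sketch of why these substitutions work: varying the AQUAL Lagrangian with respect to $\Phi$ gives a modified Poisson equation,
$$\nabla\cdot\!\left[\mu\!\left(\frac{|\nabla\Phi|}{a_{0}}\right)\nabla\Phi\right] = 4\pi G\rho, \qquad \mu(\sqrt{y}) = \frac{df(y)}{dy}, \quad y = \frac{|\nabla\Phi|^{2}}{a_{0}^{2}},$$
and for a spherically symmetric source Gauss's theorem reduces this to $\mu(a/a_{0})\,a = GM/r^{2}$ with $a = -\nabla\Phi$, which is the MOND law above.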
Bekenstein further found that AQUAL can be obtained as the nonrelativistic limit of a relativistic field theory. This theory is written in terms of a Lagrangian that contains, in addition to the Einstein–Hilbert action for the metric field
$g_{\mu\nu}$, terms pertaining to a unit vector field $u^{\alpha}$ and two scalar fields $\sigma$ and $\phi$, of which only $\phi$ is dynamical. The TeVeS action, therefore, can be written as
$$S_{\mathrm{TeVeS}} = \int \left(\mathcal{L}_{g} + \mathcal{L}_{s} + \mathcal{L}_{v}\right) d^{4}x.$$
The terms in this action include the Einstein–Hilbert Lagrangian (using a metric signature
$[+,-,-,-]$ and setting the speed of light, $c = 1$):
$$\mathcal{L}_{g} = -\frac{1}{16\pi G}\, R\, \sqrt{-g},$$
where $R$ is the Ricci scalar and $g$ is the determinant of the metric tensor.
The scalar field Lagrangian is
$$\mathcal{L}_{s} = -\frac{1}{2}\left[\sigma^{2} h^{\alpha\beta}\,\partial_{\alpha}\phi\,\partial_{\beta}\phi + \frac{1}{2}\frac{G}{l^{2}}\,\sigma^{4} F\!\left(kG\sigma^{2}\right)\right]\sqrt{-g},$$
where $h^{\alpha\beta} = g^{\alpha\beta} - u^{\alpha}u^{\beta}$, $l$ is a constant length, $k$ is the dimensionless parameter and $F$ an unspecified dimensionless function; while the vector field Lagrangian is
$$\mathcal{L}_{v} = -\frac{K}{32\pi G}\left[g^{\alpha\beta}g^{\mu\nu}\left(B_{\alpha\mu}B_{\beta\nu}\right) + 2\frac{\lambda}{K}\left(g^{\mu\nu}u_{\mu}u_{\nu} - 1\right)\right]\sqrt{-g},$$
where $B_{\alpha\beta} = \partial_{\alpha}u_{\beta} - \partial_{\beta}u_{\alpha}$, while $K$ is a dimensionless parameter.
$k$ and $K$ are respectively called the scalar and vector coupling constants of the theory. The consistency between the Gravitoelectromagnetism of the TeVeS theory and that predicted and measured by the Gravity Probe B leads to
$$K = \frac{k}{2\pi},$$
and requiring consistency between the near horizon geometry of a black hole in TeVeS and that of the Einstein theory, as observed by the Event Horizon Telescope, leads to
$$K = -30 + \frac{72\pi}{k}.$$
So the coupling constants read:
$$K = 3\left(\pm\sqrt{29} - 5\right), \qquad k = 6\pi\left(\pm\sqrt{29} - 5\right).$$
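These closed-form values can be verified numerically; the short Python check below (an illustrative verification, not part of the original derivation) confirms that both sign branches satisfy the two consistency conditions quoted above.

import math

for sign in (+1, -1):
    K = 3 * (sign * math.sqrt(29) - 5)
    k = 6 * math.pi * (sign * math.sqrt(29) - 5)
    # Both relations above should hold simultaneously on either branch
    print(f"sign {sign:+d}: K = {K:+.6f}, k/(2*pi) = {k / (2 * math.pi):+.6f}, -30 + 72*pi/k = {-30 + 72 * math.pi / k:+.6f}")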
The function $F$ in TeVeS is unspecified.
TeVeS also introduces a "physical metric" in the form
$$\hat{g}^{\mu\nu} = e^{2\phi}\, g^{\mu\nu} - 2\, u^{\mu} u^{\nu} \sinh(2\phi).$$
The action of ordinary matter is defined using the physical metric:
$$S_{m} = \int \mathcal{L}\left(\hat{g}_{\mu\nu}, f^{\alpha}, f^{\alpha}_{\ |\mu}, \ldots\right)\sqrt{-\hat{g}}\; d^{4}x,$$
where covariant derivatives with respect to $\hat{g}_{\mu\nu}$ are denoted by $|$.
TeVeS solves problems associated with earlier attempts to generalize MOND, such as superluminal propagation. In his paper, Bekenstein also investigated the consequences of TeVeS in relation to gravitational lensing and cosmology.
== Problems and criticisms ==
In addition to its ability to account for the flat rotation curves of galaxies (which is what MOND was originally designed to address), TeVeS is claimed to be consistent with a range of other phenomena, such as gravitational lensing and cosmological observations. However, Seifert showed that with Bekenstein's proposed parameters, a TeVeS star is highly unstable, on the scale of approximately 10^6 seconds (about two weeks). The ability of the theory to simultaneously account for galactic dynamics and lensing is also challenged. A possible resolution may be in the form of massive (around 2 eV) neutrinos.
A study in August 2006 reported an observation of a pair of colliding galaxy clusters, the Bullet Cluster, whose behavior, it was reported, was not compatible with any current modified gravity theory.
A quantity $E_{G}$ probing general relativity (GR) on large scales (a hundred billion times the size of the Solar System) has been measured for the first time, with data from the Sloan Digital Sky Survey, to be $E_{G} = 0.392 \pm 0.065$ (~16%), consistent with GR, GR plus Lambda CDM, and the extended form of GR known as $f(R)$ theory, but ruling out a particular TeVeS model predicting $E_{G} = 0.22$. This estimate should improve to ~1% with the next generation of sky surveys and may put tighter constraints on the parameter space of all modified gravity theories.
TeVeS appears inconsistent with recent measurements made by LIGO of gravitational waves.
== See also ==
Gauge vector–tensor gravity
Modified Newtonian dynamics
Nonsymmetric gravitational theory
Scalar–tensor–vector gravity
== References ==
== Further reading ==
Bekenstein, J. D.; Sanders, R. H. (2006), "A Primer to Relativistic MOND Theory", EAS Publications Series, 20: 225–230, arXiv:astro-ph/0509519, Bibcode:2006EAS....20..225B, doi:10.1051/eas:2006075, S2CID 6539084
Zhao, H. S.; Famaey, B. (2006), "Refining the MOND Interpolating Function and TeVeS Lagrangian", The Astrophysical Journal, 638 (1): L9 – L12, arXiv:astro-ph/0512425, Bibcode:2006ApJ...638L...9Z, doi:10.1086/500805, S2CID 14867245
Dark Matter Observed (SLAC Today)
Einstein's Theory 'Improved'? (PPARC)
Einstein Was Right: General Relativity Confirmed ' TeVeS, however, made predictions that fell outside the observational error limits', (Space.com) | Wikipedia/Tensor–vector–scalar_gravity |
In theoretical physics, Lovelock's theory of gravity (often referred to as Lovelock gravity) is a generalization of Einstein's theory of general relativity introduced by David Lovelock in 1971. It is the most general metric theory of gravity yielding conserved second order equations of motion in an arbitrary number of spacetime dimensions D. In this sense, Lovelock's theory is the natural generalization of Einstein's general relativity to higher dimensions. In three and four dimensions (D = 3, 4), Lovelock's theory coincides with Einstein's theory, but in higher dimensions the theories are different. In fact, for D > 4 Einstein gravity can be thought of as a particular case of Lovelock gravity since the Einstein–Hilbert action is one of several terms that constitute the Lovelock action.
== Lagrangian density ==
The Lagrangian of the theory is given by a sum of dimensionally extended Euler densities, and it can be written as follows:
$$\mathcal{L} = \sqrt{-g}\ \sum_{n=0}^{t} \alpha_{n}\, \mathcal{R}^{n}, \qquad \mathcal{R}^{n} = \frac{1}{2^{n}}\, \delta^{\mu_{1}\nu_{1}\ldots\mu_{n}\nu_{n}}_{\alpha_{1}\beta_{1}\ldots\alpha_{n}\beta_{n}} \prod_{r=1}^{n} R^{\alpha_{r}\beta_{r}}{}_{\mu_{r}\nu_{r}}$$
where $R^{\alpha\beta}{}_{\mu\nu}$ represents the Riemann tensor, and where the generalized Kronecker delta $\delta$ is defined as the antisymmetric product
$$\delta^{\mu_{1}\nu_{1}\ldots\mu_{n}\nu_{n}}_{\alpha_{1}\beta_{1}\cdots\alpha_{n}\beta_{n}} = (2n)!\, \delta^{\mu_{1}}_{[\alpha_{1}}\delta^{\nu_{1}}_{\beta_{1}}\cdots\delta^{\mu_{n}}_{\alpha_{n}}\delta^{\nu_{n}}_{\beta_{n}]}.$$
Each term $\mathcal{R}^{n}$ in $\mathcal{L}$ corresponds to the dimensional extension of the Euler density in $2n$ dimensions, so that these only contribute to the equations of motion for $n < D/2$. Consequently, without loss of generality, $t$ in the equation above can be taken to satisfy $D = 2t + 2$ in even dimensions and $D = 2t + 1$ in odd dimensions.
== Coupling constants ==
The coupling constants $\alpha_{n}$ in the Lagrangian $\mathcal{L}$ have dimensions of [length]$^{2n-D}$, although it is usual to normalize the Lagrangian density in units of the Planck scale
$$\alpha_{1} = (16\pi G)^{-1} = l_{P}^{2-D}.$$
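A quick dimensional check, assuming natural units with $\hbar = c = 1$ so that the action is dimensionless: each curvature factor carries dimension $[\text{length}]^{-2}$, so
$$[\mathcal{R}^{n}] = [\text{length}]^{-2n}, \qquad [d^{D}x\,\sqrt{-g}\,] = [\text{length}]^{D} \quad\Longrightarrow\quad [\alpha_{n}] = [\text{length}]^{2n-D},$$
which for $n = 1$ is consistent with $\alpha_{1} = (16\pi G)^{-1} = l_{P}^{2-D}$, since $G$ has dimension $[\text{length}]^{D-2}$ in $D$ spacetime dimensions.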
Expanding the product in $\mathcal{L}$, the Lovelock Lagrangian takes the form
$$\mathcal{L} = \sqrt{-g}\,\left(\alpha_{0} + \alpha_{1} R + \alpha_{2}\left(R^{2} + R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu} - 4 R_{\mu\nu}R^{\mu\nu}\right) + \alpha_{3}\,\mathcal{O}(R^{3})\right),$$
where one sees that coupling $\alpha_{0}$ corresponds to the cosmological constant $\Lambda$, while $\alpha_{n}$ with $n \geq 2$ are coupling constants of additional terms that represent ultraviolet corrections to Einstein theory, involving higher order contractions of the Riemann tensor $R_{\mu\nu\alpha\beta}$. In particular, the second order term
$$\mathcal{R}^{2} = R^{2} + R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu} - 4 R_{\mu\nu}R^{\mu\nu}$$
is precisely the quadratic Gauss–Bonnet term, which is the dimensionally extended version of the four-dimensional Euler density.
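In four dimensions this combination is purely topological: for a compact four-dimensional manifold without boundary (stated here for Euclidean signature), the Chern–Gauss–Bonnet theorem gives
$$\chi(M) = \frac{1}{32\pi^{2}}\int_{M} d^{4}x\,\sqrt{g}\,\left(R^{2} - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\right),$$
so the term does not contribute to the four-dimensional equations of motion, and its dynamical effects first appear in five or more dimensions.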
== Equations of motion ==
By noting that
$$T = \sqrt{-g}\,\mathcal{R}^{2} = \sqrt{-g}\left(R^{2} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} - 4 R_{\mu\nu}R^{\mu\nu}\right)$$
is a topological constant, we can eliminate the Riemann tensor term and thus we can put the Lovelock Lagrangian into the form
$$S = -\int d^{D}x\, \sqrt{-g}\left(\alpha R_{\mu\nu}R^{\mu\nu} - \beta R^{2} + \gamma \kappa^{-2} R\right)$$
which has the equations of motion
$$\alpha\left(-\tfrac{1}{2} R_{\rho\sigma}R^{\rho\sigma} g_{\mu\nu} - \nabla_{\nu}\nabla_{\mu} R - 2 R_{\rho\nu\mu\sigma}R^{\sigma\rho} + \tfrac{1}{2} g_{\mu\nu}\,\Box R + \Box R_{\mu\nu}\right) + \beta\left(\tfrac{1}{2} R^{2} g_{\mu\nu} - 2 R R_{\mu\nu} + 2\nabla_{\nu}\nabla_{\mu} R - 2 g_{\mu\nu}\,\Box R\right) + \gamma\left(-\tfrac{1}{2}\kappa^{-2} R\, g_{\mu\nu} + \kappa^{-2} R_{\mu\nu}\right) = 0.$$
== Other contexts ==
Because Lovelock action contains, among others, the quadratic Gauss–Bonnet term (i.e. the four-dimensional Euler characteristic extended to D dimensions), it is usually said that Lovelock theory resembles string-theory-inspired models of gravity. This is because a quadratic term is present in the low energy effective action of heterotic string theory, and it also appears in six-dimensional Calabi–Yau compactifications of M-theory. In the mid-1980s, a decade after Lovelock proposed his generalization of the Einstein tensor, physicists began to discuss the quadratic Gauss–Bonnet term within the context of string theory, with particular attention to its property of being ghost-free in Minkowski space. The theory is known to be free of ghosts about other exact backgrounds as well, e.g. about one of the branches of the spherically symmetric solution found by Boulware and Deser in 1985. In general, Lovelock's theory represents a very interesting scenario to study how the physics of gravity is corrected at short distance due to the presence of higher order curvature terms in the action, and in the mid-2000s the theory was considered as a testing ground to investigate the effects of introducing higher-curvature terms in the context of AdS/CFT correspondence.
== See also ==
Lovelock's theorem
f(R) gravity
Gauss–Bonnet gravity
Curtright field
Horndeski's theory
== Notes ==
== References ==
Lovelock, D. (1971). "The Einstein tensor and its generalizations". Journal of Mathematical Physics. 12 (3): 498–502. Bibcode:1971JMP....12..498L. doi:10.1063/1.1665613.
Lovelock, D. (1969). "The uniqueness of the Einstein field equations in a four-dimensional space". Archive for Rational Mechanics and Analysis. 33 (1): 54–70. Bibcode:1969ArRMA..33...54L. doi:10.1007/BF00248156. S2CID 119985583.
Lovelock, D. (1972). "The four-dimensionality of space and the Einstein tensor". Journal of Mathematical Physics. 13 (6): 874–876. Bibcode:1972JMP....13..874L. doi:10.1063/1.1666069.
Lovelock, David; Rund, Hanno (1989), Tensors, Differential Forms, and Variational Principles, Dover, ISBN 978-0-486-65840-7
Navarro, A.; Navarro, J. (2011). "Lovelock's theorem revisited". Journal of Mathematical Physics. 61 (10): 1950–1956. arXiv:1005.2386. Bibcode:2011JGP....61.1950N. doi:10.1016/j.geomphys.2011.05.004. S2CID 119314288.
Zwiebach, B. (1985). "Curvature Squared Terms and String Theories". Phys. Lett. B. 156 (5–6): 315. Bibcode:1985PhLB..156..315Z. doi:10.1016/0370-2693(85)91616-8..
Boulware, D.; Deser, S. (1985). "String Generated Gravity Models". Phys. Rev. Lett. 55 (24): 2656–2660. Bibcode:1985PhRvL..55.2656B. doi:10.1103/PhysRevLett.55.2656. PMID 10032204. S2CID 43449319. | Wikipedia/Lovelock_theory_of_gravity |
Selective exposure is a theory within the practice of psychology, often used in media and communication research, that historically refers to individuals' tendency to favor information which reinforces their pre-existing views while avoiding contradictory information. Selective exposure has also been known and defined as "congeniality bias" or "confirmation bias" in various texts throughout the years.
According to the historical use of the term, people tend to select specific aspects of exposed information which they incorporate into their mindset. These selections are made based on their perspectives, beliefs, attitudes, and decisions. People can mentally dissect the information they are exposed to and select favorable evidence, while ignoring the unfavorable. The foundation of this theory is rooted in the cognitive dissonance theory (Festinger 1957), which asserts that when individuals are confronted with contrasting ideas, certain mental defense mechanisms are activated to produce harmony between new ideas and pre-existing beliefs, which results in cognitive equilibrium. Cognitive equilibrium, which is defined as a state of balance between a person's mental representation of the world and his or her environment, is crucial to understanding selective exposure theory. According to Jean Piaget, when a mismatch occurs, people find it to be "inherently dissatisfying".
Selective exposure relies on the assumption that individuals will continue to seek out information on an issue even after they have taken a stance on it. The position a person has taken will be colored by various factors of that issue that are reinforced during the decision-making process. According to Stroud (2008), theoretically, selective exposure occurs when people's beliefs guide their media selections.
Selective exposure has been displayed in various contexts such as self-serving situations and situations in which people hold prejudices regarding outgroups, particular opinions, and personal and group-related issues. Perceived usefulness of information, perceived norm of fairness, and curiosity of valuable information are three factors that can counteract selective exposure.
== Effect on decision-making ==
=== Individual versus group decision-making ===
Selective exposure can often affect the decisions people make as individuals or as groups because they may be unwilling to change their views and beliefs either collectively or on their own, despite conflicting and reliable information. An example of the effects of selective exposure is the series of events leading up to the Bay of Pigs Invasion in 1961. President John F. Kennedy was given the go-ahead by his advisers to authorize the invasion of Cuba by poorly trained expatriates despite overwhelming evidence that it was a foolish and ill-conceived tactical maneuver. The advisers were so eager to please the President that they confirmed their cognitive bias for the invasion rather than challenging the faulty plan. Fear of having to change beliefs about one's self, about other people, and about the world are three reasons why people resist new information. A variety of studies has shown that selective exposure effects can occur in the context of both individual and group decision making. Numerous situational variables have been identified that increase the tendency toward selective exposure. Social psychology, specifically, includes research on a variety of situational factors and related psychological processes that eventually persuade a person to make a quality decision. Additionally, from a psychological perspective, the effects of selective exposure can stem from both motivational and cognitive accounts.
=== Effect of information quantity ===
According to a research study by Fischer, Schulz-Hardt, et al. (2008), the quantity of decision-relevant information that participants were exposed to had a significant effect on their levels of selective exposure. A group given only two pieces of decision-relevant information experienced lower levels of selective exposure than the group that had ten pieces of information to evaluate. This research brought more attention to the cognitive processes of individuals when they are presented with a very small amount of decision-consistent and decision-inconsistent information. The study showed that in such situations, an individual becomes more doubtful of their initial decision due to the scarcity of resources. They begin to think that there is not enough data or evidence in the particular field about which they are asked to make a decision. Because of this, the subject becomes more critical of their initial thought process and focuses on both decision-consistent and decision-inconsistent sources, thus decreasing their level of selective exposure. For the group that had plentiful information, this abundance made them confident in their initial decision because they felt reassured that their decision topic was well supported by a large number of resources. Therefore, the availability of decision-relevant and irrelevant information surrounding individuals can influence the level of selective exposure experienced during the process of decision-making.
Selective exposure is prevalent in both single individuals and groups of people and can influence either to reject new ideas or information that is not commensurate with the original ideal. In Jonas et al. (2001), empirical studies were conducted in four different experiments investigating individuals' and groups' decision making. This article suggests that confirmation bias is prevalent in decision making. Those who find new information often draw their attention towards areas where they hold personal attachment. Thus, people are driven toward pieces of information that are coherent with their own expectations or beliefs as a result of selective exposure operating in action. Across all four experiments the findings generalized: confirmation bias was consistently present when participants sought new information and made decisions.
=== Accuracy motivation and defense motivation ===
Fischer and Greitemeyer (2010) explored individuals' decision making in terms of selective exposure to confirmatory information. Selective exposure posits that individuals make their decisions based on information that is consistent with their decision rather than information that is inconsistent with it. Research has suggested that confirmatory information search contributed to the 2008 bankruptcy of the Lehman Brothers investment bank, which then helped trigger the 2008 financial crisis. In the zeal for profit and economic gain, politicians, investors, and financial advisors ignored the mathematical evidence that foretold the housing market crash in favor of flimsy justifications for upholding the status quo. Researchers explain that subjects have a tendency to seek and select information using an integrative model. There are two primary motivations for selective exposure: accuracy motivation and defense motivation. Accuracy motivation explains that an individual is motivated to be accurate in their decision making, while defense motivation explains that one seeks confirmatory information to support one's beliefs and justify one's decisions. Accuracy motivation is not always beneficial within the context of selective exposure and can instead be counterintuitive, increasing the amount of selective exposure; defense motivation can lead to reduced levels of selective exposure.
=== Personal attributes ===
Selective exposure involves avoiding information inconsistent with one's beliefs and attitudes. For example, former Vice President Dick Cheney would only enter a hotel room after the television was turned on and tuned to a conservative television channel. When analyzing a person's decision-making skills, his or her unique process of gathering relevant information is not the only factor taken into account. Fischer et al. (2010) found it important to also consider the information source itself, that is, the person providing the information. Selective exposure research generally neglects the influence of such indirect decision-related attributes as physical appearance. In Fischer et al. (2010), two studies hypothesized that physically attractive information sources led decision makers to be more selective in searching and reviewing decision-relevant information. Researchers explored the impact of social information and its level of physical attractiveness. The data were then analyzed and used to support the idea that selective exposure existed for those who needed to make a decision. The more attractive an information source was, the more positive and detailed the subject was in making the decision. Physical attractiveness affects an individual's decision because the perception of quality improves. Physically attractive information sources increased the quality of consistent information needed to make decisions and further increased selective exposure to decision-relevant information, supporting the researchers' hypothesis. Both studies concluded that the effect of attractiveness is driven by a different selection and evaluation of decision-consistent information. Decision makers allow factors such as physical attractiveness to affect everyday decisions through the workings of selective exposure.
In another study, selective exposure was related to the individual's level of confidence. Individuals can control the amount of selective exposure they experience depending on whether they have low or high self-esteem. Individuals who maintain higher confidence levels reduce the amount of selective exposure. Albarracín and Mitchell (2004) hypothesized that those who displayed higher confidence levels were more willing to seek out information both consistent and inconsistent with their views. The phrase "decision-consistent information" describes the tendency to actively seek decision-relevant information. Selective exposure occurs when individuals search for information and show systematic preferences towards ideas that are consistent, rather than inconsistent, with their beliefs. On the contrary, those who exhibited low levels of confidence were more inclined to examine information that did not agree with their views. The researchers found that in three out of five studies participants showed more confidence and scored higher on the Defensive Confidence Scale, which they took as evidence supporting their hypothesis.
Bozo et al. (2009) investigated the anxiety of fearing death and compared it across various age groups in relation to health-promoting behaviors. Researchers analyzed the data using terror management theory and found that age had no direct effect on specific behaviors. The researchers had expected that a fear of death would yield health-promoting behaviors in young adults. When individuals are reminded of their own death, it causes stress and anxiety, but eventually leads to positive changes in their health behaviors. Their conclusions showed that older adults were consistently better at promoting and practicing good health behaviors, without thinking about death, compared to young adults. Young adults were less motivated to change and practice health-promoting behaviors because they used selective exposure to confirm their prior beliefs. Selective exposure thus creates barriers between the behaviors of different age groups, but there is no specific age at which people change their behaviors.
Though physical appearance will impact one's personal decision regarding an idea presented, a study conducted by Van Dillen, Papies, and Hofmann (2013) suggests a way to decrease the influence of personal attributes and selective exposure on decision-making. The results from this study showed that people do pay more attention to physically attractive or tempting stimuli; however, this phenomenon can be decreased through increasing the "cognitive load." In this study, increasing cognitive activity led to a decreased impact of physical appearance and selective exposure on the individual's impression of the idea presented. This is explained by acknowledging that we are instinctively drawn to certain physical attributes, but if the required resources for this attraction are otherwise engaged at the time, then we might not notice these attributes to an equal extent. For example, if a person is simultaneously engaging in a mentally challenging activity during the time of exposure, then it is likely that less attention will be paid to appearance, which leads to a decreased impact of selective exposure on decision-making.
== Theories accounting for selective exposure ==
=== Cognitive dissonance theory ===
Leon Festinger is widely considered the father of modern social psychology, as important a figure to that field as Freud was to clinical psychology and Piaget was to developmental psychology. He was considered to be one of the most significant social psychologists of the 20th century. His work demonstrated that it is possible to use the scientific method to investigate complex and significant social phenomena without reducing them to the mechanistic connections between stimulus and response that were the basis of behaviorism. Festinger proposed the groundbreaking theory of cognitive dissonance that has become the foundation of selective exposure theory today, despite the fact that Festinger was considered an "avant-garde" psychologist when he first proposed it in 1957. In an ironic twist, Festinger realized that he himself was a victim of the effects of selective exposure. He was a heavy smoker his entire life, and when he was diagnosed with terminal cancer in 1989, he was said to have joked, "Make sure that everyone knows that it wasn't lung cancer!" Cognitive dissonance theory explains that when a person either consciously or unconsciously realizes conflicting attitudes, thoughts, or beliefs, they experience mental discomfort. Because of this, an individual will avoid such conflicting information in the future, since it produces this discomfort, and will gravitate towards messages sympathetic to their own previously held conceptions. Decision makers are unable to evaluate information quality independently on their own (Fischer, Jonas, Dieter & Kastenmüller, 2008). When there is a conflict between pre-existing views and information encountered, individuals experience an unpleasant and self-threatening state of aversive arousal, which motivates them to reduce it through selective exposure. They begin to prefer information that supports their original decision and neglect conflicting information. Individuals then seek out confirmatory information to defend their positions and reach the goal of dissonance reduction. Cognitive dissonance theory holds that dissonance is a psychological state of tension that people are motivated to reduce (Festinger 1957). Dissonance causes feelings of unhappiness, discomfort, or distress. Festinger (1957, p. 13) asserted the following: "These two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other." To reduce dissonance, people add consonant cognitions or change evaluations of one or both conditions in order to make them more consistent mentally. Such experience of psychological discomfort was found to drive individuals to avoid counterattitudinal information as a dissonance-reduction strategy.
In Festinger's theory, there are two basic hypotheses:
1) The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce the dissonance and achieve consonance.
2) When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance (Festinger 1957, p. 3).
The theory of cognitive dissonance was developed in the mid-1950s to explain why people of strong convictions are so resistant to changing their beliefs even in the face of undeniable contradictory evidence. Dissonance occurs when people feel an attachment to and responsibility for a decision, position, or behavior. It increases the motivation to justify their positions through selective exposure to confirmatory information (Fischer, 2011). Fischer suggested that people have an inner need to ensure that their beliefs and behaviors are consistent. In an experiment that employed commitment manipulations, which affected perceived decision certainty, participants were free to choose attitude-consistent and attitude-inconsistent information to write an essay. Those who wrote an attitude-consistent essay showed higher levels of confirmatory information search (Fischer, 2011). The level and magnitude of dissonance also play a role. Selective exposure to consistent information is likely under certain levels of dissonance. At high levels, a person is expected to seek out information that increases dissonance, because the best strategy to reduce dissonance would then be to alter one's attitude or decision (Smith et al., 2008).
Subsequent research on selective exposure within dissonance theory produced weak empirical support until the theory was revised and new methods, more conducive to measuring selective exposure, were implemented. To date, scholars still argue that the empirical results supporting the selective exposure hypothesis are mixed. This is possibly due to problems with the methods of the experimental studies conducted. Another possible reason for the mixed results may be the failure to simulate an authentic media environment in the experiments.
According to Festinger, the motivation to seek or avoid information depends on the magnitude of dissonance experienced (Smith et al., 2008). It is observed that there is a tendency for people to seek new information or select information that supports their beliefs in order to reduce dissonance.
There exist three possibilities which will affect extent of dissonance (Festinger 1957, pp. 127–131):
Relative absence of dissonance.
When little or no dissonance exists, there is little or no motivation to seek new information. For example, when there is an absence of dissonance, the lack of motivation to attend or avoid a lecture on 'The Advantages of Automobiles with Very High Horsepower Engines' will be independent of whether the car a new owner has recently purchased has a high or low horsepower engine. However, it is important to note the difference between a situation when there is no dissonance and when the information has no relevance to the present or future behavior. For the latter, accidental exposure, which the new car owner does not avoid, will not introduce any dissonance; while for the former individual, who also does not avoid information, dissonance may be accidentally introduced.
The presence of moderate amounts of dissonance.
The existence of dissonance and consequent pressure to reduce it will lead to an active search of information, which will then lead people to avoid information that will increase dissonance. However, when faced with a potential source of information, there will be an ambiguous cognition to which a subject will react in terms of individual expectations about it. If the subject expects the cognition to increase dissonance, they will avoid it. In the event that one's expectations are proven wrong, the attempt at dissonance reduction may result in increasing it instead. It may in turn lead to a situation of active avoidance.
The presence of extremely large amounts of dissonance.
If two cognitive elements exist in a dissonant relationship, the magnitude of dissonance matches the resistance to change. If the dissonance becomes greater than the resistance to change, then the least resistant elements of cognition will be changed, reducing dissonance. When dissonance is close to the maximum limit, one may actively seek out and expose oneself to dissonance-increasing information. If an individual can increase dissonance to the point where it is greater than the resistance to change, he will change the cognitive elements involved, reducing or even eliminating dissonance. Once dissonance is increased sufficiently, an individual may bring himself to change, hence eliminating all dissonance (Festinger 1957, pp. 127–131).
The reduction in cognitive dissonance following a decision can be achieved by selectively looking for decision-consonant information and avoiding contradictory information. The objective is to reduce the discrepancy between the cognitions, but the specification of which strategy will be chosen is not explicitly addressed by the dissonance theory. It will be dependent on the quantity and quality of the information available inside and outside the cognitive system.
=== Klapper's selective exposure ===
In the early 1960s, Columbia University researcher Joseph T. Klapper asserted in his book The Effects of Mass Communication that audiences were not passive targets of political and commercial propaganda from mass media, but that mass media reinforces previously held convictions. Throughout the book, he argued that the media has only a small amount of power to influence people and, most of the time, merely reinforces our preexisting attitudes and beliefs. He argued that the media effects of relaying or spreading new public messages or ideas were minimal because there is a wide variety of ways in which individuals filter such content. Due to this tendency, Klapper argued that media content must be able to ignite some type of cognitive activity in an individual in order to communicate its message. Prior to Klapper's research, the prevailing opinion was that mass media had a substantial power to sway individual opinion and that audiences were passive consumers of prevailing media propaganda. However, by the time of the release of The Effects of Mass Communication, many studies had led to the conclusion that many specifically targeted messages were completely ineffective. Klapper's research showed that individuals gravitated towards media messages that bolstered previously held convictions set by peer groups, societal influences, and family structures, and that acceptance of these messages did not change over time when individuals were presented with more recent media influence. Klapper noted from his review of research in the social sciences that, given the abundance of content within the mass media, audiences were selective about the types of programming they consumed. Adults would patronize media that was appropriate for their demographics, and children would eschew media that was boring to them. So individuals would either accept or reject a mass media message based upon internal filters that were innate to that person.
The following are Klapper's five mediating factors and conditions to affect people:
Predispositions and the related processes of selective exposure, selective perception, and selective retention.
The groups, and the norms of groups, to which the audience members belong.
Interpersonal dissemination of the content of communication
The exercise of opinion leadership
The nature of mass media in a free enterprise society.
Three basic concepts:
Selective exposure – people keep away from communication that opposes their existing views.
Selective perception – when people are confronted with unsympathetic material, they either do not perceive it or reinterpret it to fit their existing opinion.
Selective retention – the process of categorizing and interpreting information in a way that favors one category or interpretation over another; people also simply forget the unsympathetic material.
Groups and group norms work as mediators. For example, a person can be strongly disinclined to switch to the Democratic Party if their family has voted Republican for a long time. In this case, the person's predisposition toward the political party is already set, so they neither perceive information about the Democratic Party nor change their voting behavior because of mass communication. Klapper's third assumption is the interpersonal dissemination of mass communication. If someone has already been exposed to an idea by close friends, which creates a predisposition toward it, that exposure will lead to an increase in exposure to mass communication and eventually reinforce the existing opinion. An opinion leader is also a crucial factor in forming one's predispositions and can lead someone to be exposed to mass communication. The nature of commercial mass media also leads people to select certain types of media content.
=== Cognitive economy model ===
This newer model combines the motivational and cognitive processes of selective exposure. In the past, selective exposure had been studied from a motivational standpoint. For instance, one explanation for selective exposure was that people felt motivated to decrease the dissonance they felt while encountering inconsistent information. They also felt motivated to defend their decisions and positions, and they achieved this goal by exposing themselves to consistent information only. The cognitive economy model, however, not only takes these motivational aspects into account but also focuses on the cognitive processes of each individual. For instance, this model proposes that people cannot evaluate the quality of inconsistent information objectively and fairly because they tend to store more of the consistent information and use it as their reference point. Thus, inconsistent information is often viewed with a more critical eye than consistent information. According to this model, the level of selective exposure experienced during the decision-making process also depends on how much cognitive energy people are willing to invest. Just as people tend to be careful with their finances, they are careful with cognitive energy, i.e. the time and effort they are willing to spend evaluating all the evidence for their decisions, and they are hesitant to waste it. Thus, this model suggests that selective exposure does not happen in separate stages; rather, it is a combined process of individuals' motivations and their management of cognitive energy.
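As a purely illustrative sketch (not the published model, with hypothetical parameter values), the budget intuition behind the cognitive economy model can be mimicked in a few lines of Python: an agent with a limited cognitive-energy budget processes a stream of items, attitude-inconsistent items are assumed to cost more to scrutinize, and the mix of items actually examined ends up skewed toward consistent ones.

import random

def toy_selective_exposure(n_items=20, energy_budget=10.0,
                           cost_consistent=0.3, cost_inconsistent=1.0,
                           seed=0):
    """Toy illustration only: items arrive at random; evaluating an
    attitude-inconsistent item is assumed to cost more cognitive energy
    than evaluating a consistent one, so once the budget runs low only
    the cheaper consistent items are still examined."""
    rng = random.Random(seed)
    examined = {"consistent": 0, "inconsistent": 0}
    energy = energy_budget
    for _ in range(n_items):
        is_consistent = rng.random() < 0.5               # item type, 50/50 mix
        cost = cost_consistent if is_consistent else cost_inconsistent
        if energy >= cost:                               # evaluate only if affordable
            energy -= cost
            examined["consistent" if is_consistent else "inconsistent"] += 1
    return examined

print(toy_selective_exposure())   # counts skew toward "consistent" items

The parameter values are arbitrary; the point is only that an asymmetric processing cost combined with a finite budget is enough to produce an exposure bias, which is the intuition the model describes.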
== Implications ==
=== Media ===
Recent studies have shown relevant empirical evidence for the pervasive influence of selective exposure on the population at large due to mass media. Researchers have found that individual media consumers will seek out programs to suit their individual emotional and cognitive needs. Individuals will seek out palliative forms of media during times of economic crisis to fulfill a "strong surveillance need", to decrease chronic dissatisfaction with life circumstances, and to fulfill needs for companionship. Consumers tend to select media content that exposes and confirms their own ideas while avoiding information that argues against their opinions. A study conducted in 2012 showed that this type of selective exposure affects pornography consumption as well. Individuals with low levels of life satisfaction are more likely to have casual sex after consuming pornography that is congruent with their attitudes while disregarding content that challenges their inherently permissive 'no strings attached' attitudes.
Music selection is also affected by selective exposure. A 2014 study by Christa L. Taylor and Ronald S. Friedman at the University at Albany, SUNY, found that mood congruence was affected by self-regulation of music mood choices. Subjects in the study chose happy music when feeling angry or neutral but listened to sad music when they themselves were sad. The choice of sad music given a sad mood was due less to mood-mirroring than to subjects having an aversion to listening to happy music that was cognitively dissonant with their mood.
Politics is more likely to inspire selective exposure among consumers than single exposure decisions. For example, in their 2009 meta-analysis of selective exposure theory, Hart et al. reported that "A 2004 survey by The Pew Research Center for the People & the Press (2006) found that Republicans are about 1.5 times more likely to report watching Fox News regularly than are Democrats (34% for Republicans and 20% of Democrats). In contrast, Democrats are 1.5 times more likely to report watching CNN regularly than Republicans (28% of Democrats vs. 19% of Republicans). Even more striking, Republicans are approximately five times more likely than Democrats to report watching "The O'Reilly Factor" regularly and are seven times more likely to report listening to "Rush Limbaugh" regularly." As a result, when the opinions of Republicans who only tune into conservative media outlets were compared to those of their fellow conservatives in a study by Stroud (2010), their beliefs were found to be more polarized. The same result was obtained for liberals as well. Due to this tendency toward selective exposure, current political campaigns have been characterized as extremely partisan and polarized. As Bennett and Iyengar (2008) commented, "The new, more diversified information environment makes it not only more feasible for consumers to seek out news they might find agreeable but also provides a strong economic incentive for news organizations to cater to their viewers' political preferences." Selective exposure thus plays a role in shaping and reinforcing individuals' political attitudes. In the context of these findings, Stroud (2008) comments, "The findings presented here should at least raise the eyebrows of those concerned with the noncommercial role of the press in our democratic system, with its role in providing the public with the tools to be good citizens." Public broadcasting, through its noncommercial mandate, is meant to counterbalance media outlets that deliberately devote their coverage to one political direction, thus driving selective exposure and political division in a democracy.
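The relative-likelihood multipliers in the quoted survey summary can be recomputed directly from the audience shares it cites; a minimal sketch in Python (the quoted multipliers are rounded):

fox_rep, fox_dem = 0.34, 0.20   # share reporting regular Fox News viewing
cnn_dem, cnn_rep = 0.28, 0.19   # share reporting regular CNN viewing

print(f"Fox News: Republicans are {fox_rep / fox_dem:.2f}x as likely as Democrats")   # ~1.7x
print(f"CNN:      Democrats are {cnn_dem / cnn_rep:.2f}x as likely as Republicans")   # ~1.5x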
Many academic studies on selective exposure, however, are based on the electoral system and media system of the United States. Countries with strong public service broadcasting, like many European countries, on the other hand, show less selective exposure based on political ideology or political party. In Sweden, for instance, there were no differences in selective exposure to public service news between the political left and right over a period of 30 years.
In early research, selective exposure originally provided an explanation for limited media effects. The "limited effects" model of communication emerged in the 1940s with a shift in the media effects paradigm. This shift suggested that while the media has effects on consumers' behavior, such as their voting behavior, these effects are limited and influenced indirectly by interpersonal discussions and the influence of opinion leaders. Selective exposure was considered one necessary function in the early studies of media's limited power over citizens' attitudes and behaviors. Political ads deal with selective exposure as well, because people are more likely to favor a politician who agrees with their own beliefs. Another significant finding comes from Stroud (2010), who analyzed the relationship between partisan selective exposure and political polarization. Using data from the 2004 National Annenberg Election Survey, analysts found that over time partisan selective exposure leads to polarization. This process is plausible because people can easily create or gain access to blogs, websites, chats, and online forums where those with similar views and political ideologies can congregate. Much of the research has also shown that political interaction online tends to be polarized. Further evidence for this polarization in the political blogosphere can be found in Lawrence et al.'s (2010) study of blog readership, which found that people tend to read blogs that reinforce rather than challenge their political beliefs. According to Cass Sunstein's book Republic.com, the presence of selective exposure on the web creates an environment that breeds political polarization and extremism. Due to easy access to social media and other online resources, people are "likely to hold even stronger views than the ones they started with, and when these views are problematic, they are likely to manifest increasing hatred toward those espousing contrary beliefs." This illustrates how selective exposure can influence an individual's political beliefs and subsequently their participation in the political system.
One of the major academic debates on the concept of selective exposure is whether selective exposure contributes to people's exposure to diverse viewpoints or to polarization. Scheufele and Nisbet (2012) discuss the effects of encountering disagreement on democratic citizenship. Ideally, true civil deliberation among citizens would be the rational exchange of non-like-minded views (or disagreement). However, many of us tend to avoid disagreement on a regular basis because we do not like to confront others who hold views strongly opposed to our own. In this sense, the authors question whether exposure to non-like-minded information has positive or negative effects on democratic citizenship. While there are mixed findings on people's willingness to participate in political processes when they encounter disagreement, the authors argue that the issue of selectivity needs to be examined further in order to understand whether there is truly deliberative discourse in the online media environment.
== See also ==
Algorithmic radicalization – Radicalization via social media algorithms
Attitude polarization – Tendency of a group to make more extreme decisions than the inclinations of its members
Cherry picking
Communal reinforcement – Social phenomenon
Echo chamber – Situation that reinforces beliefs by repetition inside a closed system
Filter bubble – Intellectual isolation through internet algorithms
Group polarization – Tendency of a group to make more extreme decisions than the inclinations of its members
Low-information rationality – Psychological problem-solving tendency
Media consumption – Usage of media
Reinforcement theory – Limited effects media model applicable within the realm of communication; the theory generally states that people seek out and remember information that provides cognitive support for their pre-existing attitudes and beliefs
Russell's teapot – Analogy devised by Bertrand Russell
Selection bias
Solipsism – Philosophical idea that only one's own mind is sure to exist
Truthiness – Quality of preferring concepts or facts one wishes to be true, rather than actual truth
Voldemort effect
== References ==
=== Bibliography ===
Festinger, Leon (1957). A Theory of Cognitive Dissonance. Row, Peterson and Company. ISBN 978-0-8047-0911-8. LCCN 57011351.